Serial Cardiovascular Magnetic Resonance Studies Prior to and After mRNA-Based COVID-19 Booster Vaccination to Assess Booster-Associated Cardiac Effects Background mRNA-based COVID-19 vaccination is associated with rare but sometimes serious cases of acute peri-/myocarditis. It is still not well known whether a 3rd booster vaccination is also associated with functional and/or structural changes in cardiac status. The aim of this study was to assess the possible occurrence of peri-/myocarditis in healthy volunteers and to analyze subclinical changes in functional and/or structural cardiac parameters following an mRNA-based booster vaccination. Methods and Results Healthy volunteers aged 18-50 years (n = 41; m = 23, f = 18) were enrolled for a CMR-based serial screening before and after the 3rd booster vaccination at a single center in Germany. Each study visit comprised a multi-parametric CMR scan, blood analyses with cardiac markers, markers of inflammation and SARS-CoV-2-IgG antibody titers, resting ECGs and a questionnaire regarding clinical symptoms. CMR examinations were performed before (median 3 days) and after (median 6 days) the 3rd booster vaccination. There was no significant change in cardiac parameters, CRP or D-dimer after vaccination, but a significant rise in the SARS-CoV-2-IgG titer (p < 0.001), with a significantly higher increase in females compared to males (p = 0.044). No changes in CMR parameters, including global native T1- and T2-mapping values of the myocardium, were observed. A single case of vaccination-associated mild pericardial inflammation was detected on T2-weighted CMR images. Conclusion There were no functional or structural changes in the myocardium after booster vaccination in our cohort of 41 healthy subjects. However, subclinical pericarditis was observed in one case and could only be depicted by multi-parametric CMR. INTRODUCTION Without any doubt, COVID-19 vaccines are a blessing and have prevented many millions of people worldwide from becoming very ill or even dying of a COVID-19 infection. Nevertheless, various reports showed that particularly mRNA-based COVID-19 vaccines are associated with rare but sometimes serious cases of acute peri-/myocarditis (1,2). We still need to better understand why some people show cardiac adverse events following vaccination. Magnetic resonance imaging (MRI) is the non-invasive gold standard in the diagnosis of myocardial inflammation (3). In this context, some impressive case reports presented severe myocardial damage on cardiovascular magnetic resonance imaging (CMR) even without functional impairment (4,5), and the true number of COVID-19 vaccine-associated cases of peri-/myocarditis may even be underreported since CMR is still not widely available. So far, only limited data are available regarding the frequency of vaccine-associated peri-/myocarditis following a 3rd booster vaccination for COVID-19 and regarding the value of CMR (6,7). Hence, the aim of this prospective study was (a) to assess the possible occurrence of peri-/myocarditis following an mRNA-based booster vaccination in healthy volunteers and (b) to analyze whether there are subclinical changes in functional and/or structural cardiac parameters possibly triggered by the preceding booster vaccination. METHODS Healthy volunteers aged 18-50 years were enrolled for a CMR-based serial screening before and after the 3rd booster vaccination in the CMR-Center of University Hospital Muenster, Germany.
Each study visit comprised a CMR scan, blood analyses with cardiac markers, markers of inflammation and SARS-CoV-2-IgG antibody titers, resting ECGs and a questionnaire regarding clinical symptoms. After their baseline examination, the study subjects received their 3rd booster dose of an mRNA-based COVID-19 vaccine, either mRNA-1273 (Moderna) or BNT162b2 (Pfizer-BioNTech), within 1-10 days. The follow-up examination was performed 4-10 days after booster vaccination. Cardiovascular magnetic resonance imaging was performed on a 1.5 T scanner (Philips Healthcare, Best, Netherlands) with a modified standard protocol used in clinical practice for suspected myocarditis (8). The protocol included high-resolution cine imaging, native T1- as well as T2-mapping, T2-STIR imaging and flow measurements. Contrast agent administration with additional late-gadolinium-enhancement (LGE) imaging was only intended if the native scan showed clear signs of active myocardial inflammation. Native T1- and T2-times were measured on three short-axis views using pixelwise maps. All subjects gave their written informed consent to the study. Skewed variables are expressed as median and interquartile range (IQR). Categorical variables are expressed as frequency with percentage. A p-value ≤ 0.05 was considered statistically significant. RESULTS Between November 2021 and January 2022, we prospectively examined 41 healthy individuals with a median age of 35 years before (median 3 days) and after (median 6 days) their 3rd booster vaccination. The subjects (56% male) had no history of any cardiac disease or prior COVID-19 infection. There was one loss to follow-up, because one participant experienced a COVID-19 infection in the interval between the third vaccination and the follow-up appointment. 30% of the subjects received mRNA-1273 (Moderna) and 70% received BNT162b2 (BioNTech) as their booster (Table 1). No association between the subjective burden of symptoms and the respective increase in SARS-CoV-2-IgG titer was observed. There was no pathological elevation and no significant change in serum markers such as CK, CK-MB, high-sensitivity troponin T, NT-proBNP, CRP or D-dimer before and after the 3rd booster vaccination (Table 2). As expected, there was a highly significant rise in the SARS-CoV-2-IgG titer (p < 0.001) in our study population. In addition, females showed a significantly higher increase in SARS-CoV-2-IgG titer (p = 0.044) compared to males. In general, the assessment of both functional and structural CMR parameters showed highly consistent and reproducible values when the respective CMR parameters measured before and after the booster vaccination were compared. In particular, there was no change in biventricular function and volumes, in global longitudinal strain or in myocardial mass (Table 2). Moreover, the global native T1- and T2-mapping values remained unchanged (988 vs. 983 ms for T1, and 50 ms each for T2). One female subject demonstrated a new "pericardial" T2-STIR-weighted hyperintensity in the basal to midventricular inferolateral pericardium and also a new pleural effusion (Figure 1). In the absence of any symptoms or signs of other diseases, we interpreted these findings as a vaccination-associated form of very mild pericardial inflammation. No known clinical characteristic or laboratory parameter could explain a predisposition to pericarditis in this case.
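As an illustration of the serial before/after comparison described above (a sketch only, not the authors' analysis code), the paired tests could be run in R as follows, assuming hypothetical vectors igg_pre and igg_post and a two-level factor sex for the 40 subjects with complete follow-up:

```r
## Paired comparison of titers before and after the booster; the data are
## skewed, so a rank-based test and median (IQR) summaries are used
wilcox.test(igg_pre, igg_post, paired = TRUE)

## Sex difference in the per-subject titer increase
delta <- igg_post - igg_pre
wilcox.test(delta ~ sex)

## Median and IQR of the increase by sex
tapply(delta, sex, quantile, probs = c(0.25, 0.50, 0.75))
```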
FIGURE 1 | Cardiac magnetic resonance (CMR) images of pericarditis. First row: T2-STIR-weighted short-axis images with the occurrence of pericardial hyperintensity as an indication of edema/mild pericardial inflammation (red arrow) and a new pleural effusion (green arrow) following the 3rd COVID-19 vaccination. In addition, corresponding T1 mapping without signs of myocardial impairment. Second row: corresponding images at baseline (prior to the 3rd COVID-19 vaccination) from the same subject without any pathological findings.

DISCUSSION Although the pivotal approval studies, sponsored by the respective pharmaceutical companies, did not show an increased risk of myocarditis following COVID-19 vaccination (9,10), today there is no doubt that mRNA-based COVID-19 vaccination can cause peri-/myocarditis, particularly in young males (1,11,12). It has also been shown that the risk of myocarditis is predominantly increased after the second vaccination dose. Assuming an autoimmune-mediated process, it is still unknown whether a 3rd booster vaccination is also associated with a non-negligible risk of peri-/myocarditis. To the best of our knowledge, our present study is the first to use multi-parametric serial CMR studies prior to and after mRNA-based COVID-19 booster vaccination to carefully assess potential booster-associated cardiac effects. Our major findings can be summarized as follows: First, the present data show that no relevant myocardial changes could be observed by CMR following the 3rd booster vaccination. Our data support current recommendations that booster vaccinations should not be withheld for fear of adverse cardiac events in healthy subjects aged <50 years (considering that there is no vaccination with mRNA-1273 (Moderna) in subjects <30 years, since cases of peri-/myocarditis were more frequently observed after Moderna vaccination in this age group). Second, subclinical pericarditis was observed in 1 out of 40 subjects following a 3rd booster vaccination, whereas no cases of myocarditis were observed in the present study. Importantly, multi-parametric CMR imaging was the only diagnostic modality that allowed depiction of such a mild pattern of pericardial inflammation. In line with these findings, a large descriptive study based on reports to VAERS (Vaccine Adverse Event Reporting System) found abnormal findings in only 17% of echocardiograms (in the cohort of myocarditis patients younger than 30 years), whereas abnormalities were reported in >70% of CMR examinations (1). Today, CMR is a well-known and robust modality for the non-invasive diagnosis of myocarditis that not only detects regional dysfunction but also depicts edema and other subtle structural changes based on elevated T1- and T2-mapping values and/or characteristic patterns of LGE (13,14). Accordingly, the Lake Louise criteria for the CMR-based diagnosis of myocardial inflammation were established in 2009 and updated in 2018 (15,16). Since the diagnosis of vaccine-associated myocarditis is important for symptom management, exercise recommendations, further cardiomyopathy monitoring and future (e.g., booster) vaccination decisions (3), physicians should be aware of the potential of multi-parametric CMR. Last but not least, our present data clearly show that a 3rd booster vaccination leads to a substantial increase in the SARS-CoV-2-IgG antibody titer, and interestingly to a higher increase in females compared to males.
Hence, gender-based differences should be evaluated more carefully in future studies. CONCLUSION The present serial CMR data support current recommendations regarding the safety of 3rd booster vaccinations, since no functional or subtle structural changes were observed in the myocardium, as long as current vaccination recommendations are followed. However, subclinical pericarditis was observed in one case and could only be depicted by multi-parametric CMR. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author. ETHICS STATEMENT Ethical approval was not required for this study on human participants because written consent of the participants was obtained and no identifiable images or data are presented in the study. The patients/participants provided their written informed consent to participate in this study.
Twenty-Four-Hour Movement Behaviors, Fitness, and Adiposity in Preschoolers: A Network Analysis The present study aimed to verify the associations between compliance with the 24-h movement behavior recommendations, fitness, and adiposity markers in preschoolers, considering the non-linear nature of these associations. The sample comprised 253 preschoolers. Preschoolers were assessed for anthropometric data and wore an accelerometer for seven consecutive days. Screen time and sleep duration were parent-reported in a face-to-face interview. The PREFIT test battery was used to assess physical fitness components (lower-body strength, cardiorespiratory fitness, and speed/agility). Descriptive statistics were used to describe the variables, and a network analysis was conducted to assess the emerging pattern of associations between the variables. Preschoolers' greatest compliance with recommendations was observed for physical activity, while the lowest compliance was observed for the screen time recommendation. Among children aged three years, only 2.2% complied with all recommendations; only 1.0% of the four-year-olds and 1.3% of the five-year-olds complied with all recommendations. The results of the network analysis and centrality measures emphasized that cardiorespiratory fitness (CRF) and compliance with movement behavior recommendations were the most critical variables to address in preschoolers, reinforcing the importance of intervention programs focused on intense activities. Introduction Childhood adiposity constitutes a global health problem [1]. Between 1980 and 2013, the prevalence of obesity among children increased by 47.1% [2], and according to the World Health Organization (WHO) [3], by 2025 the number of children with overweight and obesity could reach 75 million. This evidence becomes even more relevant when considering that the obesity epidemic has direct implications for public health; in addition, it leads to a considerable increase in health costs [4]. Obesity has been previously linked to at least 100 associated factors of different natures [5], including movement behaviors (physical activity (PA), sedentary behavior, and sleep). When analyzing the association between each movement behavior and adiposity, evidence has indicated a negative association between PA time and adiposity [6,7]. Previous studies have also shown that sleep duration in children is inversely associated with adiposity [8-10], while short sleep duration is associated with a higher body mass index (BMI) and waist circumference (WC) [11]. Moreover, regardless of PA levels, sedentary behavior, especially time exposed to screens, is associated with an increased risk of adiposity and low physical fitness levels [12]. Physical fitness indicates the ability to engage in daily physical activity (PA) without excessive fatigue, respond to environmental demands, and maintain and improve health [13]. Physical fitness components play a protective role in physical, emotional, mental, and social health in childhood [14], as cardiorespiratory fitness substantially benefits cardiovascular health [15]; strength training may improve body composition, with reductions in BMI [16,17]; and agility explains part of the variance in moderate-to-vigorous physical activity (MVPA) levels in childhood [18]. Indeed, physical fitness is only partly genetically determined and is greatly influenced by several other factors of different natures, such as movement behaviors [14].
These factors are non-linear and dynamically interconnected, which gives them the characteristics of a complex system. In this type of system, small changes in a single component can result in important non-deterministic patterns throughout the network of associations between the interconnected variables that comprise the system [19]. The WHO recommends that three- and four-year-old children accumulate at least 180 min of PA daily, of which at least 60 min should be MVPA; that they spend no more than 1 h on recreational screens; and that they obtain good-quality sleep for between 10 and 13 h a day. For five-year-olds, recent recommendations established that a healthy 24-h day should include at least 60 min of daily MVPA [1]. Tremblay et al. [20] stated that besides PA, children at this age should limit sedentary screen time to less than 2 h and sleep between 9 and 11 h per day. Nonetheless, data from 23 countries have shown that the majority of young children do not comply with all three recommended behaviors of the 24-h movement guidelines, and one in five do not comply with any of the three recommendations [21-23]. It is notable that Brazilian children seem to have one of the lowest levels of adherence among several countries [23]. Furthermore, compliance with the 24-h movement recommendations, independently or in combination, was significantly associated with a lower BMI z-score [24]. Considering that non-compliance with movement behavior recommendations, which is associated with low physical fitness, is a risk factor for adiposity, and that behaviors are established in early childhood and persist throughout life [23], it is necessary to investigate the associations between these variables in a critical phase of adiposity development, such as early childhood. Because the interconnections between these variables behave as complex non-linear systems, they can be better explored as a network of variables that forms emerging patterns, enabling the identification of the variables most sensitive for maintaining or modifying the entire system. This perspective allows the evaluation of the role of each variable within the system. By calculating the expected influence index, it is possible to determine the variables most sensitive to changes resulting from interventions [19], which can influence the entire system. Thus, exploring the associations between compliance with movement behavior recommendations and modifiable risk factors for the emergence of obesity, such as physical fitness and adiposity markers, from a complex-systems approach will provide a better understanding of the dynamic interrelationships between these variables, and it may also support the development of actions to promote healthy lifestyles in preschoolers. To the authors' knowledge, no study has explored these relationships from a network approach or hypothesized whether there is a non-linear relationship between these variables and, if so, how they are related from the perspective of complexity. Thus, this study aimed to verify the associations between compliance with the 24-h movement behavior recommendations, fitness, and adiposity markers in preschoolers, considering the non-linear nature of these associations. Materials and Methods This cross-sectional study used baseline data from the "Movement's Cool Project", which aimed to explore the associations between movement behaviors and health outcomes in low-income preschoolers.
The main project was approved by the (removed for blind review). Setting and Population Characteristics Preschoolers aged 3 to 5 years of both sexes registered in early childhood education centers (EECCs) in João Pessoa were eligible and invited to participate. João Pessoa is a large seaside city in the northeast of Brazil. We conveniently selected two EECCs in the coastal and central districts to be included in the study. All children aged 3 to 5 years with typical neurological development who were attending the two EECCs (256) were invited for assessments. Of those, 253 presented signed informed consent, and three did not complete the study's protocol. Thus, the final sample comprised 253 preschoolers. The majority (62.5%) of mothers and fathers were unemployed. Over 45% of the mothers and 54% of the fathers had finished 9th grade or lower. The Human Development Index (HDI) of the EECCs' area ranges from 0.4 to 0.5. Procedures Assessments were conducted during a four-month period (November/December 2019 and February/March 2020). All the preschool staff and the preschoolers' parents were informed about the research protocols and procedures in meetings with the project coordinator (one session in each school) and agreed to participate. Trained physical education teachers and graduate students conducted the assessments. The school administration provided the children's ages, birth dates, and parent contacts. Parents were invited to a meeting at the preschool and were interviewed individually. Social information and screen and sleep time were collected during this interview, and parents were also informed about the accelerometer used. Anthropometric data and resting beat-to-beat heart rate data were assessed at the preschools. Anthropometric Measures Height (cm) and body mass (kg) were assessed using a Holtain stadiometer and a weighing scale (Seca 708, Hamburg, Germany) while the participant was lightly dressed and barefoot. Two measures were taken, and if they differed, the average value was adopted. BMI was calculated by dividing body weight by the squared height in meters (kg/m2) [25], and BMI z-scores were calculated according to the WHO cut-offs [25]. Physical Activity PA was objectively assessed using accelerometry (Actigraph WGT3-X, Pensacola, FL, USA), which is validated for measuring PA in preschoolers [26]. The preschool teachers received verbal and written instructions for the accelerometer's correct use. The teachers were instructed to keep an activity diary of wear and non-wear time. Device initialization, data reduction, and analysis were performed using ActiLife software (version 6.13.3). Participants were advised to wear the accelerometer on the right hip for seven consecutive days (Wednesday morning to Tuesday afternoon). The children were allowed to remove the device during water-based activities and while sleeping (at night). During preschool, the teachers removed the accelerometers around 11 am for the children's baths and fastened them adequately afterward. During the week, parents received messages on their cellphones reminding them to keep the accelerometer on their children. Accelerometer data were analyzed as ActiGraph counts considering the vector magnitude and a 15-s epoch length [27]. Periods of ≥20 min of consecutive zero counts were defined as non-wear time and removed from the analysis. The first day of data was omitted from the analysis to avoid subject reactivity [28]. Only days with a minimum of 8 h of wear time were considered valid.
The mean wear time was 10.9 h (SD 1.4 h between children). The Butte et al. [29] cut points for light (820 to 3907 counts), moderate (3908 to 6111 counts), and vigorous (≥6112 counts) PA intensity were used. Sleep Time Parents reported the children's usual daily sleep hours. This approach has been validated against estimates from sleep logs and objective actigraphy in young children [24]. Parents were asked to recall the total average hours their child slept, as follows: "On weekdays, how many hours of sleep does your child usually have during the night?" and "On weekend days, how many hours of sleep does your child usually have during the night?". Questions were asked separately for weekdays and weekend days and combined for the analyses. Overall sleep hours were calculated as follows: ((sleep on weekdays × 5) + (sleep on weekend days × 2))/7. Screen Time Parents were also asked to recall the total average duration their child watched TV and used computers, smartphones, and video games. The questions were asked separately for weekdays and weekend days and combined for the analyses (Cronbach's α = 0.87). For screen time, the questions were: "How many hours during a weekday does your child usually watch TV or use computers, smartphones or electronic games?" and "How many hours during a weekend day does your child usually watch TV or use computers, smartphones or electronic games?". The same weighting procedure used for sleep hours was then applied. Physical Fitness Three reliable and feasible health-related fitness tests from the PREFIT battery [28] were administered, as follows: 1. Cardiorespiratory fitness (CRF) was measured using the PREFIT 20-m shuttle run test [28]. Participants completed the PREFIT 20-m shuttle run keeping in time with an audible "bleep" signal. The frequency of the sound signals was increased every minute by 0.5 km/h, increasing the intensity of the test, and the children were encouraged to run to exhaustion. Some adaptations of the original test were made by decreasing the initial speed (i.e., 6.5 km/h instead of the original 8.5 km/h). Evidence for the acceptable reliability and validity of the PREFIT 20-m shuttle run test for preschoolers has been provided previously [27]. 2. Speed-agility (4 × 10 m shuttle run test) consisted of running and turning as fast as possible between two parallel lines (10 m apart), covering a distance of 40 m. The best of two attempts was recorded (seconds). This test showed a good test-retest correlation in boys and girls (r = 0.86) [27]. 3. Lower-body muscular strength was assessed by the standing long jump. From a parallel standing position and with the arms hanging loose at the sides, participants were instructed to perform two jumps, as far as possible in the horizontal direction, landing on both feet. The test score (the best of the two trials) was the distance in centimeters, measured from the starting line to the point where the back of the heel landed on the floor, as previously proposed [27]. Statistical Analysis Descriptive statistics are presented as means and standard deviations for the assessed variables, and one-way ANOVA with Tukey's post hoc test was performed to compare differences between means by age. The prevalence of compliance by age was calculated for each of the recommendations.
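The weekday/weekend weighting and the compliance definitions above can be written compactly in R. The sketch below uses hypothetical column names and the WHO thresholds for 3-4-year-olds quoted in the Introduction (for 5-year-olds the thresholds would be at least 60 min of MVPA, less than 2 h of screens and 9-11 h of sleep); it is an illustration, not the study's processing script.

```r
## Hypothetical per-child data; column names are illustrative only
df <- data.frame(
  sleep_wd  = c(10.5, 9.0),  sleep_we  = c(11.0, 10.0),  # h/night
  screen_wd = c(2.0, 0.5),   screen_we = c(3.0, 1.0),    # h/day
  mvpa_min  = c(70, 45)                                  # min/day (accelerometer)
)

## Weekday/weekend weighting, as for the sleep and screen time questions
df$sleep  <- (df$sleep_wd  * 5 + df$sleep_we  * 2) / 7
df$screen <- (df$screen_wd * 5 + df$screen_we * 2) / 7

## Compliance flags for 3-4-year-olds (MVPA part of the PA recommendation)
comply_pa     <- df$mvpa_min >= 60
comply_screen <- df$screen <= 1
comply_sleep  <- df$sleep >= 10 & df$sleep <= 13
mean(comply_pa & comply_screen & comply_sleep)   # share meeting all three
```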
Considering the importance of the assessed variables for an adequate emergent pattern, a non-linear machine learning approach known as network analysis was used to explore the relationships between compliance with movement behaviors, physical fitness, and adiposity (BMI z-score and WC) according to age. This technique aims to establish relationships through multiple interactions between variables from graphical representations [30]. Given the cross-sectional nature of this study, an undirected weighted network analysis was used to estimate the relationships between nodes (the assessed variables) from a correlation matrix; when transformed, these were represented by positive or negative edges, i.e. the relationships between the different nodes, but with no arrowheads to indicate the direction of effect. The Fruchterman-Reingold algorithm was applied so that the data were presented in a relative space in which variables with stronger associations remained close together and less strongly associated variables were repelled from each other [31]. A pairwise Markov random field model was used to improve the accuracy of the partial correlation network, which was estimated from L1-regularized neighborhood regression. The least absolute shrinkage and selection operator (LASSO) was used for regularization, making the model more sparse [32]; the LASSO-regularized precision matrix, when standardized, represents the associations between the network variables. The EBIC parameter was set to 0.25 to create a network with greater parsimony and specificity [33]. The qgraph package for R (free version) was used to estimate and visualize the graph [33]. The thickness and color intensity of the lines represent the magnitude of the associations: blue lines represent positive associations, and red lines represent negative ones. Finally, the expected influence centrality index was calculated. The expected influence indicates the importance of a node for the structure and function of the network. This centrality measure consists of the sum of all edge weights that connect one node to the others and is used to assess the nature and strength of a variable's cumulative influence within the network, and thus the role it is expected to play in the activation, persistence, and remission of the network [34]. A positive expected influence means that the influence of that specific node in the network tends to increase for the acquisition of an adequate network pattern [34].
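To make the estimation pipeline just described concrete, a minimal R sketch using the qgraph package follows. It is not the authors' analysis script: vars is a hypothetical data frame holding one age group's variables, and the availability of the expected-influence column depends on the qgraph version installed.

```r
## Minimal sketch of the regularized network described above (not the
## study's code). 'vars' is a hypothetical data frame with one row per
## child: compliance flags, fitness scores, BMI z-score and WC.
library(qgraph)

cors <- cor_auto(vars)            # correlation matrix suited to mixed data
net <- qgraph(cors,
              graph = "glasso",   # LASSO-regularized partial correlations
              sampleSize = nrow(vars),
              tuning = 0.25,      # EBIC hyperparameter quoted in the text
              layout = "spring")  # Fruchterman-Reingold placement

## Node centrality; recent qgraph versions report expected influence here
centrality_auto(net)$node.centrality
```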
Results A total of 253 preschoolers (4.44 ± 0.76 years old) were assessed. Significant differences between ages were seen for body weight and body height (Table 1). PA was the behavior with the greatest compliance in preschoolers, while the lowest compliance was seen for the screen time recommendation. According to age, compliance with the three recommendations simultaneously was 2.2, 1.0, and 1.3% for 3-, 4-, and 5-year-olds, respectively (Figure 1). The network analysis highlighted the emergent patterns of the interrelationships between all the variables in the network. For the 3-year-old preschoolers, the emerging network (Figure 2) showed that compliance with the PA and sleep recommendations was negatively associated with sex and positively associated with CRF. At 4 years old, the emergent pattern highlighted a negative association between compliance with the PA recommendation and sex and a positive association between CRF and speed-agility. Moreover, compliance with the screen time recommendation showed a positive association with adherence to the sleep duration recommendation, and adherence to the sleep duration recommendation showed a positive association with CRF. At 5 years old, the emergent pattern indicated a positive association between compliance with the PA recommendation and both CRF and lower limb strength. Additionally, sex was negatively associated with the physical fitness components.
When analyzing the centrality indexes for each age group, we observed that for preschoolers aged 3 years, CRF presented the highest expected influence value (2.169). For the 4- and 5-year-old preschoolers, compliance with the PA recommendation was the variable with the highest value (2.222 and 1.309, respectively) (Table 2). Discussion While previous studies have investigated compliance with the 24-h movement recommendations in preschoolers [23,24], this study offers unique insight into compliance with these recommendations and modifiable risk factors for obesity, considering the non-linear nature of these associations. Our main results indicated low compliance with the recommendations for the three movement behaviors simultaneously and highlighted a lower prevalence of compliance compared with previous reports. For instance, previous results have shown a compliance prevalence of 93.1% for Australian children and 61.8% for Canadian children [35,36]. Nonetheless, it is important to note that the present study reported the prevalence according to age, allowing us to better identify age-related variability and the best period to implement intervention strategies. Moreover, the current results originated from socially vulnerable regions, and the participants were low-income preschoolers, who engage in less structured PA [37], show greater levels of sedentary behaviors [38,39], and belong to families in which high levels of sedentary behavior, especially screen time, are documented to appear earlier in life [40]. Although it is recognized that young children should be encouraged to play freely, it is also important to establish that fitness is an important mediator of a positive relationship between PA, motor competence, and a consequent healthy weight status. The current results highlight different associations between movement behaviors, fitness components, and adiposity according to age [38]. At age 3, compliance with the PA recommendation was negatively associated with compliance with sleep duration, sex, BMI, and lower limb strength, and it was positively associated with screen time. Although it is somewhat controversial, it is possible to argue that at 3 years of age these relationships are not yet well established [41]. While it is known that screen time in the first years of life is related to involvement in PA throughout the day, there is also evidence that increased screen time in parents is positively associated with more screen time in children [41,42], suggesting that other factors not considered in this study could determine the observed relationships. Reinforcing this, at 4 years old the observed associations became stronger, and associations that were previously negative became positive from that age onward. Our data indicate that adherence to the PA recommendation was positively associated with sex, BMI, WC, lower limb strength, CRF, and speed-agility.
Despite these positive results, adherence to the PA recommendation also showed a positive association with screen time, which could be credited to compensatory sedentary behavior, and a negative association with sleep duration. This negative association between PA and sleep duration at young ages has been shown previously. In fact, at young ages children's sleep patterns are not well established, and more PA, especially before bedtime, could negatively impact children's sleep [40]. One explanation for this finding is that bedtime can directly interfere with adiposity measures, which is in line with the study by Xiu et al. [11], who concluded that more frequent exposure to late sleep was associated with greater increases in adiposity measures in children aged 2 to 6 years, especially when the parents were obese. At 5 years old, compliance with the PA recommendation was positively associated with sleep duration, sex, BMI, lower limb strength, CRF, and speed-agility. Moreover, PA compliance was negatively associated with compliance with the screen recommendation and with WC. These results reinforce the importance of early-onset interventions for childhood obesity. The main strength of the current study was the novel approach used to explore the associations between compliance with movement behavior recommendations, fitness, and adiposity markers in preschool children, accounting for their intrinsic non-linear interrelationships, as seen in a real-life context. This approach allows the evaluation of the interactions between variables as a complex system based on measures of centrality [43]. In addition, keeping variables that have small effects in the complex system is also important, considering that a small effect can be responsible for important changes in the entire network [44]. However, notwithstanding the novelty of the present study, some limitations should be highlighted. Ensuring that children wear sensors at night is a real ecological barrier to objectively assessing sleep duration at such young ages. The variability of the fitness assessments in this age group is also an important factor that should be recognized; this diminished our ability to gather detailed insight into certain ages. Moreover, other possible correlates of movement behaviors, fitness, and adiposity markers, such as preschoolers' nutritional behaviors, could be explored in future studies. Finally, although the observed results reinforce the importance of early intervention in preschoolers to avoid the aggravation of modifiable risk factors for the development of obesity, given the study's cross-sectional nature we advocate longitudinal designs that lay out the developmental course of the observed associations and allow further exploration of the stability and the prediction of changes in the investigated networks. Conclusions This study emphasized CRF and compliance with the PA recommendation as the most critical variables to address in preschoolers, reinforcing the importance of interventions based on intense activities even in early childhood. Conflicts of Interest: The authors declare no conflict of interest.
Change points of global temperature We aim to address the question of whether or not there is a significant recent 'hiatus', 'pause' or 'slowdown' of global temperature rise. Using a statistical technique known as change point (CP) analysis we identify the changes in four global temperature records and estimate the rates of temperature rise before and after these changes occur. For each record the results indicate that three CPs are enough to accurately capture the variability in the data, with no evidence of any detectable change in the global warming trend since ∼1970. We conclude that the term 'hiatus' or 'pause' cannot be statistically justified. Introduction The idea of a recent 'hiatus', 'pause' or 'slowdown' of global temperature rise has received considerable public and scientific attention in recent years (Mooney 2013, Hawkins et al 2014). The time interval to which people refer differs but is usually taken as starting either in 1998 or 2001. Global temperature trends starting from these particularly warm years until the present are smaller than the long-term trend since 1970 of 0.16 ± 0.02°C per decade, though still positive. While close to 50 papers have already been published on the 'hiatus' or 'pause' (Lewandowsky et al, in press), the important question of whether there has been a detectable change in the warming trend (rather than just variability in short-term trends due to stochastic temperature variations) has received little attention. An appropriate statistical tool to answer this question is change point (CP) analysis (e.g. Carlin et al 1992). CP analysis allows us to determine the magnitude of changes in rates of temperature increase/decrease and estimate the timings at which these changes occur. Methods We present an approach to CP analysis known as CP linear regression (e.g. Carlin et al 1992). This approach models a time series as piecewise linear sections and objectively estimates where/when changes in data trends occur. The model forces each line segment to connect, avoiding discontinuities. Isolated pieces of trend line with sudden temperature changes between them (i.e. a 'stairway model') would not provide a physically plausible model for global temperature given the thermal inertia of the system. A change in climate forcing can instantly change the rate of warming, but cannot instantly change global temperature. To specify the model, consider a sample size of $n$, with response data $y_1, \dots, y_n$ observed at continuous times $x_1, \dots, x_n$ with $x_1 < x_2 < \dots < x_n$. In the simplest regression case of a single CP we have $y_i \sim N(\mu_i, \sigma^2)$, with $\mu_i = \alpha + \beta_1 (x_i - \gamma)$ for $x_i \le \gamma$ and $\mu_i = \alpha + \beta_2 (x_i - \gamma)$ for $x_i > \gamma$. Here $\alpha$ is the expected value of $y$ at the CP, and $\sigma^2$ is the residual variance. Most importantly, $\gamma$ is the time value where a change in rate occurs, and $\beta_1$ and $\beta_2$ are the slopes before and after the trend change. We place prior distributions on the parameters. Of most interest are the priors for the CP parameters $\gamma_l$ (the timings of the rate changes). These parameters are given uniform prior distributions over the entire range of the data, with the condition that they are ordered chronologically. Using Bayes' theorem allows the data to inform the model and update prior information to give posterior estimates for $\gamma_l$ and any other parameters of interest. We defer discussion of the technical details involved in this model to appendix A. The model as described takes the number of CPs $m$ as a fixed parameter. To determine the most appropriate value of $m$, and thus the number of CPs, we use the deviance information criterion (DIC; Spiegelhalter et al 2002). The DIC works by penalising the deviance (a measure of the quality of the model's fit to the data) by its complexity, determined by the effective number of parameters. In general, as model complexity increases the deviance will decrease, so adding this penalty selects parsimonious models that fit the data well but are not too complex. The DIC is negatively orientated (i.e. a smaller value indicates a better model). When running the models, we choose a range of values for $m$, e.g. from 0 to 5. Parameter convergence is monitored, models that do not show convergence are rejected, and from the remainder the DIC is used to decide on the most appropriate model for the data. The models were fitted in JAGS (just another Gibbs sampler; Plummer 2003). JAGS is a tool for the analysis of Bayesian hierarchical models using Markov chain Monte Carlo (MCMC) simulation. MCMC is a technique to approximate Bayesian posterior distributions for unknown parameters. The method simulates a Markov process (a type of random walk) whose distribution of values after a very large number of iterations is the required posterior distribution (see appendix B for more details on MCMC). JAGS offers cross-platform support and provides a direct interface to R using the package rjags (Plummer 2014). Bayesian models fitted using MCMC provide samples from the posterior distribution of the parameters. From these samples we can compute any summary statistics we require (e.g. means and standard deviations). When JAGS models are run, a number of samples are usually discarded as the algorithm converges to the true posterior, and a further number are thinned out to avoid autocorrelation in successive samples. We check convergence using standard methods in the R package coda (Plummer et al 2006). The R code required to run the CP models is provided in the supplementary material.
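As a concrete illustration, here is a minimal rjags sketch of the single-CP model just described. This is not the paper's supplementary code: year and temp are placeholders for one of the annual temperature series, and the vague priors mirror those given in appendix A.

```r
library(rjags)

cp_model <- "
model {
  for (i in 1:n) {
    # piecewise-linear mean joined at the change point gamma
    mu[i] <- alpha + beta1 * (x[i] - gamma) * step(gamma - x[i]) +
                     beta2 * (x[i] - gamma) * step(x[i] - gamma)
    y[i] ~ dnorm(mu[i], tau)
  }
  alpha ~ dnorm(0, 1.0E-6)       # vague priors (precision parameterization)
  beta1 ~ dnorm(0, 1.0E-6)
  beta2 ~ dnorm(0, 1.0E-6)
  gamma ~ dunif(x[1], x[n])      # CP uniform over the observed time range
  tau   ~ dgamma(0.001, 0.001)
  sigma <- 1 / sqrt(tau)
}
"

jm <- jags.model(textConnection(cp_model),
                 data = list(y = temp, x = year, n = length(year)),
                 n.chains = 3)
update(jm, 5000)                                  # burn-in
post <- coda.samples(jm, c("alpha", "beta1", "beta2", "gamma", "sigma"),
                     n.iter = 10000, thin = 10)
summary(post)                                     # posterior means, sds, quantiles
```

Candidate numbers of CPs can then be compared by fitting the corresponding multiple-CP models and computing the DIC for each, e.g. with dic.samples(jm, n.iter = 10000) from rjags.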
Results In figure 1 we present a CP analysis applied to four global annual temperature data series (HadCRUT4 2015, GISTEMP 2015, NOAA 2015, Cowtan and Way 2015). In all cases, by monitoring convergence and using the DIC, we find that the best fit is obtained with three CPs situated at about 1912, 1940 and 1970. The linear sections correspond to well-known stages of global temperature evolution, where the plateau from 1940 to 1970 is related to a near-balance of positive (greenhouse gas) and negative (aerosol) anthropogenic forcings, while the ∼0.6°C warming since 1970 has been attributed almost entirely to human activity (IPCC 2014). The three-CP fit is similar to a smooth nonlinear trend line obtained by singular spectrum analysis (Rahmstorf and Coumou 2011). Our approach aims to identify trend changes in the data series. A side effect of the CP model choice is that the first derivative of temperature (i.e. the rate of change) over time is discontinuous; at a CP the rate of temperature increase/decrease switches from one value to another. Physically this is not implausible. A sudden change in the rate of temperature change is far less unphysical than a sudden change in temperature itself; the former could be caused by a change in the rate of forcing. An alternative approach would be to use a spline model, and this would be particularly useful if our interests lay specifically in observing continuous rates of temperature change. Cubic splines are a popular choice (Eilers and Marx 1996), as these have two continuous derivatives.
However, the choice of continuous derivatives is entirely arbitrary; curves with two continuous derivatives are appealing as they appear smooth to the human eye. Neither approach is inappropriate; indeed, we could think of our CP analysis as a version of a spline model where the knots (the places where the spline pieces join) are the CPs. When comparing a cubic spline fit with our three-CP model fit we found the results to be very similar, so the additional degrees of freedom offered by the cubic spline appear unnecessary. We therefore simply apply the model that gives us direct access to the quantity of interest, namely detectable changes in linear rates of global temperature rise and their corresponding timings. Residual analysis demonstrates that our three-CP model captures the signal as well as a cubic spline model, i.e. the residuals after subtracting the model fit indicated that most of the climate signal is fully accounted for. Using fewer than three CPs leaves one with highly auto-correlated residuals, i.e. a remnant climate signal. Attempts to find a fourth CP fail with poor parameter convergence. Comparison of the three-CP model with a piecewise linear regression model that forces a trend change in 1998 and 2001 gives further validation to our results. For each record the comparison indicates only minor differences between the two models (figures 2 and 3). Further, the 95% confidence intervals for the fitted values (not shown) show strong overlap, indicating no notable difference in both cases. Based on these results it is unsurprising that we did not find a fourth trend change in these data.
Figure 1. Overlaid on the raw data are the mean curves predicted by the three CP model. The grey time intervals display the total range of the 95% confidence limits for each CP. The average rates of rise per decade for the three latter periods are 0.13 ± 0.04°C, −0.03 ± 0.04°C and 0.17 ± 0.03°C for HadCRUT, 0.14 ± 0.03°C, −0.01 ± 0.04°C and 0.15 ± 0.02°C for NOAA, 0.15 ± 0.05°C, −0.03 ± 0.04°C and 0.18 ± 0.03°C for Cowtan and Way, and 0.14 ± 0.04°C, −0.01 ± 0.04°C and 0.16 ± 0.02°C for GISTEMP.
Finally, to conclusively answer the question of whether there has been a 'pause' or 'hiatus' we need to ask: if there really was zero trend since 1998, would the short length of the series since this time be sufficient to detect a CP? To answer this, we took the GISTEMP global record and assumed a hypothetical climate in which temperatures have zero trend since 1998. The estimated trend line value for 1998 is 0.43°C (obtained by running the CP analysis on the original data up to and including 1998). Using this, we simulated 100 de-trended realizations for the period 1998-2014 that were centered around 0.43°C. We augmented the GISTEMP data with each hypothetical climate realization and ran the four-CP model on the augmented data sets. This allowed us to observe how often a fourth CP could be detected if the underlying trend for this period was in fact zero. Results showed that 92% of the time the four-CP model converged to indicate CPs at approximately 1912, 1940 and 1970 and a fourth CP after 1998. Thus, we can be confident that if a significant 'pause' or 'hiatus' in global temperature did exist, our models would have picked up the trend change with a high probability of 0.92. Conclusion CP regression analysis does not detect a significant change or 'pause' in global warming trends since ∼1970.
Consistent with this, recent intervals of rapid warming like 1990-2006 have not been interpreted as significant acceleration (Rahmstorf et al 2007). Recent variations in short-term trends are fully consistent with an ongoing steady global warming trend superimposed with short-term stochastic variations. This conclusion is consistent with modelling (Kosaka and Xie 2014, Risbey et al 2014) and statistical analysis (Foster and Rahmstorf 2011) suggesting that ENSO variability is the main physical reason for the observed variation in warming trends. Analysis of the extremes is also consistent with an ongoing warming trend. The hottest years on record were 2014, 2010 and 2005 (except in Cowtan and Way, where 2014 ranks second); for a steady warming trend of 0.16°C per decade and the observed variance of the residual, a new record is expected on average every four years (Rahmstorf and Coumou 2011). While it has been shown that global temperature over the 21st century will potentially demonstrate periods of no trend or even slight cooling in the presence of longer-term warming (Easterling and Wehner 2009), since 1976 not even a ten-year cold record has been set. We conclude that, based on the available data, the use of the terms 'hiatus' or 'pause' in global warming is inaccurate. Acknowledgments We are grateful to the two anonymous reviewers for their comments that improved the early version of the paper. This research is supported by the Programme for Research in Third Level Institutions (PRTLI) Cycle 5 and co-funded by the European Regional Development Fund, and the Science Foundation Ireland Research Frontiers Programme (2007/RFP/MATF281). Appendix A. Multiple CP model In the multiple CP case we have CPs $\gamma_l$, where $l = 1, \dots, m$, assuming $x_1 < \gamma_1 < \gamma_2 < \dots < \gamma_m < x_n$. We now have $m+1$ independent data segments, and $y_i$ is drawn from the probability density function of the $j$th segment for $j = 1, \dots, m+1$. Therefore we can write the model when $i$ is in the $j$th segment as $y_i \sim N(\alpha_j + \beta_j (x_i - \gamma_j), \sigma^2)$ for $j = 1, \dots, m$, and $y_i \sim N(\alpha_m + \beta_{m+1} (x_i - \gamma_m), \sigma^2)$ in the final segment, where $\alpha_j$ is the expected value of $y$ at the $j$th CP, $\{\alpha_j : j = 1, \dots, m\}$. We can then use Bayes' theorem to find the posterior distribution of the parameters given the data (equation (1)). For all models demonstrated in this paper, we use only vague N(0, 1E+6) prior distributions for the $\alpha$ parameters. Prior distributions are not necessary for $\beta_j$ where $j = 2, \dots, m$, as these can be deterministically evaluated since neighbouring segments must join together. We use $\beta_j = (\alpha_j - \alpha_{j-1}) / (\gamma_j - \gamma_{j-1})$ for $j = 2, \dots, m$. The only free slope parameters $\beta_1$ and $\beta_{m+1}$ are given vague N(0, 1E+6) prior distributions. The CP parameters $\gamma_l$ are given uniform prior distributions over the entire time range of the data, $\gamma_l \sim U(x_1, x_n)$, $l = 1, \dots, m$. When $m > 1$ we impose the condition that the $\gamma_l$ are ordered, so that $\gamma_1 < \gamma_2 < \dots < \gamma_m$. Appendix B. Markov Chain Monte Carlo (MCMC) We provide a summary of the use of MCMC methods in Bayesian analysis. For those interested in the more technical details involved in MCMC we recommend the introductory chapter of the Handbook of Markov Chain Monte Carlo (Brooks et al 2011). MCMC is a technique used to solve the problem of sampling from complicated distributions. It is particularly useful for the evaluation of posterior distributions in Bayesian models. From Bayes' theorem we have $p(\theta \mid X) \propto p(X \mid \theta)\, p(\theta)$, where $X$ is our data and $\theta$ is an unknown parameter or vector of parameters. MCMC algorithms work by drawing values for $\theta$ from the posterior distribution (the probability distribution for $\theta$ given the observed data).
The Markov chain component of the MCMC algorithm implies that a future value of our parameter(s) $\theta^{t+1}$ depends only on the current value $\theta^{t}$ and not on any of the previous values $\theta^{t-1}, \theta^{t-2}, \dots$. In some MCMC algorithms the parameter value will either be accepted or rejected. This acceptance/rejection step is governed by a specified probability rule. We always accept values that are 'good' (i.e. that are supported by the data). However, occasionally we accept values that are 'worse' than the current value (although still supported by the data). It is this strategy that allows us to sample a probability distribution for our parameter(s) rather than only finding a point estimate. The algorithm converges when the sampled parameter values stabilize; if the algorithm is efficient then it will converge towards the parameter's posterior probability distribution. Often, during the warm-up phase of the algorithm, samples are discarded. This is known as the burn-in period. Only samples beyond the burn-in period are used as samples from the posterior distribution. The Monte Carlo step implies that by sampling enough times from the posterior distribution we can get a good estimate for our parameter(s) by taking an average of all the samples. Two popular MCMC algorithms are Metropolis-Hastings (M-H) and the Gibbs sampler. In the M-H algorithm, samples are selected from an arbitrary 'proposal' distribution and are retained or not according to an acceptance rule. The Gibbs sampler is a special case in which the proposal distributions are the conditional distributions of single components of a parameter vector. JAGS (Plummer 2003) is a program that was developed to perform these MCMC methods on Bayesian statistical models. An appropriate likelihood for the data and priors for the unknown parameter(s) are specified in a model file, and JAGS generates MCMC samples from this model using Gibbs sampling. JAGS therefore produces sample values for our unknown parameter(s), and once we are happy that the algorithm has converged these samples can be used to obtain point estimates (means, medians) and uncertainties.
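To make the accept/reject logic concrete, the following self-contained R sketch implements a random-walk Metropolis sampler; log_post() is a toy log-posterior for illustration only, not part of the paper.

```r
## Random-walk Metropolis: propose a local move, accept with probability
## min(1, posterior ratio); otherwise keep the current value.
mh_sample <- function(log_post, theta0, n_iter = 10000, sd_prop = 0.5) {
  theta <- numeric(n_iter)
  theta[1] <- theta0
  for (t in 2:n_iter) {
    prop  <- rnorm(1, theta[t - 1], sd_prop)          # proposal
    log_r <- log_post(prop) - log_post(theta[t - 1])  # log acceptance ratio
    theta[t] <- if (log(runif(1)) < log_r) prop else theta[t - 1]
  }
  theta
}

## Toy example: posterior for a normal mean under a flat prior
y <- c(0.2, 0.5, 0.1, 0.4)
log_post <- function(mu) sum(dnorm(y, mu, 1, log = TRUE))
draws <- mh_sample(log_post, theta0 = 0)
mean(draws[-(1:1000)])   # discard burn-in, then take the Monte Carlo average
```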
Exotic Higgs decays in the golden channel The Higgs boson may have decay channels that are not predicted by the Standard Model. We discuss the prospects of probing exotic Higgs decays at the LHC using the 4-lepton final state. We study two specific scenarios, with new particles appearing in the intermediate state of the 4-lepton Higgs decay. In one, the Higgs decays to a Z boson and a new massive gauge boson, the so-called hidden photon. In the other, the Higgs decays to an electron or a muon and a new vector-like fermion. We argue that the upcoming LHC run will be able to explore new parameter space of these models that is allowed by current precision constraints. Employing matrix element methods, we use the full information contained in the differential distribution of the 4-lepton final state to extract the signal of exotic decays. We find that, in some cases, the LHC can be sensitive to new physics even when the correction to the total 4-lepton Higgs rate is of the order of a percent. In particular, for the simplest realization of the hidden photon with a mass between 15 and 65 GeV, new parameter space can be explored in LHC run-II. Introduction The particle with mass $m_h \approx 125.6$ GeV discovered at the LHC is so far perfectly compatible with being the Standard Model (SM) Higgs boson [1,2]. It is nevertheless conceivable that more in-depth studies will reveal non-standard properties. In particular, the Higgs may have exotic decay channels, that is, channels not predicted in the SM or predicted to occur with a negligible branching fraction. Many scenarios beyond the SM predict new Higgs decay channels, especially in the presence of new degrees of freedom with $m \lesssim m_h$. The existing LHC searches for exotic Higgs decays cover decays to invisible particles [3,4], to 4 photons [5] or 4 muons [6,7] via new intermediate bosons, to electron jets [8], and to long-lived neutral particles [9,10]. However, many more interesting final states and topologies exist [11-16]; see Ref. [16] for a comprehensive review. It should be noted that the current Higgs data can easily accommodate an order 20% branching fraction for exotic decays, and even more if the Higgs production cross section is enhanced and/or the Higgs couplings to SM matter are modified, see Fig. 1. Furthermore, the sizable Higgs production cross section at the LHC allows us to probe much smaller branching fractions: down to $\sim 10^{-5}$ currently, and down to $\sim 10^{-9}$ at a future 100 TeV collider, as long as the final state is experimentally clean. All this makes exotic Higgs decays an attractive direction in which to search for new physics. One very promising [13,16] signature for this kind of search is the so-called golden channel: the $4\ell$ final state, $\ell = e, \mu$, with two opposite-sign same-flavor lepton pairs. Thanks to its fully reconstructible kinematics, low background, and small systematic errors it was one of the early Higgs discovery channels despite the small branching fraction. At the same time, order-one new physics corrections to the SM rate in this channel can still be accommodated at this point. Assuming the Higgs production cross section is unchanged from the SM, the event rates reported in Refs. [17,18] yield the 95% CL limits on the additional partial decay widths quoted in Eq. (1); for new physics contributing to all sub-channels the limit is that of Eq. (2). Strictly speaking, the widths in Eq. (1) and Eq. (2) should be weighted by the efficiency of the experimental cuts, which may differ in the presence of new physics.
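Since the quoted limits are expressed as additional partial widths, it may help to recall how a partial width maps onto a branching fraction. A small illustrative R snippet follows; the SM total width value is the standard prediction for a 125 GeV Higgs, not a number taken from this paper.

```r
## Exotic branching fraction from an additional partial width (GeV units)
gamma_sm <- 4.07e-3                 # SM total Higgs width, ~4.07 MeV at 125 GeV
br_exotic <- function(gamma_new) gamma_new / (gamma_sm + gamma_new)
br_exotic(1e-3)                     # a 1 MeV exotic partial width -> Br ~ 0.20
```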
Figure 1: Global fit to the Higgs data in the presence of an exotic contribution to the Higgs decay width δΓ_h. The black curve assumes the Higgs production cross section and the relative branching fractions to SM matter are fixed at their SM values, which leads to the indirect limit Br(h → exotic) ≲ 18% at 95% CL. This limit takes into account the uncertainty on the SM prediction of the gluon-fusion production cross section, which we take as 14.7% [19]. Leaving the gluon-fusion production cross section as a free parameter in the fit (purple curve), and/or the Higgs branching fraction to b-quarks (blue curve), the limit is relaxed to Br(h → exotic) ≲ 30%. If all effective Higgs couplings to the SM are left free, then only the model-independent bound Br(h → exotic) ≲ 80% applies, based on the direct Higgs width measurement in CMS [20].
Apart from the event rate, the 4ℓ final state offers far more information in the form of the differential distribution in the decay angles and lepton pair invariant masses. In this paper we investigate the possibility of using this information to further constrain exotic decays of the Higgs boson. We employ the matrix element methods originally developed for the purpose of determining the structure of the Higgs couplings to the SM gauge bosons [21-23]. The starting point for our analysis is an analytic expression for the fully differential h → 4ℓ matrix element, with and without the new physics contribution. Using this matrix element, we construct a likelihood function for a data set containing a number N of 4-lepton events. This likelihood function is then used to estimate the statistical significance for discrimination between the SM and exotic decay hypotheses as a function of N. We study two simple models that can accommodate sizable exotic branching fractions in the golden channel without violating current experimental constraints. The first one contains a new light gauge boson X coupled to the SM via the hypercharge portal X_{µν}B^{µν} [24]. The kinetic mixing induces a coupling of X to the electromagnetic current, and also a mixing between the Z boson and X. As a result, the Higgs boson can decay as h → XZ when this is kinematically allowed. When both X and Z decay leptonically, this new Higgs decay mode contributes to the 4ℓ final state. Another model we study here contains a new heavy vector-like charged lepton E transforming as (1,1)_{-1} under the SM gauge group. After electroweak symmetry breaking, E mixes with one of the SM leptons via Yukawa couplings. As a result, one obtains non-diagonal couplings to the Z and Higgs bosons of the form Z_µ Ē_L γ^µ ℓ_L + h.c. and h Ē_R ℓ_L + h.c. These couplings mediate the h → Eℓ → Zℓℓ cascade decay that, for leptonic Z decays, again contributes to the 4-lepton final state. The paper is organized as follows. In Section 2 we describe our models in more detail. In Section 3 we review the matrix element methods used to extract information from the golden channel. Our results regarding the sensitivity of the golden channel to exotic Higgs decays are contained in Section 4.
Models
In this section we study two scenarios where new light degrees of freedom can modify Higgs decays in the golden channel. One has a new light vector field (the hidden photon) kinetically mixing with the SM hypercharge. The other has a new vector-like fermion with the quantum numbers of the SM right-handed electron that mixes via a Yukawa coupling with one of the SM charged leptons.
We determine the region of the parameter space of these models allowed by precision measurements, and we discuss the limits on the branching fraction for exotic Higgs decays imposed by these constraints.
Hidden Photon
The first model we study has the cascade decay h → ZX → 4ℓ mediated by a new neutral vector boson. Consider a massive abelian gauge field X_µ interacting with the SM only via the hypercharge portal kinetic mixing, parametrized by ε. Here θ_W is the Weinberg angle, and a non-standard normalization of the X kinetic term is introduced for future convenience. We assume ε ≪ 1 and determine the spectrum and couplings perturbatively in ε. The mass term m̃_X could be generated via the Stückelberg mechanism, or via an expectation value of a hidden sector Higgs field; in the latter case we will assume the corresponding hidden Higgs boson is heavy enough that it does not affect the hidden photon decays. We are interested in m̃_X ≲ m_Z, such that X can have a non-negligible effect on Higgs decays. To work out the model's phenomenology it is convenient to remove the kinetic mixing by redefining the hypercharge gauge field: B_µ → B_µ + ε cos^{-1}θ_W X_µ. The kinetic terms are now diagonal and canonically normalized, but after electroweak breaking the Z and X bosons mix via the mass terms, where m̃_Z = √(g_L² + g_Y²) v/2 and g_L, g_Y denote the SM gauge couplings of SU(2)_L × U(1)_Y. Diagonalizing the mass matrix requires a rotation of the two neutral vector fields. Mixing between the Z and the exotic boson is constrained by electroweak precision observables. In particular, it affects the mass of the Z boson and the Z boson couplings to matter, which are shifted relative to the SM couplings ĝ_{Z,f}. Using the constraints from the LEP-1 and SLC [25] and W mass [26] measurements, for m_X ≪ m_Z we find a bound on ε in agreement with Ref. [27]. For m_X below 9.3 GeV one gets a stronger limit, |ε| ≲ 10^{-3} [16,28], based on Υ(2S, 3S) → γµ⁺µ⁻ searches in BaBar [29]. We turn to the couplings of the hidden photon. After the field redefinition, the new vector field couples to the SM fermions through the electromagnetic current, up to O(m_X²/m_Z²) corrections, hence the name hidden photon. Assuming there are no other decay channels of X (in particular, no decays to other particles in the hidden sector), for m_X ≪ m_Z one finds Br(X → ℓ⁺ℓ⁻) ≈ 0.15 and Br(X → had) ≈ 0.55, while Br(X → νν̄) is negligible. Due to the mixing with the Z, the hidden photon also acquires a coupling to the Higgs boson, Eq. (10). Thus, all elements are in place for new contributions to the golden channel via the cascade decay h → ZX → 4ℓ. However, the coupling in Eq. (10) is suppressed not only by ε but also by m_X²/m_Z². For this reason, the maximum Br(h → ZX) does not exceed 2.5 × 10^{-4}, as can be read off from the right panel of Fig. 2. Currently, such a small branching fraction is not constrained by the observed h → 4ℓ event rate. Even scaling the present sensitivity to 300 fb^{-1} of data at the 14 TeV LHC, the rate information alone does not allow one to explore the parameter space that is not excluded by precision measurements, see the left panel of Fig. 2. Somewhat stronger limits can be obtained when input from the dilepton invariant mass distribution is used [16], but these limits are still weaker than the ones from electroweak precision tests. In Section 4 we will argue that the sensitivity can be further enhanced by using the full information contained in the differential distribution of h → 4ℓ decays.
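To make the Z-X mass mixing described above concrete, here is a minimal numerical sketch. The off-diagonal entry −ε tan θ_W m̃_Z² is an assumed textbook hidden-photon form rather than the paper's exact normalization, and the values of ε and m_X are illustrative.

```python
import numpy as np

eps = 1e-2                      # kinetic mixing (benchmark-like, illustrative)
mZ  = 91.19                     # GeV, the mass scale m~_Z
sw2 = 0.231                     # sin^2(theta_W)
tw  = np.sqrt(sw2 / (1 - sw2))  # tan(theta_W)

def zx_mixing_angle(mX):
    """Exact 2x2 mixing angle of the assumed (Z, X) mass-squared matrix,
    [[mZ^2, -eps*tw*mZ^2], [-eps*tw*mZ^2, mX^2]], and its small-eps approximation."""
    off = -eps * tw * mZ**2
    theta_exact  = 0.5 * np.arctan2(2 * off, mZ**2 - mX**2)
    theta_approx = off / (mZ**2 - mX**2)   # leading order in eps
    return theta_exact, theta_approx

for mX in (10.0, 30.0, 60.0):
    te, ta = zx_mixing_angle(mX)
    print(f"mX = {mX:4.0f} GeV:  theta = {te:+.2e} (exact)  {ta:+.2e} (leading order)")
```

The growth of the angle as m_X approaches m̃_Z mirrors the tightening of the precision bounds on ε noted above.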
A larger 4-lepton branching fraction can be obtained by modifying the model. One way is to introduce mixing between the SM and the hidden Higgs boson S, which subsequently decays as S → XX [30]. Here we consider another simple modification. One can introduce additional couplings between the hidden photon and the SM sector [31], built with the dual field strength B̃^{µν} = ε^{µνρσ}∂_ρ B_σ. The new terms in ∆L induce new couplings of the Higgs boson to the Z boson and the hidden photon, Eq. (12). In principle, the parameters ε_2 and ε_3 are not constrained by precision observables (although |ε_2| ≫ |ε| would be fine-tuned). Note that the CP-odd kinetic mixing term B̃_{µν}X^{µν} is a total derivative and has no physical consequences. Furthermore, the Higgs couplings in Eq. (12) are not suppressed by m_X²/m_Z², unlike in the vanilla model. For these reasons, this deformation of the hidden photon model allows for a sizable branching fraction for the h → XZ decay. In fact, the strongest constraints on ε_2 and ε_3 currently come from the h → 4ℓ searches. We note that for ε_{2,3} ≠ 0 the model also contains hXγ couplings, Eq. (13). These lead to an additional contribution to the h → 4ℓ decay, with an off-shell photon instead of the Z. The size of this contribution strongly depends on the experimental cuts on the final state leptons. We find that for the standard CMS cuts the photon-mediated contribution affects the new physics corrections to the 4ℓ event rate by an O(1) factor. Another consequence of the couplings in Eq. (13) is the presence of h → Xγ decays with an off-shell photon. The branching fraction is larger than that for h → XZ decays because the hXγ coupling is larger by a factor tan^{-1}θ_W, and because there is less phase space suppression. For example, for ε_2 = 0.02 or ε_3 = 0.02 one finds Br(h → Xγ) ≈ 10%. Therefore this version of the hidden photon model can also be probed in the h → ℓ⁺ℓ⁻γ final state. We postpone quantitative studies of the sensitivity of the h → ℓ⁺ℓ⁻γ channel to exotic Higgs decays to a future publication.
Vector-like Lepton
The other scenario we study in this paper is one where Higgs decays can proceed as h → Eℓ → Zℓ⁺ℓ⁻ → 4ℓ, mediated by a new charged lepton mixing with the SM leptons. Consider the SM extended by a vector-like fermion E transforming under the SM gauge group as (1,1)_{-1}, thus having the quantum numbers of the right-handed electron. We assume E mixes with one of the SM charged leptons via Yukawa couplings. The part of the Lagrangian giving rise to the vector-like and SM lepton masses contains three terms, where l_L = (ν_L, ℓ_L), and ℓ can be the electron, muon, or tau. The first term is the usual SM lepton Yukawa coupling. The second is a vector-like mass M_E for the heavy fermion. The last term leads to a mixing between the vector-like and the SM lepton after electroweak symmetry breaking. We assume Y v ≪ M_E and y v ≪ M_E, in which case the mass eigenstates of the lepton mass matrix can be worked out perturbatively in v. The mass matrix is diagonalized by rotating the left- and right-handed fields by small mixing angles; at leading order, only the left-handed charged leptons mix with the vector-like lepton. The mass of the heavy lepton is approximately M_E, and the mass of the SM lepton is, to a good approximation, set by its Yukawa coupling. Because E_L and ℓ_L have different quantum numbers under the EW group, the mixing affects the lepton couplings to the W and Z. At leading order one obtains non-diagonal lepton couplings to the W and Z bosons.
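The mixing angles can be illustrated numerically, assuming the mass matrix takes the triangular form implied by the three Lagrangian terms above; the values of Y and M_E below are illustrative, not benchmarks from the text.

```python
import numpy as np

v, ME = 246.0, 110.0     # GeV; ME is an illustrative heavy-lepton mass
y, Y  = 6.1e-4, 0.02     # y ~ muon Yukawa; Y is an illustrative mixing Yukawa

# Assumed mass matrix in the (lepton, E) basis; rows = left-handed, columns = right-handed fields.
M = np.array([[y * v / np.sqrt(2), Y * v / np.sqrt(2)],
              [0.0,                ME                ]])

U, s, Vh = np.linalg.svd(M)   # M = U @ diag(s) @ Vh, singular values in descending order
print(f"masses: heavy ~ {s[0]:.2f} GeV, light ~ {s[1] * 1e3:.1f} MeV")
print(f"left mixing alpha_L ~ {abs(U[0, 0]):.4f}  "
      f"(analytic Y*v/(sqrt(2)*ME) = {Y * v / np.sqrt(2) / ME:.4f})")
print(f"right mixing       ~ {abs(Vh[0, 0]):.1e}  (suppressed, as stated above)")
```

The small right-handed mixing in the output reflects the statement that, at leading order, only the left-handed leptons mix with E.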
These couplings allow the heavy lepton to decay as E → Zℓ or as E → Wν, and we assume here that E has no other decay channels. For M_E close to m_Z the branching fractions depend strongly on M_E (due to the phase space suppression), and Br(E → Zℓ) varies between 10% and 25% for M_E between 100 and 125 GeV. The Higgs boson also obtains non-diagonal couplings to the leptons. At the end of the day, for m_Z < M_E < m_h, the Higgs boson can cascade decay as h → Eℓ → Zℓ⁺ℓ⁻ → 4ℓ. The mass of the heavy lepton is constrained by direct LEP-2 searches, M_E ≳ 103 GeV [32]. So far the LHC experiments have not provided new limits on M_E, while a recast of generic multilepton searches [33] concluded that an SU(2) singlet E with M_E in the 100 GeV ballpark is not excluded [34]. Furthermore, the mixing angle α_L is constrained by electroweak precision tests. At second order in v, the couplings of the SM left-handed charged leptons to the W and Z are modified. The precise constraint on α_L depends somewhat on whether E mixes with e, µ, or τ. Using the electroweak precision measurements from LEP-1 and SLC [25] and the recent W mass measurements [26], we find 95% CL limits at the level of a few percent on the mixing angle; for the electron, α_L < 0.017. For a given M_E this translates into upper limits on the Yukawa coupling Y, and in consequence into upper limits on Br(h → Eℓ). The maximum allowed branching fractions in the electron, muon, and tau channels are shown in the left panel of Fig. 3. These limits turn out to be weak enough to allow an observable signal in the golden channel. In fact, the limits on the additional width in the golden channel in Eq. (1) already exclude a sizable chunk of otherwise viable parameter space. We conclude that vector-like leptons with mass M_E ≲ 125 GeV can be meaningfully probed by exotic Higgs decays.
Methods
We are interested in estimating the potential of LHC Higgs searches in the 4-lepton final state to constrain or discover exotic Higgs decays in the models described in Section 2. To distinguish the SM h → ZZ* → 4ℓ decays from those involving a new hidden photon or heavy fermion, we employ a simplified likelihood analysis, following closely the procedure used in Ref. [35] and described in more detail in [36,37]. The h → 4ℓ channel has a good signal-to-background ratio in the signal region m_{4ℓ} ≈ m_h, and is very well discriminated from the backgrounds due to the different shapes of the distributions of the various observables [38]. Of course, ideally one would include the dominant qq̄ → 4ℓ background as well in the discriminator in order to make a precise statement about the sensitivity. However, recent studies [21,22,38] indicate that the effects of including the background should be small enough that, for the present purposes, considering the signal only is sufficient. The starting point for our analysis is an analytic expression for the fully differential h → 2e2µ decay width. In the models we consider, the decay amplitude receives interfering contributions from the h → ZZ* → 2e2µ diagram and from diagrams with an intermediate hidden photon or vector-like charged fermion. We use it to build the probability density function (pdf) of the final-state observables. Here M_1, M_2 are the invariant masses of the opposite-sign same-flavor lepton pairs, and the decay angles Ω = (Θ, cos θ_1, cos θ_2, Φ_1, Φ) are defined in [22]. The λ represent the parameters of the models to be considered. To compute the matrix element in the hidden photon model we modify the results of [38] to include the new gauge boson contribution. The matrix element in the vector-like lepton model is computed in the FeynArts/FormCalc framework [39] using a custom model exported from FeynRules [40].
In all cases the interference between the new physics process and the SM is included. Throughout we fix the Higgs boson mass as m_h = 125.6 GeV. With the pdfs at hand we can write the likelihood of obtaining a particular data set containing N events as a product of the pdfs evaluated on the observables O = (m_h², M_1, M_2, Ω) of each event. We then construct a simple hypothesis test [41] in which, as our test statistic, we use the log-likelihood ratio Λ of the two hypotheses. To estimate the expected significance for discriminating between two different hypotheses, we take one hypothesis as true, say λ_1, and generate a set of N events under λ_1. We then construct Λ for a large number of pseudo-experiments, each containing N events, in order to obtain a distribution for Λ. We repeat this exercise taking λ_2 to be true and obtain a different distribution for Λ. With the two distributions for Λ in hand we can compute an approximate significance: denoting the distribution with negative mean as f and the distribution with positive mean as g, we find a value Λ̂ at which the tail areas of the two distributions coincide. We then interpret this probability as a one-sided Gaussian p-value, which can be used to compute the expected significance for discriminating between hypotheses (see [35] for more details). For a simple hypothesis test, this Gaussian approximation is often sufficient [41]. This procedure is repeated many times for a range of numbers of events N to obtain the significance as a function of N for each hypothesis. In our simplified framework we have also neglected any detector or production effects, but these effects are small and are not needed for the level of precision we aim for in this study [21,22]. For the particular models considered here, λ corresponds to the mass of the new particle and the model parameters determining its couplings to the Higgs and leptons. Specifically, for the hidden photon model λ = (m_X, ε, ε_2, ε_3), and for the vector-like lepton model λ = (M_E, Y). Our aim is to estimate whether the golden channel can probe the parameter space of these models that is not excluded by precision tests and direct searches. Various hypothesis tests to this end are conducted in the following section.
Results
In this section we present our results concerning the sensitivity of the golden channel to exotic Higgs decays for the models described in Section 2. To this end we pick a number of benchmark points near the boundary of the parameter space region allowed by current constraints. We employ the matrix element approach described in Section 3, where in our hypothesis tests we always compare our new physics model to the SM. For a given number N of events in the h → 2e2µ channel, we perform 1000-10000 pseudo-experiments to estimate the discriminating power between the SM and hidden photon mediated Higgs decays. We repeat this procedure over a range of N to obtain an estimate of the discriminating power as a function of the number of events. For these pseudo-experiments we use the full available information contained in the differential distribution of the 4-lepton final state, except for the total integrated event rate; we refer to this as the shape observables. The motivation for separating out the total rate is that it is less robust as a discriminator, as it can be affected by physics that has nothing to do with exotic decays, for example by a modification of the effective Higgs coupling to gluons.
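The pseudo-experiment procedure can be sketched in a few lines with a one-dimensional Gaussian toy standing in for the full h → 2e2µ pdf; the distributions, the shift between hypotheses, and the event counts are placeholders, not the paper's matrix elements.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

p_sm = norm(0.0, 1.0)    # toy stand-in for p(O | SM)
p_np = norm(0.25, 1.0)   # toy stand-in for p(O | SM + exotic); the shift is a placeholder

def llr(events):
    """Log-likelihood ratio Lambda for one pseudo-experiment."""
    return np.sum(p_np.logpdf(events) - p_sm.logpdf(events))

def expected_significance(N, n_pseudo=2000):
    lam_sm = np.array([llr(p_sm.rvs(N, random_state=rng)) for _ in range(n_pseudo)])
    lam_np = np.array([llr(p_np.rvs(N, random_state=rng)) for _ in range(n_pseudo)])
    # Gaussian approximation: the equal-tail crossing of the two Lambda distributions
    # corresponds to a one-sided significance of (mean separation)/(sum of widths).
    return (lam_np.mean() - lam_sm.mean()) / (lam_sm.std() + lam_np.std())

for N in (50, 200, 800):
    print(f"N = {N:3d} events:  {expected_significance(N):.2f} sigma")  # grows ~ sqrt(N)
```

The roughly √N growth of the printed significance is the behavior used below to extrapolate results to larger event samples.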
We find that the discriminating power between the pure SM and hidden photon hypotheses comes mostly from the M_1 and M_2 distributions, whereas angular variables add some discriminating power only in the extended hidden photon model of Eq. (11). On the other hand, angular variables are important for separating the signal from the non-Higgs SM background. For a number of benchmark points we also show the results of combining the shape and total rate observables. To reduce computing time, for large N we simply extrapolate our results obtained at lower N, assuming the significance grows as √N. With these tools, we estimate the number of h → 2e2µ events required to exclude our benchmark points at a given confidence level. Although we do not perform simulations in the h → 4µ and h → 4e channels, we expect that, after combining all 4-lepton channels, the sensitivity will correspond roughly to doubling the number of h → 2e2µ events. To translate between the number of events and the LHC luminosity, we assume a 27% efficiency for reconstructing 4-lepton Higgs decays (the CMS value in LHC run-I [18]). Thus, for example, 300 fb^{-1} at the 14 TeV LHC corresponds to roughly 275 h → 2e2µ and 600 h → 4ℓ expected events, where we take σ(pp → h) ≈ 56 pb and Br(h → 4ℓ) = 1.3 × 10^{-4} [19]. Table 1: Left: benchmark points for the hidden photon model. The 4-lepton event rate relative to the SM one, R = Γ(h → 4ℓ)/Γ(h → 4ℓ)_SM, was computed using MadGraph 5 [42] after imposing the standard CMS cuts: p_{T,ℓ} > 10 GeV, |η_ℓ| < 2.5, and M_1 > 50 GeV, M_2 > 12 GeV for opposite-sign, same-flavor lepton pairs. For the m_X = 10 GeV benchmark a weaker cut, M_2 > 5 GeV, is used, as the standard one cuts away most of the signal. For the benchmarks with non-zero ε_2 or ε_3 the rate includes the contribution of diagrams with an intermediate off-shell photon. Right: the same for the vector-like lepton mixing with the SM muon. We start with the vanilla version of the hidden photon model, which corresponds to setting ε_2 = ε_3 = 0 in Eq. (11). We fix ε = 10^{-2} for all benchmarks and consider several values of the hidden photon mass in the range 10-60 GeV. The benchmark points we studied are summarized in Table 1 and our results concerning the LHC sensitivity are shown in Fig. 4. It is worth noting that for these points the total h → 4ℓ rate is enhanced merely by a few percent compared to the SM. As this is within the uncertainty on the SM Higgs production cross section, the total rate information is not useful to discriminate between the SM and new physics in this case. Nevertheless, taking advantage of the full kinematic information contained in the 4-lepton event leads to a good sensitivity to new physics. We find that the parameter space of the hidden photon model allowed by electroweak precision observables can be probed already in the coming run-II of the LHC. In particular, assuming 300 fb^{-1} at 14 TeV will be collected, m_X in the range 15-65 GeV can be probed for ε near the boundary of the region allowed by precision observables. A further increase in sensitivity can be obtained in the high-luminosity phase of the LHC (assuming 3000 fb^{-1} at 14 TeV) or at a future 100 TeV collider. In particular, the reach can be extended down to m_X = 10 GeV, below which the strong bounds on the kinetic mixing from B-factories make it difficult to probe the simplest hidden photon model at high-energy colliders.
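The luminosity-to-events translation quoted above reduces to simple bookkeeping; the one-half share of 2e2µ among all 4ℓ events is our rounding of the quoted numbers, not a value given in the text.

```python
lumi    = 300e3    # pb^-1, i.e., 300 fb^-1 at 14 TeV
sigma_h = 56.0     # pb, sigma(pp -> h)
br_4l   = 1.3e-4   # Br(h -> 4l)
eff     = 0.27     # 4-lepton reconstruction efficiency (CMS run-I value)

n_4l = lumi * sigma_h * br_4l * eff
print(f"expected h -> 4l events:    {n_4l:.0f}")        # ~ 590, i.e., 'roughly 600'
print(f"expected h -> 2e2mu events: {0.5 * n_4l:.0f}")  # ~ 295; the text quotes ~275,
                                                        # a 2e2mu share slightly below 1/2
```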
Note that the case with m_X + m_Z > m_h, where the strictly 2-body decay h → ZX is forbidden, can also be probed to some extent. In this case, the kinematic suppression due to the Z boson being strongly off-shell is partially offset by the fact that the hZX coupling increases with m_X. On the other hand, for m_X approaching m_Z the electroweak precision bounds on ε become stronger (which is why, for the benchmark point with m_X = 60 GeV, we had to choose a slightly smaller value of ε). For this reason, in the allowed parameter space, the new physics corrections in the h → 4ℓ channel quickly become unobservable for m_X ≳ 70 GeV. Finally, we estimate the reach in the kinetic mixing parameter: at the most favorable hidden photon mass, m_X ≈ 30 GeV, the high-luminosity LHC will be able to exclude ε down to 0.007. The bottom line is that the LHC is capable of exploring new interesting regions of the parameter space, even in the simplest version of the hidden photon model. The next step is to go beyond the simplest hidden photon model and allow ε_2 ≠ 0 and/or ε_3 ≠ 0 in Eq. (11). As explained previously, this extended model allows us to increase the new physics corrections to the h → 4ℓ rate, which greatly improves the sensitivity at the LHC. In fact, the strongest constraints on this model are currently provided by the LHC Higgs measurements; in particular, for m_X = 30 GeV we find ε_2 ≲ 0.015 and ε_3 ≲ 0.02. In the left panel of Fig. 5 we show the results for a couple of scenarios with m_X = 30 GeV. Our benchmark points are chosen such that the h → 4ℓ rate is significantly enhanced, by 20-30%, which is not far from the current upper limit. For this reason the rate information alone should be enough to exclude these scenarios in LHC run-II. Taking advantage of the shape information further improves the sensitivity. We find that also in this case the shape information has a much stronger discriminating power, as can be clearly seen in the right panel of Fig. 5. Combining the two, the LHC experiments should be able to comfortably exclude our two benchmarks already after the first year of the coming LHC run. We note that the discriminating power is increased thanks to the hXγ couplings present in the extended model, see Eq. (13). This is partly because the diagrams with an off-shell photon increase the new physics contribution to the h → 4ℓ rate. On top of that, the photon contributions lead to larger shape differences with respect to the SM, primarily in the invariant mass distributions. See [23] for a study of this effect in a different context. Another consequence of the hXγ coupling is that the LHC is sensitive to larger values of m_X, which would be kinematically suppressed if only the hZX coupling were present. This allows the golden channel to probe a larger range of hidden photon masses than might naively be expected, even up to m_X ∼ 100 GeV. Finally, we point out that the golden channel is sensitive not only to the magnitudes but also to the signs of ε_2 and ε_3 relative to that of ε. Indeed, we find that in the parameter space regions where there is sensitivity to exotic Higgs decays, we can discriminate between the positive and negative ε_2 or ε_3 hypotheses. The final exotic Higgs scenario we study here is the vector-like lepton mixing with the SM muon. The benchmark points we analyzed are summarized in Table 1, and the results are shown in Fig. 6. We find that in this case the LHC sensitivity is much weaker than in the hidden photon case if only the shape observables are used, see the left panel of Fig. 6. We also see that the sensitivity quickly decreases as the mass M_E approaches the Higgs boson mass.
One reason is that Br(h → Eµ) is kinematically suppressed for M_E ≈ m_h. On top of that, the muon emitted in the h → Eµ decay is very soft and therefore often does not pass the experimental cuts. Finally, the differential spectrum is much more similar to the SM case than in the hidden photon model. All in all, discriminating the vector-like lepton model using shape observables and standard CMS cuts is possible only when large statistics are accumulated, and only in the narrow mass window 103 GeV ≲ M_E ≲ 115 GeV. The sensitivity may be improved, though, by applying additional cuts that target this specific model. In particular, the invariant mass of the 3 leptons coming from the E decay should reconstruct to M_E. The combinatorial background can be reduced by constructing m_{3ℓ} out of the 3 hardest leptons in the event, since the muon from the h → Eµ decay is typically soft. On the other hand, the total event rate is in this case a much stronger discriminator, as shown in the right panel of Fig. 6. Thus, by simply counting the number of events in the 2e2µ and 4µ channels, we can explore new regions of the M_E-α_L parameter space for 103 GeV ≲ M_E ≲ 115 GeV. In particular, for M_E = 103 GeV we estimate the LHC experiments can probe α_L down to ∼ 0.007. Observing an excess of 4µ and 2e2µ events would be a motivation to apply model-specific cuts to isolate the vector-like lepton signal. Similar comments apply to a vector-like lepton mixing with the SM electron, except that then an excess is expected in the 4e and 2e2µ channels. Finally, we note that E could mix predominantly with the τ lepton, which is in fact the most natural possibility from the point of view of models where vector-like leptons play a role in generating the SM fermion mass hierarchies. Thus, exploring also the 2ℓ2τ final state would be advantageous in this context. Figure 6: Left: LHC sensitivity using the shape of the 4-lepton distribution alone for the vector-like lepton points, labeled by the values of (M_E, α_L). The dots indicate the results obtained by conducting pseudo-experiments, which are then extrapolated to larger N assuming the significance grows as √N. Right: comparison of the discrimination power using the shape (dashed), rate (dotted), and combined shape+rate information (solid) for the benchmark point with M_E = 103 GeV, α_L = 0.015.
Summary
In this paper we studied the prospects of constraining exotic Higgs decays using the 4-lepton final state. We picked two scenarios of more general interest: a hidden photon mixing with the SM via the hypercharge portal, and a vector-like charged lepton mixing with one of the SM leptons via Yukawa interactions. Using the rate information only, the LHC run-II is sensitive to exotic decays if the new contributions to the total h → 4ℓ rate are larger than 10% of the SM rate. This is possible to arrange in the vector-like lepton scenario, and also in the non-minimal hidden photon scenario in the presence of direct Higgs interactions with the hidden sector. The main point of this paper is to argue that taking advantage of the full information contained in the differential distribution of the 4-lepton final state dramatically improves the LHC sensitivity. To extract that information, we employed the matrix element methods previously developed in the context of measuring the coupling strength and the tensor structure of Higgs interactions with the SM gauge fields.
These methods can be carried over to our case in a straightforward way, as exotic Higgs decays may readily affect the shape of the 4-lepton differential distribution. The shape information is essential in constraining the minimal version of the hidden photon model, where corrections to the total h → 4ℓ rate are not expected to exceed a few percent. We find that for hidden photon masses between 15 and 65 GeV, the run-II of the LHC will be able to probe new parameter space of the hidden photon model that is currently allowed by all precision constraints. Likewise, in the non-minimal hidden photon scenario, the shape information allows one to significantly improve the sensitivity, such that large chunks of the allowed parameter space can be explored already in the first year of the upcoming LHC run.
New quantum codes from constacyclic codes over the ring R_{k,m}
For any odd prime p, we study constacyclic codes of length n over the finite commutative non-chain ring R_{k,m} = F_{p^m}[u_1, u_2, ..., u_k]/⟨u_i² − 1, u_i u_j − u_j u_i⟩_{i≠j=1,2,...,k}, where m, k ≥ 1 are integers. We determine the necessary and sufficient condition for these codes to contain their Euclidean duals. As an application, from the dual-containing constacyclic codes, several MDS codes and new and better quantum codes, compared to the best known codes in the literature, are obtained.
Introduction
Quantum computing is a fascinating topic of current research, with the potential to solve certain hard problems faster than classical computers. Quantum error-correcting codes are used in quantum computers to protect quantum information from the noise that occurs during communication. After the pioneering work of Shor [35] in 1995, Calderbank et al. [5] proposed a prominent method for obtaining quantum error-correcting codes from classical error-correcting codes. The primary goal of this area is to construct better quantum codes using state-of-the-art techniques. In this connection, many significant works have been reported in the literature providing better quantum codes over finite fields, see [14,15,16,26,32]. It is also observed that linear (e.g., cyclic, constacyclic) codes over finite non-chain rings have produced a large number of good quantum codes [1,2,8,11,13,19,12,27,29,30]. In 2015, Ashraf and Mohammad [1] studied quantum codes from cyclic codes over F_p + vF_p. Meanwhile, Dertli et al. [8] presented some new binary quantum codes obtained from cyclic codes over F_2 + uF_2 + vF_2 + uvF_2, and then Ashraf and Mohammad [2] generalized their work to the ring F_q + uF_q + vF_q + uvF_q to derive new non-binary quantum codes. There are many articles in which good quantum codes are obtained from cyclic codes over different finite rings, see [11,13,19,23,31,33,32,34]. On the other side, recently, Gao and Wang [12], Li et al. [27], and Ma et al. [29,30] considered constacyclic codes over finite non-chain rings and obtained many new and better codes compared to the known codes. Based on the above studies, one can say that constacyclic codes are a great resource for supplying good quantum codes over finite rings. Hence, it is logical to study constacyclic codes over new and different non-chain rings to construct more new quantum codes. Towards this, we study constacyclic codes over the family of commutative non-chain rings R_{k,m} = F_{p^m}[u_1, u_2, ..., u_k]/⟨u_i² − 1, u_i u_j − u_j u_i⟩_{i≠j=1,2,...,k}, where p is an odd prime and m, k are positive integers. Note that for k = 2, m = 1 the constacyclic codes over R_{k,m} were studied in [21]. Further, the authors constructed quantum codes based on cyclic codes over R_{2,1} in [22] and over R_{3,1} in [19], respectively. Therefore, the present article is a continuation and generalization of our previous studies in the context of constructing new quantum codes. The main objective of the article is two-fold: first, we characterize the constacyclic codes over R_{k,m} (Section 3), and then, by utilizing these structures, we obtain new and better quantum codes (Section 4).
To do so, we define a new Gray map ψ, different from the usual canonical map, which is capable of producing many quantum MDS codes (Table 1) and quantum codes better than the best-known codes in the literature (Tables 3-5). The presentation of the article is organized as follows: In Section 2, results related to finite rings, along with some basic definitions and properties, are discussed. Section 3 gives the structure of constacyclic codes, while Section 4 presents the construction of quantum codes and many examples of better codes. Section 5 concludes the article.
Preliminary
Throughout the article, we use R_{k,m} := F_{p^m}[u_1, u_2, ..., u_k]/⟨u_i² − 1, u_i u_j − u_j u_i⟩, where 1 ≤ i, j ≤ k, i ≠ j, p is an odd prime, and k, m are positive integers. Thus R_{k,m} is a finite commutative ring (with unity) of characteristic p and order p^{2^k m}. Any element r ∈ R_{k,m} can be written as an F_{p^m}-linear combination of the monomials in the u_i. By [7], R_{k,m} has 2^k maximal ideals ⟨w_1, w_2, ..., w_k⟩, where w_i ∈ {1 − u_i, 1 + u_i}, 1 ≤ i ≤ k. Also, R_{k,m} is a principal ideal ring in which any ideal I = ⟨v_1, v_2, ..., v_t⟩ is principally generated by the element formed by the sum of all the v_i and their products, see ([7], Theorem 2.6). Therefore, by comparing the above maximal ideals, we conclude that R_{k,m} is a non-chain semi-local Frobenius ring. For instance, if k = 1, then there are two maximal ideals, I_1 = ⟨1 − u_1⟩ and I_2 = ⟨1 + u_1⟩, in R_{1,m}, and I_1 ≠ I_2. Clearly, R_{1,m} is a non-chain semi-local ring of order p^{2m}. On the other hand, R_{k,m} contains (p^m − 1)^{2^k} units, which is discussed in Lemma 3.2. Recall that a nonempty subset C of R_{k,m}^n is a linear code of length n over R_{k,m} if it is an R_{k,m}-submodule of R_{k,m}^n, and each element of C is called a codeword. The rank of a code C over R_{k,m} is the minimum number of elements which span C. If K is the rank, then the code C is said to be an [n, K] linear code. The Euclidean inner product of two elements a = (a_0, a_1, ..., a_{n−1}) and b = (b_0, b_1, ..., b_{n−1}) ∈ R_{k,m}^n is defined as a · b = Σ_{i=0}^{n−1} a_i b_i. Let C be a linear code of length n over R_{k,m}. Then the dual C^⊥ := {a ∈ R_{k,m}^n | a · b = 0 for all b ∈ C} is also a linear code. The code C is said to be self-orthogonal if C ⊆ C^⊥ and self-dual if C^⊥ = C. Let A = {i_1, i_2, ..., i_s} be a subset of the set S = {1, 2, ..., k}, where i_1 < i_2 < ... < i_s, and let ς ∈ F_{p^m} be such that 2^k ς ≡ 1 (mod p). We define the elements e_A^Δ, where |A| = Δ (1 ≤ Δ ≤ k), and Δ = 0 for A = φ. We write e_0^0 for e_φ^0 = ς ∏_{i=1}^{k} (1 + u_i), which can be obtained from the definition above. From the definition of e_A^Δ, it is clear that the superscript Δ counts the number of factors of the form (1 − u_{i_j}) present in e_A^Δ. Let B be a subset of S different from A. Without loss of generality, let i_j ∈ A and i_j ∉ B. Then, from the construction of e_A^Δ, the factor (1 − u_{i_j}) divides e_A^Δ, so the product of the idempotents associated with A and B vanishes. Again, by induction on k in R_{k,m}, we have Σ_{A⊆S} e_A^Δ = 1. In light of the above discussion, we conclude that {e_A | A ⊆ S} is a set of pairwise orthogonal idempotent elements in R_{k,m}. Hence, by the Chinese Remainder Theorem, we decompose the ring as R_{k,m} = ⊕_{A⊆S} e_A R_{k,m}, so that any element r ∈ R_{k,m} can be uniquely written as r = Σ_{A⊆S} e_A r_A with r_A ∈ F_{p^m}.
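The idempotent bookkeeping can be verified directly in the smallest case, k = 1, where ς satisfies 2ς ≡ 1 (mod p). The sketch below works in R_{1,1} with p = 5, representing a + bu₁ as the coefficient pair (a, b).

```python
p = 5
sig = pow(2, -1, p)   # the element sigma with 2*sigma = 1 (mod p); here sigma = 3

def mul(x, y):
    """Multiply a + b*u and c + d*u in F_p[u]/(u^2 - 1), using u^2 = 1."""
    (a, b), (c, d) = x, y
    return ((a * c + b * d) % p, (a * d + b * c) % p)

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

e_plus  = (sig, sig)         # sigma * (1 + u)
e_minus = (sig, (-sig) % p)  # sigma * (1 - u)

assert mul(e_plus, e_plus)   == e_plus      # idempotent
assert mul(e_minus, e_minus) == e_minus     # idempotent
assert mul(e_plus, e_minus)  == (0, 0)      # pairwise orthogonal
assert add(e_plus, e_minus)  == (1, 0)      # the idempotents sum to 1
print("idempotent decomposition of R_{1,1} over F_5 verified")
```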
Let GL_{2^k}(F_{p^m}) denote the set of all 2^k × 2^k invertible matrices over F_{p^m}. Now, we define a Gray map ψ that sends an element r ∈ R_{k,m} to the product of its coefficient vector with a fixed matrix from GL_{2^k}(F_{p^m}), where 1 ≤ i_j ≤ k; here we enumerate the vector (β_0, β_{i_1}, β_{i_2}, ..., β_{i_k}, β_{i_1,i_2}, β_{i_1,i_3}, ..., β_{i_{k−1},i_k}, ..., β_{i_1,i_2,...,i_k}) as r = (r_1, r_2, ..., r_{2^k}). Then the map ψ is linear and can be extended from R_{k,m}^n to F_{p^m}^{2^k n} componentwise. The Hamming weight of a codeword c = (c_0, c_1, ..., c_{n−1}) ∈ C, denoted wt_H(c), is defined as the number of non-zero components of c. The Hamming distance of the code C is defined by d_H(C) = min{d_H(c, c′) | c ≠ c′, c, c′ ∈ C}, where d_H(c, c′) is the Hamming distance between c and c′, given by d_H(c, c′) = wt_H(c − c′). Also, the Gray weight of any element r ∈ R_{k,m} is defined as wt_G(r) = wt_H(ψ(r)), and the Gray weight of r = (r_0, r_1, ..., r_{n−1}) ∈ R_{k,m}^n is wt_G(r) = Σ_{i=0}^{n−1} wt_G(r_i). Further, the Gray distance between codewords c, c′ ∈ C is defined as d_G(c, c′) = wt_G(c − c′). It is worth mentioning that in earlier works [1,11,17,19], the authors used canonical Gray maps, which take every element to the vector of its canonical components. In contrast, we define the map ψ as the multiplication of a vector by an invertible matrix of order 2^k. Such Gray maps can also be found in [13,29,30] in their respective setups. One of the main advantages of choosing such a Gray map is to improve the code parameters (particularly the dimension and the minimum distance) over those obtained by the simple canonical Gray map. For example, using ψ we construct the quantum code [[22, 2, 7]]_5 in Example 4.6, whose minimum distance is larger than that of the quantum code [[22, 2, 5]]_5 obtained in [17] under the usual canonical Gray map. Now, we present an example for k = 2 to illustrate the ring structure based on the set of pairwise orthogonal idempotent elements and the Gray map discussed above. R_{2,m} is a semi-local ring with four maximal ideals, where ς satisfies 4ς ≡ 1 (mod p); any r can then be written uniquely in terms of the four idempotents, and in this case the Gray map ψ multiplies the resulting coefficient vector by an invertible 4 × 4 matrix over F_{p^m}. Now, we review some important results on linear codes over R_{k,m}. Similar results can be found in [7,19,30,36]. Theorem 2.2. The Gray map ψ defined in equation (1) is linear and weight preserving from R_{k,m}^n (Gray weight) to F_{p^m}^{2^k n} (Hamming weight). Proof. Since the Gray map ψ is linear, ψ(C) is a linear code of length 2^k n. Also, the map ψ is distance preserving; hence ψ(C) is a [2^k n, K, d_H] linear code over the field F_{p^m}, where d_G = d_H. For a self-orthogonal code C, one checks that x · y = 0 for the Gray images, and consequently ψ(C) is a self-orthogonal linear code of length 2^k n over F_{p^m}. Let C be a linear code of length n over R_{k,m}, and for A ⊆ S let C_A denote the associated component code over F_{p^m}. Then C_A is a linear code of length n over F_{p^m} for all A ⊆ S. Also, C can be expressed as C = ⊕_{A⊆S} e_A C_A, and the generator matrix of C is assembled from the generator matrices of the codes C_A. Proof. It follows by an argument similar to ([19], Theorem 5).
Constacyclic codes over R_{k,m}
In this section, we discuss the structural properties of constacyclic codes over R_{k,m}. These codes are used to obtain quantum codes in the subsequent section. Conversely, let C_A be a δ_A-constacyclic code of length n over F_{p^m}, for A ⊆ S. Decomposing any r = (r_0, r_1, ..., r_{n−1}) ∈ C along the idempotents, one checks that its γ-constacyclic shift again lies in C. Hence, C is a γ-constacyclic code of length n over R_{k,m}. Proof. Let C = ⊕_{A⊆S} e_A^Δ C_A be a γ-constacyclic code of length n over R_{k,m}. Therefore, by Theorem 3.4, each C_A is a δ_A-constacyclic code of length n over F_{p^m}. Corollary 3.6. Every ideal of R_{k,m}[x]/⟨x^n − γ⟩ is principally generated. Proof. 1. Let C = ⊕_{A⊆S} e_A^Δ C_A be a γ-constacyclic code of length n over R_{k,m}. Then, by Theorem 3.4, C_A is a δ_A-constacyclic code of length n over F_{p^m}, for all A ⊆ S. Therefore, C_A^⊥ is a δ_A^{-1}-constacyclic code over F_{p^m}, and the claim follows.
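Before turning to the quantum construction, the γ-constacyclic structure over the base field can be checked by brute force; the toy parameters below (p = 5, n = 2, γ = 4, with x² − 4 = (x − 2)(x + 2) over F₅) are hypothetical.

```python
from itertools import product

p, n, gamma = 5, 2, 4
g = [(-2) % p, 1]   # g(x) = x - 2, a divisor of x^2 - gamma over F_5

def mul_mod(a, b):
    """Multiply coefficient lists (lowest degree first) modulo x^n - gamma over F_p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    while len(out) > n:                  # reduce x^k -> gamma * x^(k - n)
        c = out.pop()
        out[len(out) - n] = (out[len(out) - n] + gamma * c) % p
    return tuple(out + [0] * (n - len(out)))

code = {mul_mod(list(a), g) for a in product(range(p), repeat=n)}  # the ideal <g(x)>

def shift(c):
    """gamma-constacyclic shift: (c0, ..., c_{n-1}) -> (gamma*c_{n-1}, c0, ..., c_{n-2})."""
    return ((gamma * c[-1]) % p,) + c[:-1]

assert all(shift(c) in code for c in code)
print(f"|C| = {len(code)}: closed under the {gamma}-constacyclic shift")
```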
New quantum codes and comparison
Recall that a q-ary quantum code of length n and size K is a K-dimensional subspace of the q^n-dimensional Hilbert space (C^q)^{⊗n}, where q = p^m. Precisely, a quantum code is represented as [[n, k, d]]_q, where n is the length, d is the minimum distance, and K = q^k. The quantum code [[n, k, d]]_q satisfies the singleton bound k + 2d ≤ n + 2, and is known as quantum MDS (maximum-distance-separable) if it attains the bound. In this section, we construct several new q-ary quantum codes by using the structure of γ-constacyclic codes over R_{k,m}. Also, the necessary and sufficient conditions for these codes to contain their duals are obtained. We first recall the CSS construction (Lemma 4.1), which plays an important role in obtaining the quantum codes; there, f*(x) denotes the reciprocal polynomial of f(x) and λ = ±1. In light of Lemma 4.1, dual-containing linear codes are the key to obtaining quantum codes under the CSS construction. Therefore, using Lemma 4.2, we present the necessary and sufficient conditions for the constacyclic codes to contain their duals in the next result. Example 4.6. Consider a u_1-constacyclic code C of length 11 over R_{1,1} with p = 5, and let M be an invertible 2 × 2 matrix satisfying MM^t = 2I_2. Then the Gray image ψ(C) has parameters [22, 12, 7]. Also, the dual-containing condition holds for i = 0, 1; therefore, by Theorem 4.3, C^⊥ ⊆ C. Hence, by Theorem 4.5, there exists a quantum code [[22, 2, 7]]_5, which has a larger distance compared to the known code [[22, 2, 5]]_5 given in [10,17]. Remark 1. In the above example, we calculated that the Gray image ψ(C) is a [22, 12, 7] linear code over F_5. Note that ψ(C) has length 2^k n = 2^1 · 11 = 22 and dimension equal to the sum of the dimensions of the linear codes generated by the polynomials f_0(x) and f_1(x), namely 6 + 6 = 12. Its generator matrix is built from M_0 and M_1, the generator matrices of the linear codes generated by f_0(x) and f_1(x), respectively. Using the generator matrix G of the linear code ψ(C), we computed the minimum distance 7 with the Magma computational algebra system [4]. In another example, by Theorem 4.3 we have C^⊥ ⊆ C; hence, by Theorem 4.5, there exists a quantum code [[32, 14, 6]]_17, which has a larger dimension compared to the known code [[32, 12, 6]]_17 given in [13]; therefore, our code has a larger code rate than the known one. In a further example, M satisfies MM^t = 6I_4 and the Gray image ψ(C) has parameters [16, 12, 4]; compared with the corresponding code over F_17 given in [30], we conclude that our code has a larger distance. Similarly, over F_13 we have x^6 − 1 = (x + 1)(x + 3)(x + 4)(x + 9)(x + 10)(x + 12); then M satisfies MM^t = 8I_4 and the Gray image ψ(C) has parameters [24, 16, 6], which yields a quantum code [[24, 8, 6]]_13 with a larger distance than the known code [[24, 8, 4]]_13 that appeared in [13]. Table 2 gives the set of matrices over the finite field F_{p^m} that are used to compute the Gray images of the constacyclic codes in Table 1 and Tables 3-5, respectively. Also, Table 1 presents some quantum MDS codes, while Tables 3-5 include new and better quantum codes than previously known codes, all obtained from constacyclic codes over R_{1,m} = F_{p^m}[u_1]/⟨u_1² − 1⟩.
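The CSS bookkeeping behind these examples is simple arithmetic: a dual-containing [n, k, d] code over F_q yields a quantum code [[n, 2k − n, d]]_q, whose singleton defect is t = n − (2k − n) + 2 − 2d. A quick check against the parameters quoted above:

```python
def css_params(n, k, d, q):
    """Quantum parameters from a dual-containing classical [n, k, d]_q code via CSS."""
    kq = 2 * k - n
    t = n - kq + 2 - 2 * d     # singleton defect; t = 0 means quantum MDS
    return kq, t

for n, k, d, q in [(22, 12, 7, 5), (16, 12, 4, 17), (24, 16, 6, 13)]:
    kq, t = css_params(n, k, d, q)
    print(f"[{n},{k},{d}]_{q}  ->  [[{n},{kq},{d}]]_{q},  singleton defect t = {t}")
```

For instance, the [16, 12, 4] Gray image gives [[16, 8, 4]]_17 with t = 2, illustrating the near-MDS codes mentioned in Remark 2 below.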
In Table 1 and Tables 3-5, the columns are used as follows: the 1st column gives the values of p^m; the 2nd column, the lengths of the codes; the 3rd column, the values of the unit γ; the 4th column, the corresponding values of the units δ_0, δ_1; the 5th and 6th columns, the generator polynomials of the constacyclic codes; the 7th column, the matrices used to compute the parameters of the Gray images; the 8th column, the parameters of the Gray images ψ(C) of the constacyclic codes; the 9th column, the parameters [[n, k, d]]_{p^m} of the obtained quantum codes; and the 10th column, the parameters [[n′, k′, d′]]_{p^m} of the best-known quantum codes.
Computation tables. In order to compare our obtained quantum codes with the best-known codes, we include the 10th column, with references as indicated in the column. We have seen that our obtained codes, given in the 9th column, are better than the codes shown in the 10th column by virtue of larger code rates and larger minimum distances. To represent the generator polynomials f_0(x), f_1(x), we write their coefficients in decreasing order; e.g., we use 124114 to represent the polynomial x^5 + 2x^4 + 4x^3 + x^2 + x + 4. Remark 2. Recall that a code [[n, k, d]]_q satisfying n − k + 2 − 2d = t is known as a quantum code with singleton defect t. Obviously t ≥ 0, and when t = 0 it is a quantum MDS code. The value of t indicates how close a code is to being MDS: a smaller t means the code is closer to MDS. Hence, the main objective should be to obtain codes with t as close to zero as possible. Several of the quantum codes in the above tables have singleton defect t = 2.
Conclusion
In this article, we studied constacyclic codes over the family of commutative non-chain rings R_{k,m} to obtain new non-binary quantum codes over finite fields. In the above tables, we have determined many new quantum codes which are superior to the best-known codes in the literature. Therefore, we believe that our work will motivate researchers to uncover many more new quantum codes that can be obtained from this class of constacyclic codes.
A Systemic Approach for Sustainability Implementation Planning at the Local Level by SDG Target Prioritization: The Case of Quebec City
The success of the 2030 Agenda hinges on mobilization at the local level. The localization of sustainable development goals (SDGs) and their targets involves adapting them to local contexts. This case study of Quebec City, Canada, illustrates how the use of a systemic sustainability analysis tool can help integrate SDGs in the building of a sustainable development strategy at the local level. Our approach focuses on the use of an SDG target prioritization grid (SDGT-PG) and begins with the mobilization and training of a group of officers representing various city services. We first used an original text-mining framework to evaluate SDG integration within existing strategic documents published by the city. The result provides a portrait of existing contributions to SDG targets and identifies potential synergies and trade-offs between services and existing policies. A citywide prioritization workshop was held to assess the relative importance of SDG targets for the city. Priorities were then identified by combining the importance of the targets as viewed by stakeholders, the current level of achievement of SDG targets as determined by the analysis of existing documents, and the jurisdiction and responsibilities given to Quebec City in regard to federal and provincial legislation. We identified the main focus areas and related SDG targets. Furthermore, we observed whether actions needed to be consolidated or new actions needed to be implemented. The identification of synergies and trade-offs within the city service actions provides information on the links to be made between the different municipal services and calls for partnerships with other organizations. The use of the SDGT-PG allows the vertical and horizontal integration of the SDG targets and demonstrates how participation and inclusion facilitate stakeholders' appropriation of the applied sustainable development strategy.
Introduction
In 2015, members of the United Nations unanimously adopted the 2030 Agenda for Sustainable Development [1]. The 17 Sustainable Development Goals (SDGs) and 169 targets represent a global framework to guide the implementation of sustainable development (SD) by 2030 [2,3]. While the SDGs and targets were first designed for the global level [4], the 2030 Agenda is a universal program that applies to all governments and actors, regardless of their level of intervention [1,5]. Because cities represent the level of government closest to the population [6], they have the capacity to intervene quickly. Several frameworks have been proposed to prioritize SDGs and their targets. One approach rests on three criteria: 1. Level of urgency: identifying historical trends and comparing current baseline values against global benchmarks; 2. Systemic impact: identifying interlinkages between SDG targets, evaluated against a semi-quantitative cross-impact matrix assessment and network analysis; 3. Policy gap: assessing how SDGs align with existing strategies. The Stakeholder Forum [20] submitted a method of analysis addressed to developed countries to assist them in identifying the goals and targets representing the biggest transformational challenges. They proposed three criteria: 1. Applicability: evaluating the relevance of the goal/target; 2. Implementability: assessing the reality of attaining the goal/target within the time frame; 3. Transformationalism: determining whether the achievement of the goal/target requires new and additional policies beyond those currently in place.
Finally, the Sustainable Development Solutions Network (SDSN) [6] proposed broad guidelines to define local SDG targets: 1. Targets should be relevant and achievable; 2. Targets should correspond to local government mandates; 3. Priorities should be established on the basis of development gaps. This paper presents the case study of Quebec City (Quebec, Canada) using an original and adapted systemic sustainability analysis tool. Quebec City is located in the province of Quebec, Canada. Canada is a federal state where responsibilities are shared between federal and provincial jurisdictions. Local governments, such as cities, are a provincial responsibility; however, as issues related to sustainable development touch multiple jurisdictions, our analysis includes the federal, provincial, and local (city) levels. This approach aims to bolster the implementation of the SDGs at the local level and integrates the key elements of contextualization, adaptation, systemic thinking, subsidiarity, and policy coherence. The method focuses mainly on the use of the SDG target prioritization grid (SDGT-PG), a participatory prioritization tool for SDG targets applicable at local, national, and regional scales. The SDGT-PG methodology is inspired by the sustainable development analytical grid (SDAG), used at local and national levels since 1988 [33]. The SDAG methodology, which emphasizes participatory processes and scientific robustness, was developed in a partnership between academics (Université du Québec à Chicoutimi, Canada), an international organization (Organisation internationale de la Francophonie), and an international consulting firm (GlobalShift Institute Ltd., Quebec City, QC, Canada). This approach was tested in both developed and developing countries at national and local levels. Burkina Faso, Benin, Niger, and Togo refer to the use of the SDGT-PG as a prioritization tool in their Voluntary National Reviews presented at the United Nations High-level Political Forum on Sustainable Development. In this study, we test our central hypothesis that the use of the SDGT-PG allows the vertical and horizontal integration of the SDG targets. We also demonstrate how participation and inclusion permit stakeholder appropriation.
Materials and Methods
The applied approach is an iterative process (Figure 1) inspired by two analytical tools: the sustainable development analytical grid (SDAG) [33] and the rapid integrated assessment (RIA) [34]. Our approach also makes use of best-known practices and guidelines [6,20,31]. The successive steps of this action-research process, carried out in co-construction with Quebec City officials, aimed to generate data and information related to the three main SDGT-PG criteria being evaluated: 1. Performance: relying on SDG target indicators, what is the current level of achievement of the targets? 2. Importance: given the specific context of the city, what is the significance level of the targets? 3. Governance: knowing the constitutional division of powers, what level of governance (from national to local) holds the power and responsibilities associated with the targets?
Mobilization and Capacity Building
To eliminate silos and apply a systemic approach, we took the initial step of forming a group of leaders. We mobilized 27 leaders from 19 of Quebec City's administrative units. To be selected, a leader needed to represent one of the main administrative units and embody the concepts of sustainability through their values, interests, and personal and/or professional activities.
The selected leaders were readily available and also had the authorization of their respective managers. The 27 leaders included 17 advisers, 6 managers, 2 engineers, 1 analyst, and 1 police officer; they comprised 15 men and 12 women. The city manager's office coordinated the project. The project team, established by the office, worked in partnership with our research team to organize and structure the complete process. A series of workshops was co-managed by the partners. The main objective of these workshops was to raise awareness about the SDGs [6]. The specific objectives were to: 1. Understand the concepts of sustainable development; 2. Become familiar with the 2030 Agenda and the SDGs; 3. Understand systemic sustainability analysis; 4. Share practices between city services; 5. Identify potential synergies and trade-offs; 6. Prepare the prioritization activity; 7. Produce and validate proposals for the sustainable development strategy.
Diagnosis-Performance
The internalization of the SDGs requires identifying already-existing actions that can be linked to the SDGs [6,14]. To carry out this diagnosis, we analyzed 89 strategic documents produced by Quebec City (a list of the analyzed documents is available in the Supplementary Materials). We aimed to: 1. Identify already-implemented SD initiatives; 2. Align the identified SD initiatives with SDG targets; 3. Assess the potential achievement of the SDG targets (documenting the performance criterion of the SDGT-PG). We identified the initiatives using WordStat, content-analysis and text-mining software within ProSuite (Provalis Research, Montreal, QC, Canada), a collection of integrated text-analysis tools [35]. For our analyses, we developed a specific dictionary linked to the content of the 2030 Agenda. Our dictionary included 1602 expressions found in the labels of the SDGs, SDG targets, and SDG indicators (the full dictionary is available in the Supplementary Materials). First, we prepared each document for analysis by removing figures, hyphens, brackets, and braces. We imported the strategic documents into QDA Miner, the qualitative data analysis component of ProSuite, and conducted content analysis using WordStat for each document separately. Each SDG target represented a category in the dictionary. To transform textual data into keywords or content categories, we used a lemmatization substitution process. Lemmatization is a "process by which various forms of words are reduced to a more limited number of canonical forms like conversion of plurals to singulars and past tense verbs to present tense verbs" [36]. For each occurrence identified by the software, an expert in charge of the processing validated the result to retain only relevant occurrences. These retained occurrences were then classified within a matrix, where they were associated with the corresponding targets. The matrix is based on the rapid integrated assessment developed by the United Nations Development Programme [34] (Table 1). We processed the data from the matrix to obtain a portrait of the coverage of SDG targets by the existing strategic documents and to group documents influencing the same targets. By identifying occurrences between the strategic documents and the SDG targets, we could assess, in light of the identified actions, the degree to which SDG targets had been achieved (performance). We assessed performance on a four-level scale: 1. The target was not at all achieved; 2. The target was partially achieved: there is much room for improvement, although some results are visible; 3. The target is in the process of being achieved: improvements remain possible; 4. The target has been achieved.
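A minimal analogue of this dictionary-based diagnosis is sketched below. WordStat and QDA Miner are commercial tools, so plain regular expressions stand in for them; the mini-dictionary, documents, and the scoring threshold are invented illustrations of the expert judgment described above.

```python
import re
from collections import defaultdict

# Hypothetical mini-dictionary (the real one holds 1602 expressions): SDG target -> expressions.
dictionary = {
    "11.2": ["public transport", "road safety"],
    "13.2": ["climate change", "greenhouse gas"],
}

documents = {
    "mobility_plan": "The plan expands public transport and improves road safety downtown.",
    "climate_plan":  "Greenhouse gas reduction targets address climate change by 2030.",
    "heritage_plan": "The policy protects built heritage in the historic district.",
}

# Document x target occurrence matrix (cf. the RIA-style matrix of Table 1).
matrix = defaultdict(dict)
for doc, text in documents.items():
    for target, terms in dictionary.items():
        matrix[doc][target] = sum(len(re.findall(t, text.lower())) for t in terms)

# Performance per target: 1 when no document mentions it; otherwise 2 or 3,
# here decided by a simple document-count threshold standing in for expert review.
for target in dictionary:
    docs_hit = sum(1 for doc in documents if matrix[doc][target] > 0)
    performance = 1 if docs_hit == 0 else (3 if docs_hit >= 2 else 2)
    print(target, {d: matrix[d][target] for d in documents}, "-> performance", performance)
```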
We automatically assigned a performance score of 1 to targets having no occurrences. We awarded a performance score of 2 or 3 when occurrences existed between the city's strategic documents and a target; our assessment varied according to the number of strategic documents associated with a target and the quality of the actions mentioned in those documents. We never assigned the maximum score of 4, as we found no indicator, with verified metrics, to which we could attribute this performance.
Identification of Synergies and Trade-Offs
To apply systems thinking, we organized a workshop with the aim of identifying the potential interactions between the activities carried out by the various city services. The research team identified themes for the 107 operational targets on the basis of target content and their indicators. SDG targets fall into two categories: "operational" and "means of implementation" (MoI). Operational targets relate to outcomes to be achieved, whereas MoI targets refer to conditions that help attain targets [37]. MoI includes the mobilization of financial resources, technology development and transfer, capacity building, inclusive and equitable trade, regional integration, and the creation of a national environment conducive to the implementation of the sustainable development agenda [1]. MoI targets apply to national competences, which deviate from those at the local level. We therefore discarded the MoI targets and SDG 17 [32,38]. As an initial step, the 27 leaders identified areas of activity undertaken both inside and outside of their administrative units and associated these areas with the themes of the SDG targets. Then, they identified potential interactions with other SDG targets. All interactions are directional; hence, for each interaction, there is a source target and an impacted target. In the case of bidirectional interactions, we used two directional interactions, reversing source and impact. City leaders characterized interactions as synergies or trade-offs. A member of the research team validated each of the interactions and completed the exercise by adding interactions and adjusting some interactions erroneously associated with targets. Once the validation was completed, we analyzed the synergies and trade-offs by SDG and target. We used a cross-impact matrix [28,31,32,39] in which the weight given to an interaction corresponds to the number of activities associated with that interaction.
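The weighted cross-impact matrix can be assembled directly from the interaction records; the handful of records below are invented placeholders for the interactions collected in the workshop.

```python
import numpy as np

targets = ["3.4", "9.1", "10.2", "11.3", "13.2"]   # hypothetical subset of the 107 targets
idx = {t: i for i, t in enumerate(targets)}

# (source target, impacted target, kind) -- placeholder records for validated interactions.
interactions = [
    ("11.3", "3.4", "synergy"), ("11.3", "3.4", "synergy"), ("11.3", "10.2", "synergy"),
    ("9.1", "10.2", "synergy"), ("9.1", "13.2", "trade-off"), ("13.2", "3.4", "synergy"),
]

# One matrix per interaction type; entry (i, j) = number of activities where i affects j.
S = np.zeros((len(targets), len(targets)), dtype=int)   # synergies
T = np.zeros((len(targets), len(targets)), dtype=int)   # trade-offs
for src, dst, kind in interactions:
    (S if kind == "synergy" else T)[idx[src], idx[dst]] += 1

print("synergy matrix:\n", S)
print("trade-off matrix:\n", T)
```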
Importance and Prioritization
Localizing the SDGs requires adapting the global reference framework to ensure that it is relevant to the local context [3,6]. This contextualization of the 2030 Agenda will promote ownership and mobilization of stakeholders [14,40]. To increase the stakeholders' understanding of the SDGs and their targets, we adapted the wording of the targets without changing the original meaning. The target labels were adjusted to change references from a national scale to a local scale. For example, Target 1.2, "By 2030, reduce at least by half the proportion of men, women and children of all ages living in poverty in all its dimensions according to national definitions," becomes "By 2030, reduce at least by half the proportion of men, women and children of all ages living in poverty in all its dimensions according to the city definitions."
Targets adapted to the local context were prioritized during a workshop that brought together 182 city employees. The employees occupied positions at various levels throughout the city services. We sampled city employees to ensure the representativeness of employment sectors, age group, gender, and workplace location. The selected employees did not need any particular skills or knowledge to participate in the workshop. Prioritization is an essential stage in identifying the relevant actions to be implemented at the local level [4]. Twenty-four tables of seven to eight employees, separated across two three-hour sessions, discussed the level of importance of SDG targets in the context of Quebec City. The table composition was predetermined to maximize diversity. Each table weighted the targets of three to four SDGs, for a total of 21 to 22 targets per table. An animator facilitated the discussion, while a second person recorded notes on a prepared canvas. Each participant was provided access to a set of four cards, representing the four levels of importance, to help them judge the importance of a target:
N/A: Not applicable;
1: Unimportant: Not important and not a priority;
2: Important: Priority in the medium to long term;
3: Essential: Priority in the short term.
For each target, (1) the animator announced and explained the target; (2) the employees expressed their views on the level of importance to be given to the target; (3) the animator initiated a dialogue regarding the employees' justifications for this importance; and (4) the employees then expressed their final scoring for the importance of the target after these discussions. We recorded both employee assessments of importance (before and after discussions), and we noted the employees' justifications. To define the final level of importance of the targets to be entered in the SDGT-PG, we averaged (rounded to the nearest unit) the importance scores of the final results.
Using the SDGT-PG, we produced a priority index for each target: the more participants that deemed a target to be significant and the poorer the target's performance, the greater the priority given to the target in question. The priority levels correspond to the grid shown in Figure 2. In this prioritization index grid, an urgent target requires immediate intervention; a priority target should be addressed within a three-year horizon; a medium-term target should be addressed within seven years; a long-term target should be addressed within a 10-to-15-year period; and a target to be consolidated requires interventions that make it possible to maintain the current level of performance. The other priority levels do not require specific actions. A sketch of this index appears below.
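A Python sketch of the priority index follows. Because the text does not reproduce every cell of the Figure 2 grid, the mapping below is a hypothetical illustration consistent with the stated logic: higher importance and poorer performance yield higher priority, and well-performing targets are consolidated.

"""Sketch of the SDGT-PG priority index: importance crossed with performance.

The exact cell assignments of the grid are not reproduced in the text, so
this mapping is a hypothetical illustration of the stated logic only.
"""

# (importance, performance) -> priority level; importance: 1 unimportant,
# 2 important, 3 essential; performance: 1 (not achieved) .. 4 (achieved).
PRIORITY_GRID = {
    (3, 1): "urgent",      (3, 2): "priority",    (3, 3): "medium-term",
    (2, 1): "priority",    (2, 2): "medium-term", (2, 3): "long-term",
    (1, 1): "long-term",   (1, 2): "no action",   (1, 3): "no action",
    (3, 4): "consolidate", (2, 4): "consolidate", (1, 4): "no action",
}

def priority(importance: int, performance: int) -> str:
    # N/A importance or unlisted combinations fall through to "no action".
    return PRIORITY_GRID.get((importance, performance), "no action")

def final_importance(scores: list[int]) -> int:
    # Rounded mean of the final (post-discussion) table scores.
    return round(sum(scores) / len(scores))

# Hypothetical target: essential on average, with little visible performance.
print(priority(final_importance([3, 3, 2, 3]), performance=1))  # -> "urgent"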
Governance
This stage aims to determine, for each target, the level of governance, from local to national, legally responsible for implementing the actions required to achieve the target. In our case, the national level includes both the provincial and federal governments. The project team evaluated each target with reference to legislation at the Quebec (provincial) and Canada (federal) levels. To identify the governance level, we classified governance on a scale from 1 to 4:
1. Exclusive responsibility of the local level. The local level has complete authority to act on this target.
2. Responsibility shared between the local and national levels. The local level has a certain authority to act on this target; however, these competencies are also shared with the national level.
3. National-level responsibility supported by the local level. The national level has the main responsibilities necessary to act on this target; however, it can delegate the implementation of an action to the local level. The local level has a certain authority for ensuring action on the ground, but it does not hold decision-making power.
4. Exclusive national-level responsibility. The national level has the full authority to act on this target. The local level does not have the authority to intervene, although it can sometimes influence priorities through representations at the national level.

Localization
The final information produced in the SDGT-PG considered the role of the different levels of governance in implementing initiatives; this governance level can affect whether a target can be achieved. Combining the priority level (Figure 2) and the governance assessment, we could determine what should be considered by local and national planners and what targets can be achieved jointly, in some form of multilevel governance (Table 2). Table 2 lists the initiatives to be undertaken according to the level of priority and our governance assessment; these are proposals aimed at the local (Quebec City) and national (Quebec, Canada) levels. A sketch of this combination follows.
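A sketch of the Table 2 combination is given below. Only two of Table 2's entries are quoted later in the text ("Search for opportunities" and "Advocacy at the appropriate governance level", applied to urgent or priority targets under national governance); the remaining proposals in this sketch are hypothetical placeholders.

"""Sketch of combining the priority level with the governance assessment.

Governance: 1 exclusively local .. 4 exclusively national. Entries other
than the quoted national/high-priority proposal are hypothetical.
"""

def proposed_initiative(priority_level: str, governance: int) -> str:
    high_priority = priority_level in ("urgent", "priority")
    if high_priority and governance in (3, 4):
        # Quoted from the text for nationally governed, high-priority targets.
        return "search for opportunities / advocacy at the appropriate governance level"
    if high_priority and governance in (1, 2):
        return "plan local action (hypothetical placeholder)"
    if priority_level == "consolidate":
        return "maintain the current level of performance"
    return "no specific action required"

print(proposed_initiative("urgent", governance=4))
print(proposed_initiative("priority", governance=1))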
Results
We used the potential coverage of SDG targets to assess performances in the SDGT-PG. Except for SDGs 2 and 14, at least 60% of the targets showed some performance in terms of potentially achieving the target (Figure 4). No target could be labeled as "achieved" because we found no indicators confirming this level of performance. For SDGs 6 (Clean water and sanitation) and 11 (Sustainable cities and communities), more than 80% of the targets had a performance ranking of 3, "in the process of being achieved" (Figure 4).

Synergies and Trade-Offs
The city officers and our research team identified 687 potential interactions, including 638 synergies and 49 trade-offs. These interactions involve 86 targets and 16 SDGs. Table 4 shows the targets that most influenced other targets on the basis of the number of times they are the source of an interaction. The table also reveals the targets most influenced by other targets according to the number of times they are impacted by other targets. All these targets exhibited positive and negative interactions. Among all the analyzed interactions, the most influencing and most influenced targets were all strongly positive (Table 4). Influencing targets came from various SDGs; the exception was SDG 9, for which two targets were found among the five most influencing targets (Table 4). In the context of applying the SDGs at the municipal level, we expected and noted Target 11.3 (Sustainable urbanization; participatory and integrated planning and management) to be one of the most influential targets, ranking third (Table 4). The most impacted targets came from six SDGs. The most impacted target is 3.4 (Non-communicable diseases, mental health, and well-being). We found a single target (10.2, Empowerment; social, economic, and political inclusion) among both the most influencing and the most influenced targets (Table 4). The targets included in Table 4 have the highest positive results. In terms of negative impacts, targets 7.3, 9.4, 9.5, and 15.1 most negatively affected other targets (sum: -4). The most often negatively impacted targets were targets 8.1 (sum: -7) and 10.2 (sum: -5). The highest positive interaction was between targets 11.3 and 10.2 (sum: 5). We observed the highest negative results (sum: -2) for the interactions between targets 9.4 and 3.4, between 9.5 and 10.3, and between 15.1 and 11.1.

Importance, Prioritization, and Governance
The participants at the prioritization workshop assessed the level of importance of the 107 operational targets (Figure 6). They found all targets to be relevant. The participants considered most targets as important (56.1%), with 32 targets deemed essential (29.9%) and 15 as unimportant (14%). The SDGs having the highest percentage of essential targets were SDG 6 (66.7%) and SDGs 5, 12, and 16 (50%) (Figure 5). SDGs 3 and 14 had the highest percentage of unimportant targets (44.4% and 57.1%, respectively). We obtained a prioritization index by crossing performance with importance. Eight targets, among eight different SDGs, were prioritized as urgent (Figure 7), and a further 27 targets were deemed a priority. We noted five priority targets in SDG 15, four in SDG 12, three each in SDGs 4, 14, and 16, two each in SDGs 2, 5, and 8, and one each in SDGs 1, 10, and 13. SDGs 3, 7, 9, and 11 did not have any urgent or priority targets (Figure 6). Thirty-four targets were prioritized in the medium term and fifteen in the long term. Additionally, Quebec City needed to consolidate 23 targets. SDG 11, with four targets, and SDGs 8 and 9, each with three targets, showed the most targets to be consolidated.
The governance assessment showed that the project team members considered Quebec City to have exclusive power over six targets (5.6%) (Figure 8). These targets are found in SDGs 6, 11, and 12 (each having two targets). On the other hand, they assessed 29 targets (27.1%) as being exclusively of national (provincial or federal) jurisdiction and responsibility. Among the SDG targets most associated with the national level, we noted five of the seven targets of SDG 14 (71%), two of the three targets of SDG 7 (67%), and four of the seven targets of SDGs 4 and 10 (57%). Twenty-six targets (24.3%) represented a shared responsibility, and 46 targets (43%) were primarily of national competence, although supported at the local level. Overall, the national level was better positioned to intervene on 75 targets (70%); for instance, the national level holds most of the authority to intervene in regard to all targets of SDGs 4 and 7 (Figure 7).

Discussion
The success of implementing the 2030 Agenda requires the mobilization of all actors at all levels. Our SDG localization approach focuses on the local level and includes an original systemic tool to identify priorities in a context of strategic planning. We used parameters found in the literature [6,20,31]; they were evaluated separately but integrated to define the priorities. In our case study, assessing the current sustainability context for Quebec City is necessary to clarify the starting point and to develop a sustainable development strategy based on achievements [14]. The development and application of our dictionary of expressions linked to the SDG targets identified the targets considered (or not) within the city's strategic documents. A proper analysis of performance requires contextualizing performance in terms of governance level.
Local governments, depending on the effective distribution of powers in a given country, have varying levers on the SDGs. Quebec City is located in the province of Quebec and also falls within Canadian national governance. Under the Canadian constitution, the responsibility for municipalities resides with the provinces. In the province of Quebec, cities have the legislative powers of development and urban planning, housing, roads, community and cultural development, recreation, urban public transport, and wastewater treatment [41]. We strongly recommend that an expert assessment of the governance parameters, in accordance with the national/provincial legislative texts, be undertaken when applying an SDGT-PG. Examining the distribution of powers among government levels allows an analysis of performance crossed with an evaluation of governance (Table 5). In this study, we observed that no target of exclusively local responsibility was achieved. Among the exclusively national targets (at the provincial and/or federal level), 69% of the targets had not been achieved. In Canada, navigation, coasts, and inland fisheries are a federal responsibility. Five of the seven SDG 14 targets related to oceans and marine resources are exclusively a national responsibility and have not yet been achieved; the two other targets are considered a shared responsibility. In contrast, Quebec City was on track to achieve 83% of the targets under its responsibility. These are targets of SDGs 6 (Clean water and sanitation) and 11 (Sustainable cities and communities), which correspond to the fields of competence given to municipalities in provincial legislation. The other targets of exclusive local responsibility were partially achieved. The 2030 Agenda states that the SDGs and their targets are global and that [national] governments should define their priorities according to their particular contexts [1]. This contextualization applies to all levels of governance, from local to national. The successful implementation of the SDGs requires multilevel governance implemented with communication channels that promote vertical integration [7,42]. Although cities have extremely varied contexts, they encounter common obstacles, such as issues of power [25], and can seize the specific opportunities addressed by our approach. Moreover, localization allows local authorities to participate more effectively in achieving national SDGs.

Obstacles, Limitations, and Challenges of SDG Localization
The scope of the 2030 Agenda limits its localization. The formulation of targets is addressed at the national and global levels, and their text-based interpretation can have a demobilizing effect on local-level actors. Local actors may see this agenda as being focused on global issues and, thus, they may ultimately reject the agenda outright [43]. Implementing the SDGs at the local level requires localizing the targets by adjusting the labels without distorting their meaning. In our case study, the Quebec City project team modified the wording of targets whose scope was explicitly national to give the targets a local-scale framing. This adaptation increases the tangibility of targets for local actors, who must assess the importance of the targets and ensure that the targets are implemented at the appropriate (local) level. One could assume that targets explicitly mentioning a national scope would be assessed as less important, or as not applicable, by local actors.
For the MoI targets, however, adapting these targets to local contexts is difficult, as these targets often involve international partnerships for implementing the 2030 Agenda. From the governance parameter of the SDGT-PG, responsibility for the MoI targets occurs exclusively at the national level. For the sake of adaptation and contextualization, and to avoid giving local actors the impression that the 2030 Agenda is addressed only at the national level, we chose to exclude the MoI targets from our prioritization approach.
Localizing the SDGs involves implementing the SDGs in the logic of vertical, horizontal, and territorial integration. A siloed approach predominates, and moving toward an integrated approach is not straightforward. Forming a group of leaders from different municipal services promoted horizontal integration. The leaders were not used to working in a multiservice group; their collective work and dialogue broke down existing silos. The multiservice workshops greatly helped identify potential synergies and trade-offs. This horizontal integration occurred at several stages of our approach. During the diagnosis stage, our analysis of strategic documents, using the dictionary of expressions related to the SDG targets, identified the initial potential synergies. For example, we identified that the following targets touched all services: 4.4 (Skills for employment and entrepreneurship), 8.2 (Economic productivity), 9.1 (Sustainable infrastructure, economic development, well-being), 9.5 (Research, technological capabilities, innovation), 16.6 (Efficient, accountable, and transparent institutions), and 16.7 (Participation in decision-making). These shared targets do not systematically imply synergistic actions, but the diagnosis identified those actions carried out by several municipal services sharing common objectives. Our dictionary proved to be a highly relevant and effective tool for undertaking this diagnosis. The identification of 687 potential interactions formalized the links between city services and contributed to horizontal integration. The in-depth analysis and articulation of interactions illustrated to the members of the leader group the integrated nature of the actions of all services. We observed that 92.8% of the interactions were positive in nature. This result closely matches the systemic analysis applied to the case of Sweden by Weitz et al. [32], where 96% of interactions were synergies. Referring to the classification of SDG targets into the five pillars of the 2030 Agenda (population, planet, prosperity, peace, and partnership) in Tremblay et al. [15], we observed that 83% of positive interactions (sum of +2 and greater) were linked to the same pillars, whereas half of the negative interactions related to different pillars. This illustrates the complexity of SDG targets and their interactions, and how the different pillars are integrated and indivisible.
The limits and challenges of vertical and territorial integration are multiple and complex. These types of integration refer to the principle of subsidiarity, "the search for the 'optimal scale of government'" [29], and to the concept of multilevel governance, "a system of continuous negotiation among nested governments at several territorial tiers" [44]. These limits and challenges are universal but vary depending on the context. Thus, there is not a single solution, but it is possible to provide adaptable reflections from our approach.
The actors of governance, at different scales, have variable levels of control and power over their context. This control varies from none (e.g., the distribution of natural resources across the territory) to full (e.g., the adoption of policies). In addition, the actors interact according to different paradigms, at their respective levels, in a complex system where the dominant paradigm of economic neoliberalism is omnipresent and sometimes operates beneath the surface [45-48]. It is well known that states tend to protect their powers despite the recognized importance of applying the principle of subsidiarity for implementing sustainable development [11]. The application of the principle of subsidiarity is linked directly to power issues, a very sensitive subject [25]. Local governments, to respond effectively to their sustainability challenges, must have the corresponding powers. From this perspective, Jones [49] writes, "Where national and state/provincial governments fail to act, city governments are severely limited in the implementation of [sustainability] policy." To address these issues, governments must collaborate. Using the governance assessment in the SDGT-PG, we guided local governments on the types of actions available to them on the basis of their specific governance context and target priority, while also proposing actions at the national level. The terms "Search for opportunities" and "Advocacy at the appropriate governance level" apply to targets having a high priority level (urgent or priority) and whose governance is at the national level. This intersection between three parameters of the SDGT-PG helps guide the advocacy that local governments must undertake at higher levels. This approach does not guarantee success or an openness to dialogue; however, it provides guidelines for a structured argument based on an inclusive approach. The aim is to reduce what the OECD identified as the "policy gap" [50]. To achieve this, we must establish mechanisms for collaboration between the levels of governance to make the implementation of public policies relevant and effective.
Localizing the SDGs requires an integrated commitment of human and financial resources [3]. The SDSN [6] observed that, despite the importance of localizing the SDGs, questions regarding capacities and mobilizing resources remain unanswered. Thus, the major constraints that cities face relate primarily to their limited political and fiscal powers, their lack of access to finance, low levels of institutional capacity, a lack of multilevel government cooperation and integration, and the difficulty of establishing multi-stakeholder partnerships [6]. Becoming aware of these constraints is, however, a necessary step. Cities can act directly on a few aspects of sustainability, but they need the collaboration and openness of higher levels of governance to tackle the full ensemble of issues. Open and empowered multilevel governance is essential for localizing the SDGs horizontally, vertically, and territorially within an integrated approach [51].

Opportunities
The 2030 Agenda is mobilizing an enormous quantity of resources across the globe, and actors at all levels are developing appropriate tools and approaches. The number of scientific articles having "2030 Agenda" as a keyword has increased rapidly, from 44 in 2015 to 246 in 2017 and 632 in 2020 (Scopus, search results using "2030 Agenda" as a keyword, 5 November 2020). The SDGs and their targets provide a relevant framework at all scales and are internationally recognized.
The principle of integration is increasingly applied, and organizations (national, local, private) increasingly choose the SDG framework for the sake of multilevel consistency. This willingness to join the SDG movement must be supported politically. In Quebec City, the mayor undertook the process, leading to a strategy and an action plan for sustainable development. This engagement at the highest levels of local government is essential for committing all the resources necessary to ensure the success of the process [49]. Thus, the mayor's office established a competent project team that mobilized stakeholders, coordinated and analyzed activities, and developed the necessary strategy. Furthermore, a team of leaders, mobilized within all of the city's administrative units (thanks to the support of the unit directors, themselves mobilized by the mayor's office), was trained in sustainable development issues. The team members communicated the progression of the approach, raised awareness among their colleagues, and sought their colleagues' views at various stages of the process [49,52,53]. This multiservice mobilization was achieved through the mayor's commitment, via a top-down approach, to provide the means for achieving the results. The presence of a city councilor of the executive committee at every stage of the process testified to this political will. Mobilization at the highest level facilitates awareness of the efforts and actions to be implemented to vertically integrate the process. In our case study, Quebec City does not hold all the powers necessary to respond to the priorities that emerged from the prioritization exercise. City officers will be obliged to develop partnerships with higher governance bodies. As the mayor is the process holder in this case study, he will feel all the more invested and convinced of the need to carry out this task and to use the right communication channels to develop multilevel collaboration. However, it is important to reiterate that the mobilization of the mayor alone cannot guarantee a successful implementation of sustainable development; it is also essential for all stakeholders to rally and face the challenges related to sustainability.
Cities must build on existing structures and actions already underway that fit within the sustainability framework to ensure optimal localization of the SDGs [14]. Our diagnosis provides a relatively rapid portrait of the situation, an exercise that can often be tedious. In our case study, we included the diagnosis in the stage of evaluating the performance parameter of the SDGT-PG. The use of the dictionary made it possible to undertake rigorous work with a minimal mobilization of human resources. It provided a solid starting point on which to build the remainder of the process and made it possible to identify a common starting point for all actors involved.
Crises can constitute opportunities to introduce a sustainability approach. Some previous crises (climate, financial, energy, etc.) have been drivers of change. For example, the 2008 financial crisis motivated some countries to embark on a transition movement [54,55]. The COVID-19 pandemic may also turn out to be an opportunity to provide arguments that favor the implementation of a sustainable development strategy. Quebec City, like most other local and national governments, must implement a post-containment/COVID-19 recovery strategy.
This recovery strategy, linked to a sustainable development strategy, could offer an opportunity to facilitate ownership of the shift and of the actions proposed by the city. In terms of sustainability, however, not all crises become opportunities. As stated in the 2030 Agenda, "There can be no sustainable development without peace and no peace without sustainable development" [1]. Thus, crises such as armed conflicts remain major obstacles to sustainability.
Local governance is the level of government closest to citizens and their issues. This proximity allows, in theory, for the quick implementation of measures that respond effectively to identified problems. The local level involves fewer actors and fewer divergent issues than the national level. This difference could explain why departing from "business as usual" can be easier at the local level [56]. For example, in the context of local actions, actors are less influenced by the dominant paradigm of neoliberalism, which allows the emergence of approaches considered more radical when compared with "business as usual" actions [48,56,57]. Cities should support grassroots initiatives [43] and socio-ecological transition projects [58] undertaken by local community groups in their territories. These partnerships are much easier to support for local governments that are in direct contact with these groups. Leadership at the top of the city hierarchy (top-down) and support of bottom-up initiatives are not contradictory; they mutually reinforce each other [25]. In this sense, Quebec City has opened a dialogue with local partners from various civil society organizations with the objective of identifying challenges, issues, and opportunities, as well as proposals for action.
The identified limitations and opportunities routinely brought us back to the need for multilevel governance to ensure the implementation of the 2030 Agenda [7,11]. The national level of governance, although holding most of the powers (Figure 7), must be aware that it is not always the most appropriate level with regard to local actors and issues [29]. The motivation of local governments can be hampered by a lack of collaboration from higher governance bodies [43]. The evaluation of the governance parameters shows that higher authorities must collaborate with local governments. As Meuleman and Niestroy state [25], the issues and contexts differ at all levels, and a lack of integration and collaboration can lead to failure. A multilevel governance approach that relies on collaboration and cooperation will help promote vertical integration and policy coherence [51]. Our analyses identified targets representing opportunities to build such collaborations, allowing local and national authorities to optimize their contributions toward achieving the goals of the 2030 Agenda.

Conclusions
Our approach aligns with the best practices for localizing the SDGs and includes the concepts of contextualization, localization, systems approach, and integration. Although we apply this approach to the local level, it is flexible and adjustable enough to be applied at all levels of governance. Our approach provides a procedure that empowers sustainability actors in line with vertical and horizontal integration through capacity building, awareness, and direct participation; to our knowledge, such a procedure has not been provided in previous studies focused on the local level. Each application of our approach should be contextualized, as the opportunities and limitations differ from place to place.
In our case, we were limited by a lack of data: the indicators of the SDG targets had yet to be assessed. Therefore, it was impossible to accurately assess performance. We stated that these were potential performances, and we remained conservative in our assessments by not describing any targets as fully achieved. The systemic tools and approach presented in our study will help planners develop strategies and action plans for implementing the 2030 Agenda. Although our approach is complete, it can only be implemented with mobilization at the highest level and with the involvement of stakeholders who represent the complexity of the system in which the agenda is being implemented. SDG localization faces other challenges, in particular the adaptation of SDG tools and approaches to the private sector, where each particular sector has its own challenges, contexts, opportunities, and specific scopes of organizational governance. Future research could help define, as in the present study, good practices in localizing the SDGs, as well as methodologies for adapting the 2030 Agenda to the private sector.
Supplementary Materials: The following are available online at www.mdpi.com/2071-1050/13/5/2520/s1, Table S1: List of analyzed documents, Table S2: Dictionary linked to the content of the 2030 Agenda, Table S3: Matrix of links between SDG targets and the analyzed Quebec City strategic documents inspired by the rapid integrated assessment (RIA), Table S4: Cross-impact matrix, Table S5
TWENTY-YEAR TRENDS IN ANTIMICROBIAL RESISTANCES AMONG PSEUDOMONAS AERUGINOSA CLINICAL ISOLATES

We retrospectively analyzed the antimicrobial data of P. aeruginosa strains isolated from hospitalized subjects and outpatients over a 20-year period (2000-2019). A total of 2,588 unique P. aeruginosa strains, 588 from outpatients (23%) and 2,000 from hospitalized subjects (77%), were retrieved. Except for gentamicin and ciprofloxacin, which showed significantly decreasing resistance trends, the antimicrobial agents tested did not show significant changes in either group (p < 0.01). There were significantly increasing resistance trends for all antibiotics, except gentamicin and ciprofloxacin, in P. aeruginosa strains isolated from respiratory tract samples (p < 0.05), and for meropenem and piperacillin-tazobactam in urine samples from subjects with and without urinary catheters (p < 0.05). Moreover, there was a significant increase in multidrug-resistant isolates (p < 0.05). Monitoring antibiotic resistances at the local and regional levels is required in order to reduce inappropriate antimicrobial consumption and to increase the focus on antimicrobial stewardship.

KEYWORDS: Antibiotic resistance, Pseudomonas aeruginosa, Epidemiology, Carbapenem resistance, Surveillance

[...] activity of antibiotics. [4] Therefore, P. aeruginosa has been included by the World Health Organization (WHO) in the priority list of microorganisms for which it is mandatory to research and develop new antibiotics. [4,7] The pathogenicity of P. aeruginosa comprises different features, such as the production of several virulence factors, metabolic versatility, and the formation of biofilms, which are all controlled by the transcriptional, post-transcriptional, and post-translational regulation of numerous systems. [4] The increasing antibiotic resistance trends of P. aeruginosa strains have contributed to a higher mortality rate of infected subjects, longer hospitalizations, and higher treatment costs. [4,6] Only a few works on the antibiotic resistances of P. aeruginosa, particularly over long time periods, have been published. [4][5][6] In this work, we aimed to retrospectively investigate the antimicrobial data of P. aeruginosa strains isolated at the Hospital of Desio, Italy, over a 20-year period, 2000-2019. The antimicrobial resistance trends were assessed to provide clinicians with useful information for prescribing a more appropriate therapy.

Study design and setting
In this retrospective observational study, the antibiotic resistance patterns of P. aeruginosa strains were analyzed. Data were retrieved from the database of the Laboratory of Microbiology of Desio Hospital, Italy, over a 20-year period (from January 1, 2000 to December 31, 2019). In the case of multiple P. aeruginosa isolates from one subject showing the same antibiotic resistance pattern, only the first one was used for the analysis. Specimens presenting multiple isolates other than P. aeruginosa were excluded.

Bacterial isolates and antimicrobial susceptibility testing
The antimicrobial susceptibility of P. aeruginosa isolates was determined by the VITEK ® 1 and 2 systems (bioMérieux, Marcy l'Étoile, France) using Antimicrobial Susceptibility Testing (AST) cards. For this retrospective study, resistances to 13 antibiotics were analyzed, including piperacillin/tazobactam, amikacin, ciprofloxacin, cefepime, ceftazidime, fosfomycin, gentamicin, imipenem, and meropenem.
From 2000 to 2010, the results were interpreted using the criteria recommended by the Clinical & Laboratory Standards Institute (CLSI). [8] From June 2011 to December 2019, results were interpreted using the criteria recommended by the European Committee on Antimicrobial Susceptibility Testing (EUCAST). [9] The identification of bacteria was performed by the VITEK ® 1 and 2 systems and, from 2014, by Vitek ® matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Escherichia coli ATCC ® 8739 was used as a control.

Definition
We defined a P. aeruginosa isolate as Multi-Drug Resistant (MDR) if it exhibited non-susceptibility to at least one agent in three or more antimicrobial categories. Resistant and intermediate-resistant P. aeruginosa isolates were combined, as previously reported. [10]

Statistics
All statistical analyses were performed using Stata (Stata Statistical Software: Release 16). [11] A chi-square test was applied to compare the antimicrobial susceptibilities of inpatient and outpatient isolates over the 20 years, and to determine whether there were statistically significant trends over the study period, which was divided into four intervals of time (2000-2004, 2005-2009, 2010-2014, and 2015-2019). A sketch of these comparisons is given below.
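As an illustration, the MDR definition and the period-wise chi-square comparison can be sketched in Python as follows; the category mapping and resistance counts are hypothetical, and SciPy is assumed to be available (the study itself used Stata).

"""Sketch of the MDR definition and a period-wise chi-square comparison.

Isolate data and counts below are hypothetical placeholders.
"""
from scipy.stats import chi2_contingency

# Antimicrobial categories for a subset of tested agents (illustrative mapping).
CATEGORY = {
    "piperacillin/tazobactam": "penicillins+inhibitor",
    "ceftazidime": "cephalosporins", "cefepime": "cephalosporins",
    "imipenem": "carbapenems", "meropenem": "carbapenems",
    "ciprofloxacin": "fluoroquinolones",
    "gentamicin": "aminoglycosides", "amikacin": "aminoglycosides",
}

def is_mdr(nonsusceptible: set[str]) -> bool:
    """MDR: non-susceptible to >=1 agent in >=3 antimicrobial categories;
    resistant and intermediate isolates are combined as non-susceptible."""
    return len({CATEGORY[a] for a in nonsusceptible}) >= 3

print(is_mdr({"meropenem", "ciprofloxacin", "ceftazidime"}))  # True

# Chi-square test comparing resistance between the first and last study
# periods (hypothetical resistant / susceptible counts for one antibiotic).
table = [[30, 170],   # 2000-2004: resistant, susceptible
         [55, 145]]   # 2015-2019: resistant, susceptible
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")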
RESULTS
We identified a total of 2,588 unique P. aeruginosa strains from positive samples, 588 from outpatients (23%) and 2,000 from hospitalized subjects (77%). The median age of patients was 64 years (interquartile range (IQR): 55-78 years). The majority of isolates were from males (64.4%, compared to 35.6% from females). The most common specimen type from which P. aeruginosa strains were isolated was bronchoalveolar lavage (BAL) (24%, n = 614), followed by urine samples from subjects with a catheter (18%, n = 459), sputum (15%, n = 401), midstream urine (13%, n = 337), skin swabs (11%, n = 288), and ear swabs (8%, n = 200) (Figure 1). Based on these results, we analyzed the relationship between the data of the first period, 2000-2004, and the last period, 2015-2019, with the aim of assessing the antibiotic resistance trends in the most common specimen types positive for P. aeruginosa in hospitalized and non-hospitalized subjects.
Tables 1 and 2 show the antibiotic resistance rates of P. aeruginosa strains isolated from respiratory tract samples, particularly bronchoalveolar lavage (BAL) and sputum, among hospitalized subjects and outpatients. Our data showed statistically significant increasing trends in resistance rates for all antibiotics, both for hospitalized and community-related subjects (p < 0.05), except for gentamicin and ciprofloxacin, which presented decreasing trends in hospitalized subjects (p < 0.05).
Tables 3 and 4 show the antibiotic resistance rates of P. aeruginosa strains isolated from urine specimens, except those from subjects with a urinary catheter, among hospitalized subjects and outpatients. Our data showed statistically significant decreasing trends in resistance rates for all antibiotics in hospitalized subjects (p < 0.05), except for meropenem and piperacillin-tazobactam, which presented increasing trends (p < 0.05). In P. aeruginosa strains isolated from community-related subjects, we observed the same results as in hospitalized patients.
Table 5 shows the antibiotic resistance rates of P. aeruginosa strains isolated from urine specimens of hospitalized subjects with a urinary catheter. Our data showed statistically significant decreasing trends in resistance rates for gentamicin, ceftazidime, and ciprofloxacin (p < 0.05), and increasing trends for cefepime, meropenem, and piperacillin-tazobactam (p < 0.05).

DISCUSSION
The inappropriate prescription and use of antibiotics have promoted the spread of antimicrobial resistance in most bacteria, causing nearly 700,000 deaths every year worldwide. [4] Surveillance studies are of key importance in the identification of changes in bacterial susceptibility patterns and in critically reviewing empiric treatment protocols. The present study draws on one of the largest databases of susceptibility data for P. aeruginosa clinical isolates over a long time period, thus allowing for reliable assessments of the resistance trends. Switching from CLSI to EUCAST criteria, most antimicrobial susceptibility percentages did not change, although a few works reported a decrease in the aminoglycoside susceptibility of P. aeruginosa upon application of the EUCAST guidelines. [12]
P. aeruginosa is intrinsically resistant to several antimicrobials, mainly through a combination of intrinsic, acquired, and adaptive systems, such as low outer membrane permeability, expression of efflux pumps, AmpC overexpression, and biofilm formation. [4] Eight classes of antibiotics are most frequently administered to treat P. aeruginosa infections: penicillins with β-lactamase inhibitors (ticarcillin-clavulanic acid, piperacillin-tazobactam), cephalosporins (ceftazidime, cefepime), carbapenems (imipenem, meropenem, doripenem), fluoroquinolones (ciprofloxacin, levofloxacin), aminoglycosides (gentamicin, amikacin, netilmicin, tobramycin), monobactams (aztreonam), phosphonic acids (fosfomycin), and polymyxins (colistin and polymyxin B). [4]
Over the whole study period, we did not observe significant increasing trends in the antibiotic resistance rates of P. aeruginosa clinical isolates. However, comparing the 2000-2004 and 2015-2019 periods, resistance rates of P. aeruginosa strains isolated from the respiratory tract and urine specimens increased for β-lactams, third-generation cephalosporins, and carbapenems, particularly meropenem, both in community- and hospital-related infections, as previously observed. [4] Conversely, it was important to observe that there were small but significant decreasing trends in resistance rates in the hospitalized population for fluoroquinolones and aminoglycosides, while in outpatients the trends for most antimicrobial agents markedly increased, more likely as a consequence of the different therapies administered. Our data agree with the Italian surveillance report 2015-2019, which described decreasing resistance trends for all the antibiotics used in P. aeruginosa infections, with the greatest non-susceptibility values observed for penicillins with β-lactamase inhibitors, followed by fluoroquinolones, cephalosporins, carbapenems, and aminoglycosides. [13] Moreover, the European surveillance report of antimicrobial resistance for the same period described that the highest EU/EEA resistance percentages were also observed for fluoroquinolones, followed by penicillins with β-lactamase inhibitors, carbapenems, and cephalosporins. [14] In Italy, fluoroquinolones were among the most commonly prescribed antibiotics in 2019, preceded only by β-lactams and macrolides.
[15] In 2018, following the Pharmacovigilance Risk Assessment Committee (PRAC) recommendations, the European Medicines Agency (EMA) suspended the marketing authorization of quinolone-containing medicines, such as cinoxacin, flumequine, nalidixic acid, and pipemidic acid, and restricted the usage of fluoroquinolone-containing antibiotics, such as ciprofloxacin, due to serious, disabling, and potentially permanent side effects. [16] In 2019, Italy implemented these recommendations, and our data confirmed the decreasing resistance trends for fluoroquinolones due to diminished clinical usage in the last period, 2015-2019. On the other hand, as a consequence, the greater administration of β-lactams and cephalosporins increased resistance rates to these drugs, particularly ceftazidime, a third-generation cephalosporin, and piperacillin-tazobactam. These antibiotic resistances are interrelated, since the inducible over-expression of AmpC and efflux pumps, due to the adaptive ability of P. aeruginosa, is responsible not only for the resistance to penicillins and cephalosporins but also to carbapenems, mainly imipenem. Moreover, further specific mutations that induce over-expression of efflux pumps reduce susceptibility to another carbapenem, meropenem. [4]
Carbapenems are very important in human health and are considered the last choice for the treatment of multidrug-resistant Gram-negative bacteria, particularly in the ICU. [4] Carbapenemases are not intrinsically produced by P. aeruginosa but rather are expressed by genes acquired through horizontal gene transfer. [4] Therefore, the presence of carbapenem-resistant P. aeruginosa strains represents a serious health problem. Our data did not show any significant change over the study period, with mean resistance percentages in accordance with the data of the Italian and European surveillance reports. [13,14] However, the increasing carbapenem resistance trends observed in P. aeruginosa strains isolated from non-hospitalized subjects highlight the importance of following national and international guidelines for the prudent use of antimicrobials in human health. [17]
The present study also analyzed the MDR P. aeruginosa strains isolated over the 20 years. A significant increasing trend was observed, as previously reported in other countries. [18][19][20] A recent European survey, including Italy, provided targets for the reduction of unnecessary and inappropriate antibiotic use in human healthcare, to reduce the development and spread of multi-resistant strains. [21] It is noteworthy that, based on ESAC-Net 2018, Italy presented a statistically significant decreasing trend in antimicrobial consumption during the period 2009-2018. [22]
As far as the association between antibiotic resistance rates and hospital wards is concerned, most P. aeruginosa strains were isolated in Medicine and the ICU, where seriously ill patients are subjected to long lengths of stay and often to invasive medical procedures, such as mechanical ventilation, central venous and arterial catheters, and urinary catheterization, which are known to be sources of several infections. [23,24] A strong modulation and adequacy of antimicrobial therapy based on host characteristics, and more attention to the different routes of transmission, which include (I) from the environment to the patient, (II) from colonized patients to the environment, and (III) between patients, are needed.
Our study presents a few limitations that should be considered: (A) the work is retrospective and was performed in a single hospital; (B) the lack of clinical data prevents a more comprehensive representation of resistance trends; and (C) there is no comparative analysis with antibiotic consumption. Conversely, the major strength of our work is the large sample size and long study period on which we based our analyses. We demonstrated that P. aeruginosa resistance rates did not change significantly during the 20 years considered, except for decreased values for fluoroquinolones and aminoglycosides and increased values for carbapenems in strains isolated from outpatients. Therefore, it is important to continuously study and monitor antibiotic non-susceptibilities at the local and regional levels; this is essential in order to reduce antibiotic consumption, to detect alarming resistance mechanisms, and to contribute to new antimicrobial stewardship programs.
Detrimental effects of PCSK9 loss-of-function in the pediatric host response to sepsis are mediated through independent influence on Angiopoietin-1

Background: Sepsis is associated with significant mortality, yet there are no efficacious therapies beyond antibiotics and supportive care. In adult sepsis studies, PCSK9 loss-of-function (LOF) and inhibition has shown therapeutic promise, likely through enhanced low-density lipoprotein receptor (LDLR) mediated endotoxin clearance. In contrast, we previously demonstrated higher mortality in septic juvenile hosts with PCSK9 LOF. In addition to its direct influence on serum lipoprotein levels, PCSK9 likely exerts pleiotropic effects on the vascular endothelium. Both mechanisms may influence sepsis outcomes. We sought to test the influence of PCSK9 LOF genotype on endothelial dysfunction in pediatric sepsis.
Methods: Secondary analyses of a prospective observational cohort of pediatric septic shock. Single nucleotide polymorphisms of the PCSK9 and LDLR genes were assessed. Serum PCSK9, lipoprotein, and endothelial marker concentrations were measured. Multivariable linear regression tested the influence of PCSK9 LOF genotype on endothelial markers, adjusted for age, complicated course, and low- and high-density lipoproteins (LDL and HDL). Causal mediation analyses assessed the impact of select endothelial markers on the association between PCSK9 LOF genotype and mortality. Juvenile Pcsk9 null and wildtype mice were subjected to cecal slurry sepsis, and endothelial markers were quantified.
Results: 474 patients were included. PCSK9 LOF was associated with several markers of endothelial dysfunction, with strengthening of associations after exclusion of patients homozygous for the rs688 LDLR variant that renders it insensitive to PCSK9. Serum PCSK9 levels did not correlate with endothelial dysfunction. PCSK9 LOF significantly influenced concentrations of Angiopoietin-1 (Angpt-1) and Vascular Cell Adhesion Molecule-1 (VCAM-1). However, upon adjusting for LDL and HDL, PCSK9 LOF remained significantly associated with low Angpt-1 alone. Causal mediation analysis demonstrated that the effect of PCSK9 LOF on mortality was partially mediated by Angpt-1 (p = 0.0008). Murine data corroborated these results, with lower Angpt-1 and higher soluble thrombomodulin among knockout mice with sepsis relative to the wildtype.
Conclusions: PCSK9 LOF independently influences serum Angpt-1 levels in pediatric septic shock. Angpt-1 likely contributes mechanistically to the effect of PCSK9 LOF on mortality in juvenile hosts. Mechanistic studies on the role of the PCSK9-LDLR pathway in vascular homeostasis may lead to the development of novel pediatric-specific sepsis therapies.

Introduction
Sepsis is a major pediatric health problem resulting from a dysfunctional host response to an infection, which can further drive multiple organ dysfunction and death. Recent studies suggest that up to 40% of global sepsis cases occurred under the age of 5, with more than 20 million cases reported worldwide in 2017. 1 Moreover, the World Health Organization's first global report on sepsis estimates that it accounts for 20% of all deaths and is the leading cause of under-5 mortality. 2 Further, the economic burden of sepsis is staggering, with more than $7 billion spent on pediatric cases in the U.S. alone. 3 Despite this burden of disease, sepsis care remains limited to early antibiotics and organ support, with no efficacious biological therapies available.
Within the previous decade, Proprotein Convertase Subtilisin/Kexin type 9 (PCSK9) has been recognized to play a critical role in sepsis pathobiology. 4,5 PCSK9 loss-of-function (LOF) or pharmacologic inhibition has been demonstrated to result in increased hepatocyte low-density lipoprotein receptor (LDLR) mediated bacterial and endotoxin clearance. 6,7 Based on these data, ongoing clinical trials will test the efficacy of commercially available PCSK9 inhibitors as novel sepsis therapeutics (NCT03869073 and NCT03634293). More recent observational data among adults and children, however, have shown contradictory results, with both PCSK9 LOF genotype 8,9 and very low serum PCSK9 concentrations 9-11 being associated with equivocal or worse septic shock outcomes. Thus, it is likely that the biology of the PCSK9-LDLR pathway among critically ill patients remains incompletely understood.
Endothelial dysfunction is a key putative mechanism of organ failure in critical illness, including septic shock. 12 PCSK9 was recently shown to have pleiotropic effects on endothelial inflammation, 13,14 in addition to its impact on the bleeding and coagulation cascades. 13,14 It remains unknown whether these are direct effects or are mediated through an influence on circulating lipoprotein profiles, which are also known to modulate endothelial function. 15 A major limitation, however, is that much of the extant literature on the influence of PCSK9 on the endothelium has focused on patients and disease models of dyslipidemia. On the contrary, critical illness is associated with drastic shifts in serum lipoprotein profiles, with very low rather than high concentrations common among adults and children. 16,17 Accordingly, given its respective contribution to sepsis pathobiology and the potential for interaction during systemic inflammation among critically ill patients, we sought to test 1) whether PCSK9 LOF genotype was independently associated with markers of endothelial dysfunction after accounting for serum lipoprotein concentrations and 2) whether these effects had a causal impact on mortality outcomes in a large pediatric cohort of septic shock. Lastly, we sought to corroborate the association between PCSK9 LOF genotype and endothelial markers in a juvenile murine model of sepsis.

Methods
Study design and patient selection: The study protocol was approved by the Institutional Review Boards of the participating institutions. 19,20 Briefly, patients under the age of 18 years were recruited from multiple pediatric ICUs (PICUs) across the U.S. between 2003 and 2019. There were no study-related interventions except for blood draws. Clinical and laboratory data were available for days 1 through 7. Inclusion criteria were 1) patients meeting pediatric-specific consensus criteria for septic shock, 18 and 2) available data on existing PCSK9-LDLR single nucleotide polymorphisms. 9 Patients with both LOF and gain-of-function (GOF) mutations (n = 20) and those missing endothelial marker data (n = 29) were excluded.
Serum PCSK9 concentrations: PCSK9 concentrations were measured in serum samples collected within 24 hours of admission to the PICU (day 1) for septic shock by ELISA (R&D Systems, USA, DPC900) according to the manufacturer's specifications, as previously detailed. 20
Murine sepsis model: Murine studies were approved by the Institutional Animal Care and Use Committee (IACUC). Established colonies of constitutive Pcsk9 null mice with a C57BL/6 genetic background (Jackson Laboratory; Pcsk9−/−, B6;129S6-Pcsk9tm1Jdh/J) and wildtype mice (C57BL/6) were utilized.
Mice were maintained with standard housing, food, and day/night regulation. Juvenile (14-day-old) mixed-sex mice were used for experiments. Cecal slurry (0.8 mg/gram body weight, prepared in D5W solution) was administered via intraperitoneal (I.P.) injection with a 27-gauge needle. Sham animals received I.P. injections of an equal volume of D5W. Animals received neither antibiotics nor fluid resuscitation and were housed with dams. For sample collection, all animals were anesthetized, followed by cervical dislocation, and blood was drawn by terminal cardiac puncture 16 hours after cecal slurry or sham injections, a time point before early sepsis deaths occurred in prior survival studies. Serum was stored at -80°C for molecular assays. We did not have sufficient serum to measure lipoprotein concentrations to test their effect as mediators.
Statistical analyses: Statistical analyses were performed using R software (version 4.2.2). Demographic and clinical data were summarized with percentages or medians with interquartile ranges (Q1 and Q3). Differences between groups were determined by the χ2 test for categorical variables and the Kruskal-Wallis non-parametric test for continuous variables. The relationship between endothelial dysfunction markers and serum PCSK9 concentrations was determined by simple linear regression. Age-related changes and a higher burden of death and multiple organ dysfunction may potentially influence the association between patient genotype and endothelial dysfunction markers. Accordingly, multivariable linear regression models were developed to test the influence of age, complicated course, and PCSK9 LOF genotype on endothelial markers among patients. In addition, we adjusted for low- and high-density lipoprotein (LDL and HDL) concentrations in separate models.
Causal mediation analyses: To assess the causal impact of PCSK9 LOF on mortality via either its canonical effect on LDL cholesterol or via novel endothelial pathways, as marked by Angpt-1 and VCAM-1, we used causal mediation analysis (R package mediation, v4.5.0). 21 Effect sizes were reported as the average causal mediation effect (ACME), the average direct effect (ADE), and the total effect, which is the sum of ACME and ADE. To estimate parameters, bootstrapping with 5000 simulations was used. Significance was declared when two-sided P-values for the ACME were ≤ 0.05. A bootstrap sketch of this decomposition is given below.
Murine endothelial markers: Two-way ANOVA was used to test the influence of genotype (Pcsk9 null vs. wildtype) and condition (sepsis vs. sham), with post-hoc pairwise contrasts corrected for multiple comparisons using the Tukey HSD method. A p-value of < 0.05 was considered statistically significant.
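A minimal Python sketch of the ACME/ADE decomposition follows, using simulated data and linear models for both mediator and outcome; this is a simplification of the R mediation workflow used in the study (where outcomes such as mortality are binary), and numpy/statsmodels are assumed.

"""Bootstrap sketch of the ACME/ADE decomposition with linear models.

For linear mediator and outcome models, ACME equals the product of the
treatment->mediator and mediator->outcome coefficients; ADE is the direct
treatment coefficient. All data below are simulated.
"""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 474
t = rng.integers(0, 2, n)                    # PCSK9 LOF genotype (0/1)
m = -0.5 * t + rng.normal(size=n)            # mediator, e.g. scaled Angpt-1
y = 0.3 * t - 0.4 * m + rng.normal(size=n)   # outcome on a continuous scale

def acme_ade(t, m, y):
    # Mediator model: M ~ T ; outcome model: Y ~ T + M (covariates omitted).
    a = sm.OLS(m, sm.add_constant(t)).fit().params[1]
    fit = sm.OLS(y, sm.add_constant(np.column_stack([t, m]))).fit()
    ade, b = fit.params[1], fit.params[2]
    return a * b, ade

# Nonparametric bootstrap with 5000 resamples, mirroring the study's setup.
boots = np.array([
    acme_ade(t[i], m[i], y[i])
    for i in (rng.integers(0, n, n) for _ in range(5000))
])
acme, ade = acme_ade(t, m, y)
lo, hi = np.percentile(boots[:, 0], [2.5, 97.5])
print(f"ACME {acme:.3f} (95% CI {lo:.3f} to {hi:.3f}); ADE {ade:.3f}; "
      f"total {acme + ade:.3f}")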
Results
A total of 474 patients were included in this study. One hundred and ninety-five patients carried at least one PCSK9 LOF variant. The remaining 279 patients carried either GOF variants or neither LOF nor GOF variants and served as the reference group. Table 1 shows demographic and clinical characteristics comparing patients with PCSK9 LOF variants to those without. A significantly higher proportion of patients who self-identified as having Caucasian ancestry carried LOF variants. There were no differences in baseline illness severity or co-morbidities between groups. As previously detailed, 9 those with PCSK9 LOF variants had significantly higher rates of complicated course, 28-day mortality, and burden of organ failures.
Figure 1 shows the association between PCSK9 LOF genotype and markers of endothelial dysfunction, tested after exclusion of patients homozygous for the rs688 LDLR variant, which renders the LDLR insensitive to PCSK9 signaling. Concentrations of Angpt-1 and Tie-2 were lower, while VCAM-1, sTM, and the ratios of Angpt-2/Angpt-1 and Angpt-2/Tie-2 were higher, among those with the PCSK9 LOF genotype relative to those without. These data are summarized in tabular format in Additional File 1. Results of the multivariable regression analyses testing the influence of PCSK9 LOF genotype on markers of endothelial dysfunction are presented in Table 2. Correcting for patient age and complicated course and excluding patients with the rs688 LDLR variant, the PCSK9 LOF genotype significantly influenced only Angpt-1 and VCAM-1 levels. sTM showed only a trend toward association with the LOF genotype. Serum PCSK9 concentrations, however, did not correlate with any endothelial dysfunction marker, as shown in Additional File 2.
A total of 326 patients had available data on serum LDL and HDL concentrations in addition to genotyping and endothelial marker data. The multivariable models testing the influence of serum LDL and HDL concentrations on the association between PCSK9 LOF genotype and endothelial dysfunction markers are shown in Table 3 and Additional File 3, respectively. Both serum LDL and HDL were independently associated with several endothelial dysfunction markers. However, after adjusting for age, complicated course, HDL, and LDL in separate models, only Angpt-1 remained significantly associated with PCSK9 LOF. Figure 2 shows the association between PCSK9 LOF genotype and concentrations of Angpt-1 and VCAM-1 across the range of serum LDL and HDL. Angpt-1 levels were consistently lower among patients with the LOF genotype irrespective of lipoprotein concentrations. However, VCAM-1 levels increased among patients with the LOF genotype only at low lipoprotein concentrations.
We used causal mediation analysis to determine whether the relationship between PCSK9 LOF and the previously published association with increased mortality in this cohort 9 was a result of the known effects of PCSK9 on serum LDL concentrations or whether it was mediated by a novel endothelial pathway involving Angpt-1 or VCAM-1. We found that in each analysis the direct relationship between PCSK9 LOF and increased mortality persisted, as shown in Table 4. Angpt-1 was found to be a significant mediating variable, contributing over 12% of the effect of PCSK9 LOF on mortality (p = 0.0008). In contrast, neither VCAM-1 nor, surprisingly, LDL levels contributed significantly to the effect of PCSK9 LOF on mortality (p = 0.17 and p = 0.94, respectively).
Figure 3 shows concentrations of endothelial dysfunction markers among experimental groups in the juvenile murine sepsis studies. Unsurprisingly, septic animals had higher endothelial dysfunction relative to sham animals. However, genotype-specific differences in endothelial markers among septic animals were observed only for Angpt-1 and sTM, with lower and higher levels, respectively, noted among Pcsk9 null mice relative to the wildtype. A sketch of the two-way ANOVA used for these comparisons follows.
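For illustration, below is a Python sketch of the two-way ANOVA with Tukey HSD post-hoc contrasts on simulated marker values shaped like the Figure 3 comparisons; pandas and statsmodels are assumed (the study itself used R).

"""Sketch of the two-way ANOVA (genotype x condition) with Tukey HSD.

Marker values are simulated; the real analysis tested Pcsk9 null vs.
wildtype and sepsis vs. sham for each endothelial marker.
"""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "genotype": np.repeat(["wildtype", "pcsk9_null"], 20),
    "condition": np.tile(np.repeat(["sham", "sepsis"], 10), 2),
})
# Simulated Angpt-1: lower with sepsis, lowest in septic knockouts.
effect = {("wildtype", "sham"): 10, ("wildtype", "sepsis"): 7,
          ("pcsk9_null", "sham"): 10, ("pcsk9_null", "sepsis"): 5}
df["angpt1"] = [effect[(g, c)] for g, c in zip(df.genotype, df.condition)]
df["angpt1"] = df["angpt1"] + rng.normal(scale=1.0, size=len(df))

model = smf.ols("angpt1 ~ genotype * condition", data=df).fit()
print(anova_lm(model, typ=2))                  # main effects + interaction

groups = df.genotype + ":" + df.condition
print(pairwise_tukeyhsd(df.angpt1, groups))    # pairwise Tukey HSD contrasts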
Although several endothelial dysfunction markers were associated with the LOF genotype, only Angpt-1 and VCAM-1 were independently associated after adjusting for age and complicated course. Furthermore, while the influence of PCSK9 LOF genotype on VCAM-1 appears to be mediated through indirect effects on serum LDL and HDL, the association with Angpt-1 was independent of changes in serum lipoprotein concentrations. Finally, the effect of PCSK9 LOF genotype on study mortality was not mediated by the canonical effect of patient genotype on LDL cholesterol, but rather by a non-canonical effect on Angpt-1. Our data are strengthened by the observation that the influence of PCSK9 LOF genotype on markers of endothelial dysfunction was more evident after excluding patients homozygous for an LDLR variant that renders it insensitive to PCSK9.

Our data suggest the possible existence of an alternate role for the PCSK9-LDLR pathway that is critical to the host response in critical illness beyond hepatocyte-mediated bacterial and/or endotoxin clearance. We have previously demonstrated that juvenile Pcsk9 null mice, challenged with sepsis, had a trend towards lower lipoprotein concentrations, higher bacterial burden in blood, and lower bacterial burden in the liver relative to the wildtype. Given the observational nature of this study, we were unable to ascertain whether the proclivity for greater endothelial dysfunction in the developing host with PCSK9 LOF genotype is driven by a higher bacterial burden and related endothelial injury, or by a direct effect of the PCSK9-LDLR pathway on the vascular endothelium. The literature on the influence of PCSK9 inhibition on the vascular endothelium suggests both protective and potentially detrimental effects. Studies in macrovascular aortic endothelial cells (ECs) suggest that silencing PCSK9 may result in rescue of endothelial nitric oxide synthase (eNOS) production induced by lipopolysaccharide (LPS) [13]. Interestingly, the opposite was demonstrated in human umbilical vein endothelial cells (HUVECs) [22]. More recently, Leung et al. demonstrated that PCSK9, in a dose-dependent manner through the LDLR, decreases the pro-inflammatory response to LPS in HUVECs [14]. Our observational data demonstrate an association with Angpt-1, a key molecule involved in stabilizing endothelial barrier integrity [23]. This finding warrants further study to elucidate the biological mechanisms at play. Taken together, the PCSK9-LDLR pathway may have a potentially paradoxical effect on vascular homeostasis, which may be highly relevant to the host response among critically ill patients.

The lack of significant correlation of serum PCSK9 with endothelial dysfunction markers is consistent with our previous report, in which we noted only a weak association with the risk of complicated course in children with septic shock. Potential explanations for this discordance between patient genotype and serum protein concentrations with respect to endothelial dysfunction markers include the following: 1) although 90% of circulating PCSK9 is secreted by the liver, another major source of PCSK9 is vascular smooth muscle cells [24]; thus, it is conceivable that the PCSK9 LOF genotype results in lower local levels of PCSK9 essential to endothelial health, which are unmeasurable when sampling patient serum; 2) the PCSK9 LOF genotype may encode for different organ- and tissue-level receptor density of key downstream targets including the LDLR.
It is plausible that such variation may have a more significant effect on organ homeostasis and sepsis survival than serum PCSK9 concentrations during sepsis.

Our study has several limitations, including 1) the observational nature of the study, 2) the potential for linkage disequilibrium, 3) the lack of assessment of dynamic changes in serum lipoproteins, PCSK9, and endothelial dysfunction markers, 4) the potential for unadjusted confounders, and 5) fundamental biological differences with regard to lipoprotein metabolism, such as the lack of cholesteryl ester transfer protein (CETP) in mice. Despite these limitations, our study highlights a novel association that warrants further study with due consideration of the potential for host-developmental age and gene-environment interactions. First, increasing evidence in murine models suggests that downstream targets of PCSK9 involved in intracellular lipid transport (LDLR) and vasculogenesis (Angpt-1) show significant downregulation with increasing age [25]. Accordingly, PCSK9 LOF or pharmacological inhibition may have considerably different effects depending on patient age. Second, adults may have a higher degree of circulating lipoproteins and comorbidities, including dyslipidemia (oxidized HDL and LDL), at baseline. Accordingly, PCSK9 LOF or pharmacological inhibition during sepsis may lead to a significant reduction in these dysfunctional lipids, with consequent beneficial effects on endothelial dysfunction [26]. On the contrary, further lowering of already low HDL and LDL among children may result in a drop below a critical threshold of these lipoproteins, which are essential for the maintenance of vascular health and the clearance of bacteria and endotoxin. Recent results from a pilot trial testing the 'Impact of PCSK9 Inhibition on Clinical Outcome in Patients During the Inflammatory Stage of COVID-19' (IMPACT-SIRIO 5; NCT04941105) demonstrated a survival benefit among adult patients [27]. It is conceivable that such therapies will be trialed in other critically ill cohorts, including sepsis and acute respiratory distress syndrome. Our genetic data indicate that PCSK9 inhibitors may not be biologically appropriate for use among critically ill children. Future mechanistic studies that investigate the PCSK9-LDLR-ANGPT-1 axis in the pediatric host may lead to the development of novel therapies aimed at restoring vascular homeostasis.

Conclusions

We report the independent association between PCSK9 loss-of-function genotype and markers of endothelial dysfunction in a large cohort of critically ill children with septic shock, with corroborative evidence in juvenile murine sepsis. After adjusting for confounders, the PCSK9 LOF genotype was associated with lower Angpt-1 and higher VCAM-1 concentrations. After accounting for LDL and HDL concentrations, only Angpt-1 remained significantly associated with the PCSK9 LOF genotype in pediatric septic shock, with evidence for causal mediation of sepsis mortality. Future mechanistic studies on the role of the PCSK9-LDLR-ANGPT-1 pathway in vascular homeostasis may lead to the development of sepsis therapies specific to children.

Figure 1. Box and whisker plots of median concentrations of serum markers of endothelial dysfunction among pediatric septic shock patients with PCSK9 loss-of-function variants relative to those without. Associations are shown after exclusion of patients homozygous for the rs688 LDLR variant, which renders it insensitive to PCSK9.
2023-02-12T05:15:05.544Z
2023-02-03T00:00:00.000
{ "year": 2023, "sha1": "b5b92e5b21ca48e72d2cffae92d8debb87fc6f20", "oa_license": "CCBY", "oa_url": "https://ccforum.biomedcentral.com/counter/pdf/10.1186/s13054-023-04535-1", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "b5b92e5b21ca48e72d2cffae92d8debb87fc6f20", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
202141580
pes2o/s2orc
v3-fos-license
Green functions and propagation in the Bopp-Podolsky electrodynamics

In this paper, we investigate the so-called Bopp-Podolsky electrodynamics. The Bopp-Podolsky electrodynamics is a prototypical gradient field theory with weak nonlocality in space and time, and a Lorentz and gauge invariant generalization of the Maxwell electrodynamics. We derive the retarded Green functions, the first derivatives of the retarded Green functions, the retarded potentials, the retarded electromagnetic field strengths, the generalized Liénard-Wiechert potentials, and the corresponding electromagnetic field strengths in the framework of the Bopp-Podolsky electrodynamics for three, two and one spatial dimensions. We investigate the behaviour of these electromagnetic fields in the neighbourhood of the light cone. In the Bopp-Podolsky electrodynamics, the retarded Green functions and their first derivatives show fast decreasing oscillations inside the forward light cone.

Introduction

Generalized continuum theories such as gradient theories and nonlocal theories are exciting and challenging research fields in physics, applied mathematics, materials science and engineering science (see, e.g., [1,2,3,4,6,5,7,8,9,10]). Gradient theories and nonlocal theories possess characteristic internal length scales in order to describe size effects. Such generalized continuum theories are able to provide a regularization of the singularities present in classical continuum theories, which are not valid at short distances. Generalized continuum theories are continuum theories valid at small scales.

In physics, an important and useful gradient theory is the so-called Bopp-Podolsky electrodynamics, the gradient theory of electrodynamics containing one length scale parameter, ℓ, the so-called Bopp-Podolsky parameter. Bopp [1] and Podolsky [3] proposed such a gradient theory representing a classical generalization of the Maxwell electrodynamics towards a generalized electrodynamics with linear field equations of fourth order, in order to avoid singularities in the electromagnetic fields and to obtain a finite and positive self-energy of point charges (see also [4,11,12]). Due to its simplicity, the Bopp-Podolsky theory can be considered the prototype of a gradient theory. Therefore, the Bopp-Podolsky electrodynamics represents the simplest physical gradient field theory with weak nonlocality in space and time.

Nowadays there is a renewed interest in the Bopp-Podolsky electrodynamics (e.g., [13,14]), in particular to solve the long-outstanding problem of the electromagnetic self-force of a charged particle present in the classical Maxwell electrodynamics, which goes back to Lorentz, Abraham and Dirac and their attempts to formulate a classical theory of the electron. The equation of motion in the classical theory of the electron, often called the Lorentz-Dirac equation, is of third order in the time-derivative of the particle position, and as a result it shows unphysical behaviour such as run-away solutions and pre-acceleration (see, e.g., the books by Rohrlich [15] and Spohn [16]). Therefore, the classical Maxwell electrodynamics in vacuum does not lead to a consistent equation of motion of charged point particles, and a generalized electrodynamics could solve this problem. In the static case, gradient electrostatics with a generalized Coulomb law was given by Bopp [1], Podolsky [3], and Landé and Thomas [12], and gradient magnetostatics including the generalized Biot-Savart law was given by Lazar [17].
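To make the notion of a generalized Coulomb law concrete, the following display sketches the well-known Bopp-Podolsky potential of a static point charge q; this is a standard result of gradient electrostatics, quoted here for orientation and not in the paper's own equation numbering:

\[
  \varphi(r) = \frac{q}{4\pi\varepsilon_0\, r}\left(1 - e^{-r/\ell}\right),
  \qquad
  \varphi(0) = \frac{q}{4\pi\varepsilon_0\, \ell} .
\]

For r ≫ ℓ the classical Coulomb potential is recovered, while at the position of the charge the potential remains finite; the limit ℓ → 0 reproduces Maxwell electrostatics.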
Such a generalized electrostatics and a generalized magnetostatics have a physical meaning if the classical electric and magnetic fields are recovered in the limit ℓ → 0. In gradient electrostatics, the electric potential of a point charge is finite and non-singular, but the electric field strength is finite and discontinuous at the position of the point charge. The Bopp-Podolsky theory has many interesting features. It solves the problem of infinite self-energy in the electrostatic case, and it gives the correct expression for the self-force of charged particles at short distances, eliminating the singularity when r → 0, as shown by Frenkel [18], Zayats [13], and Gratus et al. [14]. In this manner, the Bopp-Podolsky electrodynamics is free of classical divergences. Using the Bopp-Podolsky electrodynamics, Frenkel [18] solved the so-called 4/3 problem of the electromagnetic mass in the Abraham-Lorentz theory, and Frenkel and Santos [19] eliminated runaway solutions from the Lorentz-Dirac equation of motion. These features allow experiments that could test the generalized electrodynamics as a viable effective field theory (e.g., [20]), and the Bopp-Podolsky electrodynamics offers possibilities for physical modeling at small scales. Iwanenko and Sokolow [11], Kvasnica [21] and Cuzinatto et al. [20] argued that the Bopp-Podolsky length scale parameter ℓ is of the order of ∼ 10^-15 m, that is, a femtometre (fm), or even smaller. From the mathematical point of view, the length parameter ℓ plays the role of the regularization parameter in the Bopp-Podolsky electrodynamics. This length scale is associated to the massive mode, m_BP, of the Bopp-Podolsky electrodynamics through m_BP = ℏ/(cℓ). Moreover, it is interesting to note that the Bopp-Podolsky electrodynamics is the only linear generalization of the Maxwell electrodynamics whose Lagrangian, containing second order derivatives of the electromagnetic gauge potentials, is both Lorentz and U(1)-gauge invariant [22]. The Bopp-Podolsky electrodynamics is akin to the Pauli-Villars regularization procedure used in quantum electrodynamics (see, e.g., [21,23,13,24]). Therefore, the Bopp-Podolsky electrodynamics provides a regularization of the Maxwell electrodynamics based on higher order partial differential equations. On the other hand, Santos [25] analyzed wave propagation in the vacuum of the Bopp-Podolsky electrodynamics, and two kinds of waves were found: the classical non-dispersive wave of the Maxwell electrodynamics, and a dispersive wave reminiscent of wave propagation in a collisionless plasma with plasma (angular) frequency ω_p = c/ℓ, described by a Klein-Gordon equation.

In the Maxwell electrodynamics, quantities like the retarded potentials, the retarded electromagnetic field strengths, the Liénard-Wiechert potentials and the electromagnetic field strengths in the Liénard-Wiechert form are the basic fields and quantities for classical electromagnetic radiation (see, e.g., [26,27,28]). In particular, the Liénard-Wiechert form of the electromagnetic field strengths is important for the calculation of the self-force of a charged point particle. In the Bopp-Podolsky electrodynamics, only a little is known about such fields, which are necessary for describing electromagnetic radiation and radiation reaction in the generalized electrodynamics of Bopp and Podolsky, and about their behaviour on the light cone (see, e.g., [29,13,14]).
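The two wave modes found by Santos [25] can be summarized by their dispersion relations, obtained from a plane-wave ansatz in the wave and Klein-Gordon equations (a sketch for orientation):

\[
  \omega^2 = c^2 k^2 \quad \text{(massless, non-dispersive Maxwell mode)},
  \qquad
  \omega^2 = c^2 k^2 + \omega_p^2 , \quad \omega_p = \frac{c}{\ell} \quad \text{(massive, dispersive Klein-Gordon mode)}.
\]

In the limit ℓ → 0 the plasma frequency ω_p diverges and the massive mode decouples, leaving only the classical Maxwell mode.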
So far, only the three-dimensional generalized Liénard-Wiechert potentials were given by Landé and Thomas [29], and the corresponding three-dimensional electromagnetic fields of a point charge have only recently been given, for the first time, by Gratus et al. [14]. The aim of the present work is to close this gap and to give a systematic derivation and presentation of all important quantities in three, two and one spatial dimensions (3D, 2D, 1D). In particular, this work gives, for the first time, the analytical expressions for the retarded potentials and retarded electromagnetic fields in 2D and 1D, and for the generalized Liénard-Wiechert potentials and corresponding electromagnetic fields of a non-uniformly moving charge in 2D and 1D in the framework of the Bopp-Podolsky electrodynamics. This completes the library of all important field solutions needed in the Bopp-Podolsky electrodynamics in 3D, 2D, and 1D, a necessary step towards a complete study of the Bopp-Podolsky electrodynamics. In particular, we investigate the behaviour of these fields near and on the light cone. The purpose of this paper is to add relevant results on the Green functions, retardation and wave propagation in the Bopp-Podolsky electrodynamics.

In Section 2, we review the basic equations of the Bopp-Podolsky electrodynamics. In Section 3, we give a systematic derivation and collection of the (dynamical) Bopp-Podolsky Green function and its first derivatives in 3D, 2D and 1D in the framework of generalized functions. The retarded potentials and retarded electromagnetic field strengths are given in Section 4 for 3D, 2D and 1D. In Section 5, we present the generalized Liénard-Wiechert potentials and the electromagnetic field strengths in generalized Liénard-Wiechert form. The paper closes with the conclusion in Section 6.

Basic framework of the Bopp-Podolsky electrodynamics

In the Bopp-Podolsky electrodynamics [1,3], the electromagnetic fields are described by the Lagrangian density given in Eq. (1), which corresponds to Bopp's form of the Lagrangian [1]. Here φ and A are the electromagnetic gauge potentials, E is the electric field strength vector, B is the magnetic field strength vector, ρ is the electric charge density, and J is the electric current density vector. ε_0 is the electric constant and μ_0 is the magnetic constant (also called permittivity of vacuum and permeability of vacuum, respectively). The speed of light in vacuum is given by c = 1/√(ε_0 μ_0). Moreover, ℓ is the characteristic length scale parameter in the Bopp-Podolsky electrodynamics, ∂_t denotes differentiation with respect to the time t, and ∇ is the nabla operator. From the mathematical point of view, the characteristic length parameter ℓ plays the role of a regularization parameter in the Bopp-Podolsky theory. In addition to the classical terms, first spatial and time derivatives of the electromagnetic field strengths (E, B), multiplied by the characteristic length ℓ and a characteristic time T = ℓ/c, respectively, appear in Eq. (1); they describe a weak nonlocality in space and time. The limit ℓ → 0 is the limit from the Bopp-Podolsky electrodynamics to the Maxwell electrodynamics.

The electromagnetic field strengths (E, B) can be expressed in terms of the electromagnetic gauge potentials (scalar potential φ, vector potential A) as

E = -∇φ - ∂_t A, (3)
B = ∇ × A. (4)

Due to their mathematical structure, the electromagnetic field strengths (3) and (4) satisfy the two electromagnetic Bianchi identities (or electromagnetic compatibility conditions), ∇ × E + ∂_t B = 0 and ∇ · B = 0, which are known as the homogeneous Maxwell equations.
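Since the paper works with Bopp's (E, B) form of the Lagrangian in Eq. (1), it may help to recall the equivalent covariant (Podolsky) form, which differs from Bopp's form only by boundary terms. This is a sketch: signs depend on the metric-signature convention, and it is not the paper's Eq. (1) verbatim:

\[
  \mathcal{L}_{\mathrm{BP}}
    = -\frac{1}{4\mu_0}\, F_{\mu\nu} F^{\mu\nu}
      + \frac{\ell^2}{2\mu_0}\, \partial_\mu F^{\mu\nu}\, \partial^\lambda F_{\lambda\nu}
      - J^\mu A_\mu ,
  \qquad
  F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .
\]

The ℓ²-term is the weakly nonlocal gradient contribution; setting ℓ = 0 recovers the Maxwell Lagrangian.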
The Euler-Lagrange equations of the Lagrangian (1) with respect to the scalar potential φ and the vector potential A give the electromagnetic field equations

(1 + ℓ²□) ∇ · E = ρ/ε_0, (7)
(1 + ℓ²□) (∇ × B - (1/c²) ∂_t E) = μ_0 J, (8)

respectively. The d'Alembert operator is defined as □ = (1/c²) ∂_t² - Δ, where Δ is the Laplace operator. Eqs. (7) and (8) represent the generalized inhomogeneous Maxwell equations in the Bopp-Podolsky electrodynamics. In addition, the electric current density vector and the electric charge density fulfill the continuity equation ∂_t ρ + ∇ · J = 0 (10). If we use the variational derivative with respect to the electromagnetic fields (E, B), we obtain the constitutive relations in the Bopp-Podolsky electrodynamics for the response quantities (D, H) in vacuum,

D = ε_0 (1 + ℓ²□) E, (11)
H = μ_0^{-1} (1 + ℓ²□) B, (12)

where D is the electric displacement vector (electric excitation) and H is the magnetic excitation vector. The second terms in Eqs. (11) and (12) describe the polarization of the vacuum present in the Bopp-Podolsky electrodynamics. The vacuum in the Bopp-Podolsky electrodynamics is a classical vacuum plus a vacuum polarization that behaves like a plasma-like vacuum [25]. Using the constitutive relations (11) and (12), the Euler-Lagrange equations (7) and (8) can be rewritten in the form of the inhomogeneous Maxwell equations, ∇ · D = ρ (13) and ∇ × H - ∂_t D = J (14). From Eqs. (7) and (8), inhomogeneous Bopp-Podolsky equations, being partial differential equations of fourth order, follow for the electromagnetic field strengths:

(1 + ℓ²□) □ E = -(1/ε_0) ∇ρ - μ_0 ∂_t J, (15)
(1 + ℓ²□) □ B = μ_0 ∇ × J. (16)

Using the generalized Lorentz gauge condition [30],

(1 + ℓ²□) ((1/c²) ∂_t φ + ∇ · A) = 0, (17)

the electromagnetic gauge potentials fulfill the following inhomogeneous Bopp-Podolsky equations:

(1 + ℓ²□) □ φ = ρ/ε_0, (18)
(1 + ℓ²□) □ A = μ_0 J. (19)

Note that the generalized Lorentz gauge condition (17) is as natural in the Bopp-Podolsky electrodynamics as the Lorentz gauge condition is in the Maxwell electrodynamics [30]. As shown by Galvão and Pimentel [30], the usual Lorentz gauge condition, (1/c²) ∂_t φ + ∇ · A = 0, does not satisfy the necessary requirements for a consistent gauge in the Bopp-Podolsky electrodynamics: it does not fix the gauge, it is not preserved by the equations of motion, and it is not attainable. The generalized Lorentz gauge condition is also necessary in the quantization of the Bopp-Podolsky electrodynamics, leading to a generalized quantum electrodynamics [31,32]. Bufalo et al. [31] found that in such a generalized quantum electrodynamics, using the one-loop approximation, the electron self-energy and the vertex function are both ultraviolet finite.

Green function of the Bopp-Podolsky equation

The Bopp-Podolsky electrodynamics is a linear theory with partial differential equations of fourth order. Therefore, the powerful method of Green functions (fundamental solutions) can be used to construct exact analytical solutions. The Green function G_BP of the Bopp-Podolsky equation, which is a partial differential equation of fourth order, is defined by

(1 + ℓ²□) □ G_BP(R, τ) = δ(τ) δ(R), (20)

where τ = t - t′, R = r - r′, and δ is the Dirac δ-function. Therefore, the Green function G_BP is the fundamental solution of the linear hyperbolic differential operator of fourth order, (1 + ℓ²□)□, in the sense of Schwartz distributions (or generalized functions) [33]. Because we are only interested in the retarded Green function, the causality constraint G_BP(R, τ) = 0 for τ < 0 must be fulfilled. As always for hyperbolic operators, the Green function G_BP(R, τ) is the only fundamental solution of the (hyperbolic) Bopp-Podolsky operator with support in the half-space τ ≥ 0 (see, e.g., [34]).
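For orientation, the partial-fraction structure behind Eq. (26) and the resulting three-dimensional retarded Green functions can be sketched as follows; this is a reconstruction from standard results for the wave and Klein-Gordon operators, and prefactors should be checked against the paper's Eqs. (30)-(32):

\[
  \frac{1}{(1+\ell^2\Box)\,\Box}
    = \frac{1}{\Box} - \frac{1}{\Box + 1/\ell^2}
  \quad\Longrightarrow\quad
  G_{\mathrm{BP}} = G - G_{\mathrm{KG}},
\]
\[
  G(R,\tau) = \frac{\delta(\tau - R/c)}{4\pi R},
  \qquad
  G_{\mathrm{KG}}(R,\tau)
    = \frac{\delta(\tau - R/c)}{4\pi R}
      - \frac{c}{4\pi\ell}\, H(c\tau - R)\,
        \frac{J_1\big(\sqrt{c^2\tau^2 - R^2}/\ell\big)}{\sqrt{c^2\tau^2 - R^2}},
\]
\[
  G_{\mathrm{BP}}(R,\tau)
    = \frac{c}{4\pi\ell}\, H(c\tau - R)\,
      \frac{J_1\big(\sqrt{c^2\tau^2 - R^2}/\ell\big)}{\sqrt{c^2\tau^2 - R^2}} .
\]

The δ-terms cancel in the difference, which is consistent with the statement below that the Bopp-Podolsky Green function, unlike the d'Alembert Green function, carries no δ-singularity on the light cone.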
The Bopp-Podolsky equation (20) can be written as an equivalent system of partial differential equations of second order, where G is the Green function of the d'Alembert equation (24) and G_KG is the Green function of the Klein-Gordon equation (25). The Bopp-Podolsky equation (20) can thus be decomposed (see, e.g., [35]), or written in (formal) operator notation using the partial fraction decomposition (26). Using Eq. (26), the Green function of the Bopp-Podolsky equation can be derived by means of the expressions for the Green function of the d'Alembert equation (see, e.g., [36,37,38,39]) and the Green function of the Klein-Gordon equation (see, e.g., [11,39,40]). Therefore, the Bopp-Podolsky field is a superposition of the Maxwell field and the Klein-Gordon field. On the other hand, the Green function of the Bopp-Podolsky equation can be written as the convolution of the Green function of the d'Alembert operator and the Green function of the Klein-Gordon operator, satisfying Eqs. (20), (22) and (23); the symbol ∗ denotes the convolution in space and time. It can be seen in Eq. (28) that the Green function G_KG of the Klein-Gordon operator plays the role of the regularization function in the Bopp-Podolsky electrodynamics, regularizing the Green function G of the d'Alembert operator towards the Green function G_BP of the Bopp-Podolsky operator. On the other hand, the limit of G_BP as ℓ tends to zero is the Green function G of the d'Alembert operator (see Eq. (26)). In this work, we only consider the retarded Green functions, which are zero for τ < 0.

3D Green functions

The three-dimensional Green functions (fundamental solutions) of the wave (d'Alembert) operator (24), the Klein-Gordon operator (25) and the Bopp-Podolsky (Klein-Gordon-d'Alembert) operator are the (generalized) functions given in Eqs. (30)-(32) for τ > 0, where H is the Heaviside step function and J_1 is the Bessel function of the first kind of order one. Eq. (32) is obtained from Eq. (26) using the Green functions (30) and (31). The Green function (32) is in agreement with the expression given earlier in [41,18,19]. On the light cone, cτ = R, the Green function (32) is discontinuous (see Fig. 1a) but takes a finite value there. Furthermore, the Green function (32) shows a decreasing oscillation (see Fig. 1b) and does not have a δ-singularity, unlike the Green function (30). One can say that Eq. (32) describes a wake in a plasma-like vacuum. Due to the Bessel function term J_0, the Green function (41) shows a decreasing oscillation around the classical Green function (39) (see Fig. 3b).

Derivatives of the Bopp-Podolsky Green function

In this subsection, we derive the first time-derivative and first gradient of the Bopp-Podolsky Green function.

3D

The first time-derivative and first gradient of the three-dimensional Bopp-Podolsky Green function (32) for τ > 0 are given in Eqs. (43) and (44), using H′(z) = δ(z), δ(z)f(z) = δ(z)f(0), and (J_1(z)/z)′ = -J_2(z)/z, where J_2 is the Bessel function of the first kind of order two. Thus, Eqs. (43) and (44) consist of two terms, namely a Dirac δ-term on the light cone plus a Bessel function term inside the light cone. The second parts (regular parts) of Eqs. (43) and (44) are discontinuous and show a decreasing oscillation. On the light cone, the derivatives of the Green function G_BP^(3) possess a singularity of Dirac δ-type. This is exhibited by the first term in Eqs. (43) and (44). The second term in Eqs. (43) and (44) is discontinuous on the light cone (see Fig. 1c), since lim_{z→0} J_2(z)/z² = 1/8. In the neighbourhood of the light cone, Eqs. (43) and (44) take a simplified form. It can be seen in Fig.
1d that the second parts (regular parts) of Eqs. (43) and (44) show a decreasing oscillation.

2D

The first time-derivative and first gradient of the two-dimensional Bopp-Podolsky Green function (37) for τ > 0 are given in Eqs. (48) and (49). On the light cone, the derivatives of the Green function G_BP^(2) possess a 1/z-singularity (see Fig. 2c), and they are discontinuous. Of course, the 1/z-singularity is weaker than the non-integrable 1/z³-singularity. In the neighbourhood of the light cone, Eqs. (48) and (49) take a simplified form. Furthermore, Eqs. (48) and (49) show a decreasing oscillation around the classical singularity (see Fig. 2d).

Retarded potentials and retarded electromagnetic field strengths

Solutions based on retarded Green functions lead to retarded fields (such as retarded potentials and retarded electromagnetic field strengths) in the form of retarded integrals. Retarded integrals are mathematical expressions reflecting the phenomenon of "finite signal speed" (see, e.g., [42]).

Retarded potentials

The retarded electromagnetic potentials are the solutions of the inhomogeneous Bopp-Podolsky equations (18) and (19); for zero initial conditions they are given as the convolution of the (retarded) Green function G_BP with the given charge and current densities (ρ, J),

φ = (1/ε_0) G_BP ∗ ρ, (58)
A = μ_0 G_BP ∗ J. (59)

Explicitly, the convolution integrals (58) and (59) read as Eqs. (60) and (61), where r′ is the source point and r is the field point; here n denotes the spatial dimension. Substituting Eqs. (58) and (59) into the generalized Lorentz gauge condition (17) and using Eqs. (22) and (10), it can be seen that the generalized Lorentz gauge condition is satisfied.

3D

Substituting the Bopp-Podolsky Green function (32) into Eqs. (60) and (61), the three-dimensional retarded electromagnetic potentials are obtained as Eqs. (63) and (64), since H(cτ - R) = 0 for t′ > t - R/c. In the Bopp-Podolsky electrodynamics, the three-dimensional retarded potentials (63) and (64) possess an afterglow, since they draw contributions emitted at all times t′ from -∞ up to t - R/c. The retarded time is a result of the finite speed of propagation of electromagnetic signals.

1D

In the version of the Bopp-Podolsky electrodynamics in one spatial dimension, the potentials φ^(1) and A^(1) are both scalar fields, and the current density J is also a scalar field. Substituting the Bopp-Podolsky Green function (41) into Eqs. (60) and (61), the one-dimensional retarded electromagnetic potentials are obtained as Eqs. (67) and (68), since H(cτ - |X|) = 0 for t′ > t - |X|/c. It can be seen that the one-dimensional retarded potentials (67) and (68) draw contributions emitted at all times t′ from -∞ up to t - |X|/c. In the Bopp-Podolsky electrodynamics, the retarded potentials possess an afterglow in 1D, 2D and 3D, since they draw contributions emitted at all times t′ from -∞ up to t - R/c, unlike in the classical Maxwell electrodynamics, where only the retarded potentials in 1D and 2D possess an afterglow (see, e.g., [36,38]).

Retarded electromagnetic field strengths

Substituting Eqs. (58) and (59) into the electromagnetic fields (3) and (4), or solving Eqs. (15) and (16), the electromagnetic fields (E, B) are given by the convolution of the Green function G_BP with the given charge and current densities (ρ, J), as written in Eqs. (69) and (70).

3D

Substituting the derivatives of the Bopp-Podolsky Green function (43) and (44) into Eqs. (69) and (70), the three-dimensional retarded electromagnetic field strengths are obtained as Eqs. (71) and (72). In the first part of Eqs. (71) and (72), the δ-function in Eqs.
(43) and (44) picked out the value of ρ and J at the retarded time, t - R/c, which is earlier than t by the time it takes a signal with speed c to travel from the source point r′ to the field point r. The second parts of Eqs. (71) and (72) stem from the regular (Bessel function) parts of Eqs. (43) and (44), and they draw contributions emitted at all times t′ from -∞ up to t - R/c. It can be seen that Eqs. (71) and (72) have some similarities to, but also differences from, the so-called Jefimenko equations of Maxwell's electrodynamics [42] (see also [43]). The differences arise from the appearance of the Bopp-Podolsky Green function (32) in the Bopp-Podolsky electrodynamics instead of the Green function of the d'Alembert operator (30) in the Maxwell electrodynamics.

2D

In two-dimensional electrodynamics, the magnetic field strength is a scalar field, B^(2) = ∇ × A^(2) = ϵ_ij ∂_i A_j, where ϵ_ij is the two-dimensional Levi-Civita tensor, and the electric field strength E^(2) = (E_x, E_y) is a two-dimensional vector field (see, e.g., [44]). Substituting the derivatives of the Bopp-Podolsky Green function (48) and (49) into Eqs. (69) and (70), the two-dimensional retarded electromagnetic field strengths become Eqs. (73) and (74), where R × J = ϵ_ij R_i J_j. The two-dimensional retarded electromagnetic field strengths (73) and (74) show an afterglow, since they draw contributions emitted at all times t′ from -∞ up to t - R/c.

1D

This version of the Bopp-Podolsky electrodynamics in one spatial dimension has a scalar electric field and no magnetic field (see, e.g., [45] for classical electrodynamics in one spatial dimension). Substituting the derivatives of the Bopp-Podolsky Green function (54) and (55) into Eqs. (69) and (70), the one-dimensional retarded electromagnetic field strengths are obtained as Eqs. (75) and (76). The one-dimensional retarded electric field strength (75) possesses an afterglow, because it draws contributions emitted at all times t′ from -∞ up to t - |X|/c.

Generalized Liénard-Wiechert potentials

For a non-uniformly moving point charge with trajectory s(t), the charge and current densities are given by Eq. (77).

3D

Substituting Eq. (77) into Eqs. (63) and (64) and performing the spatial integration, the three-dimensional generalized Liénard-Wiechert potentials are obtained as Eqs. (78) and (79), where R(t′) = r - s(t′) and the retarded time t_R is the root of the equation c(t - t_R) = R(t_R) (Eq. (80)).

2D

Substituting Eq. (77) into Eqs. (65) and (66) and performing the spatial integration, the two-dimensional generalized Liénard-Wiechert potentials become Eqs. (81) and (82), where R(t′) = r - s(t′) and the retarded time t_R is the root of the equation c(t - t_R) = R(t_R) (Eq. (83)). It can be seen that the two-dimensional generalized Liénard-Wiechert potentials (81) and (82) draw contributions emitted at all times t′ from -∞ up to t_R.

1D

Substituting Eq. (77) into Eqs. (67) and (68), the spatial integration can be performed to give the one-dimensional generalized Liénard-Wiechert potentials (84) and (85), where X(t′) = x - s(t′) and t_R is the retarded time, the root of the equation c(t - t_R) = |X(t_R)| (Eq. (86)). Also the one-dimensional generalized Liénard-Wiechert potentials (84) and (85) draw contributions emitted at all times t′ from -∞ up to t_R.

Electromagnetic field strengths in generalized Liénard-Wiechert form

3D

Substituting Eq. (77) into Eqs. (71) and (72) and performing the spatial integration (see, e.g., [46,47,43]), the three-dimensional electromagnetic fields in the generalized Liénard-Wiechert form are obtained as Eqs. (87) and (88), with the abbreviation P(t′) defined in Eq. (89). In the first part of Eqs. (87) and (88), the expression inside the brackets has to be taken at the retarded time t′ = t_R, which is the unique solution of Eq. (80). The second part of Eqs. (87) and (88) draws contributions emitted at all times t′ from -∞ up to the retarded time t_R. Note that the term R(t′)/P(t′) in the first part of Eqs. (87) and (88) possesses a (directional) discontinuity (see also [14]).

2D

Substituting Eq. (77) into Eqs.
(73) and (74) and performing the spatial integration, the two-dimensional electromagnetic fields in the generalized Liénard-Wiechert form become Eqs. (90) and (91). It can be seen that the two-dimensional electromagnetic fields (90) and (91) draw contributions emitted at all times t′ from -∞ up to t_R, the unique solution of Eq. (83).

1D

Substituting Eq. (77) into Eqs. (75) and (76), the spatial integration can be performed to give the one-dimensional electromagnetic fields in the generalized Liénard-Wiechert form, Eq. (92). Thus, the one-dimensional electric field (92) draws contributions emitted at all times t′ from -∞ up to t_R, which is the unique solution of Eq. (86).

Conclusion

We have investigated the Bopp-Podolsky electrodynamics as the prototype of a dynamical gradient theory with weak nonlocality in space and time. The retarded potentials, retarded electromagnetic field strengths, generalized Liénard-Wiechert potentials and electromagnetic field strengths in generalized Liénard-Wiechert form have been calculated for 3D, 2D and 1D, and they depend on the entire history from -∞ up to the retarded time t_R. The Bopp-Podolsky field is a superposition of the Maxwell field, describing a massless photon, and the Klein-Gordon field, describing a massive one. In particular, the Klein-Gordon part of the Bopp-Podolsky field gives rise to a decreasing oscillation around the classical Maxwell field. The Green function of the Bopp-Podolsky electrodynamics and its first derivatives have been calculated and studied in the neighbourhood of the light cone (see Table 1). It turned out that the Bopp-Podolsky Green function is the regularization of the Green function of the d'Alembert operator, corresponding to the simplest case of the Pauli-Villars regularization with a single "auxiliary mass" proportional to 1/ℓ. The Green function of the Klein-Gordon operator plays the mathematical role of the regularization function in the Bopp-Podolsky electrodynamics. Moreover, the retarded Bopp-Podolsky Green function and its first derivatives show decreasing oscillations inside the forward light cone. The behaviour of the electromagnetic potentials and electromagnetic field strengths on the light cone follows from the behaviour of the Green function and its first derivatives in the neighbourhood of the light cone. Only in 1D is the electric field strength of the Bopp-Podolsky electrodynamics singularity-free on the light cone. In 2D and 3D, the electromagnetic field strengths in the Bopp-Podolsky electrodynamics possess weaker singularities than the classical singularities of the electromagnetic field strengths in the Maxwell electrodynamics. In order to regularize the 2D and 3D electromagnetic field strengths in the Bopp-Podolsky electrodynamics towards singularity-free fields on the light cone, generalized electrodynamics of higher order might be used.

Table 1: Behaviour of the Green function of the Bopp-Podolsky electrodynamics and its first derivatives on the light cone.

Spatial dimension | Green function G_BP | First derivatives of G_BP
3D | finite and discontinuous | singular and discontinuous
2D | approaching zero | singular and discontinuous
1D | approaching zero | finite and discontinuous
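As a consistency check on the 3D row of Table 1, the light-cone limit of the Bessel-function form of the three-dimensional Green function sketched earlier can be evaluated with the small-argument behaviour J_1(z) ≈ z/2 (a sketch under the reconstruction given above, not the paper's own equation):

\[
  z = \frac{\sqrt{c^2\tau^2 - R^2}}{\ell} \to 0^{+}:
  \qquad
  G_{\mathrm{BP}}(R,\tau) = \frac{c}{4\pi\ell^2}\, \frac{J_1(z)}{z}
  \;\longrightarrow\; \frac{c}{8\pi\ell^2},
\]

a finite value; the discontinuity arises because G_BP vanishes identically outside the light cone.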
2019-09-10T09:09:33.853Z
2019-11-01T00:00:00.000
{ "year": 2020, "sha1": "0033c62384f1b7a3b4fed3a9f18a357329cfb03c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2005.02874", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1872a4cd2c622ebff8be0f2e699ac7a3dcd4e43a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
19673466
pes2o/s2orc
v3-fos-license
Complex Formation between Glutamyl-tRNA Reductase and Glutamate-1-semialdehyde 2,1-Aminomutase in Escherichia coli during the Initial Reactions of Porphyrin Biosynthesis*

In Escherichia coli the first common precursor of all tetrapyrroles, 5-aminolevulinic acid, is synthesized from glutamyl-tRNA (Glu-tRNAGlu) in a two-step reaction catalyzed by glutamyl-tRNA reductase (GluTR) and glutamate-1-semialdehyde 2,1-aminomutase (GSA-AM). To protect the highly reactive reaction intermediate glutamate-1-semialdehyde (GSA), a tight complex between these two enzymes was proposed based on their solved crystal structures. The existence of this hypothetical complex was verified by two independent biochemical techniques. Co-immunoprecipitation experiments using antibodies directed against E. coli GluTR and GSA-AM demonstrated the physical interaction of both enzymes in E. coli cell-free extracts and between the recombinant purified enzymes. Additionally, the formation of a GluTR·GSA-AM complex was identified by gel permeation chromatography. Complex formation was found to be independent of Glu-tRNAGlu and cofactors. The analysis of a GluTR mutant truncated in the 80-amino acid C-terminal dimerization domain (GluTR-A338Stop) revealed the importance of GluTR dimerization for complex formation. The in silico model of the E. coli GluTR·GSA-AM complex suggested direct metabolic channeling between both enzymes to protect the reactive aldehyde species GSA. In accordance with this proposal, side product formation catalyzed by GluTR was observed via high performance liquid chromatography analysis in the absence of the GluTR·GSA-AM complex.

Recently the catalytic mechanism of GluTR and its structural basis have been elucidated in a combined biochemical and structural investigation using the recombinant enzyme from the extremely thermophilic archaeon Methanopyrus kandleri (4, 5). The crystal structure of GluTR reveals an unusual extended V-shaped dimer, with each monomer consisting of three distinct domains arranged along a curved "spinal" α-helix.
The N-terminal catalytic domain specifically recognizes the glutamate moiety of the substrate. The active site was identified by cocrystallization with the competitive inhibitor glutamycin, representing the 3′-terminal end of the natural substrate. During catalysis, a nucleophilic cysteine residue attacks the aminoacyl linkage between the glutamate and its cognate tRNA and generates an enzyme-bound thioester intermediate. This intermediate has been biochemically trapped and visualized. For this purpose, a new purification strategy for the Escherichia coli enzyme has been developed (6, 7). The thioester intermediate is finally reduced by direct hydride transfer from NADPH to form GSA and to release tRNA Glu. The nucleotide cofactor is supplied by the second distinct domain, the NADPH-binding domain. An additional C-terminal domain of GluTR is responsible for the dimerization of the unusual V-shaped molecule. Structure-based alignments of amino acid sequences from different sources have revealed a high degree of sequence identity (8). These findings, along with the biochemical data for the E. coli enzyme, indicate that the M. kandleri enzyme can be regarded as a model system representing all GluTR enzymes.

ALA synthesis requires the concerted action of the two enzymes GluTR and GSA-AM, which are metabolically linked by the highly reactive aldehyde GSA. A half-life of less than 4 min was determined for GSA at physiological pH in aqueous solution (9). Based on the three-dimensional structures of GluTR from M. kandleri and GSA-AM from Synechococcus sp. (10), a hypothetical model ensuring efficient ALA synthesis was proposed (5). We realized that the open space delimited by the GluTR monomers is remarkably similar to the volume occupied by GSA-AM. In silico, the dimeric GSA-AM was placed into the open space of the V-shaped GluTR dimer. Both enzymes were docked along their 2-fold symmetry axes, leading to a model complex with a high degree of surface complementarity (in Fig. 1 the analogous E. coli model complex is shown). Independently, tRNA Glu was docked in a single plausible position on the GluTR protein. The resulting combined model of the ternary complex (GluTR, tRNA, and GSA-AM) did not lead to steric clashes. Additional strong evidence for the model complex came from the observation that the putative active site entrance of each GSA-AM monomer is positioned opposite a partly opened depression of the catalytic domain of GluTR. This depression and the GluTR active site pocket are separated from each other only by the conserved arginine 50 (M. kandleri GluTR numbering). In our current hypothesis, the GluTR product GSA leaves the enzyme via this "back door" of the GluTR active site pocket and subsequently enters the active site of GSA-AM. In this way, direct channeling of labile GSA to the active site of GSA-AM without exposure to the aqueous environment is possible.
Here we provide the first experimental evidence for the GluTR·GSA-AM complex using two independent biochemical techniques. Additionally, we created the homologous E. coli model complex in an in silico experiment (Fig. 1), supporting our results.

EXPERIMENTAL PROCEDURES

Overexpression and Purification of E. coli GluTR-Details of the recombinant production, refolding, and purification of recombinant E. coli GluTR have been published elsewhere (7).

Construction of the Gene for the GluTR Mutant A338Stop by Site-directed Mutagenesis-A deletion mutant lacking the dimerization domain (GluTR-A338Stop) was generated using the plasmid pBKCwt (6) and the QuikChange™ kit (Stratagene, La Jolla, CA) according to the manufacturer's instructions. The following oligonucleotide was employed to introduce a stop codon into the E. coli GluTR sequence: 5′-GCGTGGCTGCGATAACAAAGCGCCAGCGAAAC-3′ (stop codon underlined).

Purification and Characterization of the E. coli GluTR Deletion Mutant A338Stop-Purification and refolding of the truncated protein were performed in analogy to the wild type enzyme (7). The yield of refolded protein was 4 mg from 0.6 g of inclusion bodies. In the final concentrated fraction, a single protein band was visible on an SDS-polyacrylamide gel after Coomassie Blue staining. The calculated molecular mass of the mutant enzyme deduced from the gene sequence (40,112 Da) was experimentally confirmed by electrospray ionization mass spectrometry (4) to be 40,110 ± 5 Da (data not shown). Edman degradation revealed an N-terminal amino acid sequence (first 15 amino acids) identical to that derived from the cloned gene sequence. N-terminal protein sequencing was performed using an Applied Biosystems 454 sequencer (Applied Biosystems). To provide further evidence for the proper refolding of GluTR-A338Stop, circular dichroism (CD) spectroscopy was employed. CD spectroscopy was carried out on a Jasco J-810 spectropolarimeter (Jasco, Gross-Umstadt, Germany). Protein solutions of GluTR and GluTR-A338Stop were dialyzed against 20 mM Tris-HCl, pH 8.0, containing 10 mM NaCl. The concentration of the protein solution was 500 μg/ml. In quartz cuvettes of 1-mm path length, CD spectra over a range of 190-250 nm were recorded at room temperature as an average of 5 scans. Thermal unfolding was carried out in the range of 20-100°C at a rate of 1°C/min. The molar ellipticity was measured at 220 nm every 2°C. Samples were allowed to equilibrate for at least 20 min at 20°C before starting each scan. Both GluTRwt and GluTR-A338Stop were used for the following measurements. Using CD spectroscopy in the far-UV spectral region, we detected no significant differences compared with the wild type enzyme. Thermal unfolding experiments followed by CD spectroscopy indicated a comparable temperature range (80-85°C) for the unfolding of the mutant protein (data not shown). These data demonstrate that the C-terminal domain of GluTR comprising residues 338-418 is not essential for the proper folding of the catalytic and NADPH-binding domains. For the GluTR-A338Stop mutant, a residual activity of <5% compared with the wild type enzyme was detected using the standard GluTR depletion assay (4).

Large Scale Overproduction, Purification, and Characterization of E. coli GSA-AM-E. coli BL21 (DE3) carrying pLIpopC (11) was cultivated in 1 liter of LB medium containing 100 μg/ml ampicillin to an A578 nm of 0.6.
After the addition of another 100 μg/ml ampicillin and 400 μM isopropyl-β-D-thiogalactopyranoside, cells were grown for an additional 3 h and harvested by centrifugation. The bacterial cell pellet (4 g) was resuspended in 20 ml of 100 mM PIPES-NaOH, pH 6.8, containing 5 mM DTT and 1 mM EDTA (buffer A). Cells were disrupted by sonication, and cell debris was removed by centrifugation for 45 min at 50,000 × g at 4°C. The supernatant was loaded onto a 25-ml DEAE-Sepharose Fast Flow column (XK 16 column, Amersham Biosciences, Freiburg, Germany) equilibrated with buffer A. After washing the column with 2 column volumes of buffer A, proteins were eluted with a linear gradient of 5 column volumes ranging from 0 to 1 M NaCl in buffer A. Fractions containing GSA-AM were pooled (~200 mg of total protein) and dialyzed against 20 mM HEPES-NaOH, pH 7.9, 10 mM NaCl, 5 mM DTT (buffer B) at 4°C. This solution (40 ml) was subsequently loaded onto a MonoQ HR 10/10 column (Amersham Biosciences) equilibrated with buffer B. The column was washed with 2 column volumes of buffer B. Bound proteins were eluted using a linear gradient (150 ml) from 10 mM to 1 M NaCl in buffer B. GSA-AM-containing fractions were pooled and concentrated by ultrafiltration using a Vivaspin-15 centrifugal concentrator with a molecular weight cut-off of 10,000 (Vivascience, Hannover, Germany). A final volume of 2 ml with a protein concentration of 45 mg/ml was chromatographed on a Superdex 75 prep grade, HiLoad 26/60 gel filtration column (Amersham Biosciences), equilibrated previously with 20 mM HEPES-NaOH, pH 7.9, 100 mM NaCl, 10 mM DTT, at a flow rate of 2.0 ml/min. Fractions containing GSA-AM were pooled and concentrated to 30 mg/ml (Vivaspin-15 concentrator, molecular weight cut-off 30,000). The newly established purification procedure for E. coli GSA-AM yielded ~50 mg of protein per liter of bacterial culture, purified to apparent homogeneity as judged by SDS-PAGE. The integrity of the enzyme preparation was experimentally verified by electrospray ionization mass spectrometry and by N-terminal protein sequencing as described above (data not shown). Analytical gel filtration chromatography, performed as described previously (12), resulted in a single, well resolved peak. The cofactor absorption spectrum showed a peak at 330 nm and another peak at 430 nm, as described previously (3). The CD spectrum of E. coli GSA-AM was comparable with that of the Synechococcus enzyme (10).

Co-immunoprecipitation Experiments Using Cell-free E. coli Extracts-Polyclonal rabbit antibodies against recombinant E. coli GluTR and GSA-AM were generated by Eurogentec (Seraing, Belgium). For co-immunoprecipitation using cell-free extracts, a total of 0.5 g of aerobically grown E. coli BL21 (DE3) cells were harvested in the early exponential phase. The bacterial cell pellet was resuspended in 5 ml of Tris-HCl buffer, pH 8.0, containing 150 mM NaCl, 10 mM DTT, and 0.5% (v/v) of the detergent Nonidet P-40 (lysis buffer). Cells were disrupted by sonication, and cell debris was removed by centrifugation for 30 min at 40,000 × g at 4°C. From the supernatant, 300 μl were incubated with 2.5 μl of anti-GluTR (5 mg/ml) or anti-GSA-AM serum (8 mg/ml), respectively, by gentle shaking for 90 min at 4°C. After the addition of 15 μl of a 1:1 slurry of protein A-Sepharose CL-4B (Amersham Biosciences) in lysis buffer, the mixture was further incubated for 90 min at 4°C.
The immunoadsorbent was recovered by centrifugation for 5 min at 500 × g and washed three times by resuspension in Tris-HCl buffer, pH 8.0, containing 150 mM NaCl, 1 mM EDTA, and 0.5% (v/v) Nonidet P-40, followed by centrifugation for 5 min at 500 × g. The samples were eluted into 20 μl of SDS loading buffer (Sigma).

Immunoblot Analysis-Protein samples eluted from protein A-Sepharose were heated for 5 min at 99°C and subjected to 9% SDS-PAGE using standard techniques (13). The electrophoretically separated proteins were transferred onto polyvinylidene difluoride membranes using a Trans-Blot apparatus (semi-dry transfer cell, Bio-Rad) according to the manufacturer's instructions. The membrane was first incubated with anti-GSA-AM or anti-GluTR rabbit antibodies (1:30,000 in phosphate-buffered saline (13) with 3% bovine serum albumin), washed three times with phosphate-buffered saline, and then incubated with alkaline phosphatase-conjugated sheep anti-rabbit antibodies (1:20,000 in phosphate-buffered saline with 3% bovine serum albumin) from Pierce (Bonn, Germany). The detection of immunoreactive bands was performed using the nitro blue tetrazolium/5-bromo-4-chloro-3-indolyl phosphate color developing method from Promega (Mannheim, Germany).

FIG. 1. Model complex of glutamyl-tRNA reductase (dark gray) and glutamate-1-semialdehyde 1,2-aminomutase (light gray) from E. coli. The docking model was generated based on the x-ray structures of GluTR from M. kandleri (10) and GSA-AM from Synechococcus (15).

Co-immunoprecipitation Experiments Using Purified Recombinant E. coli GluTR and GSA-AM-For co-immunoprecipitation using purified enzymes, 1 μM wild type GluTR or GluTR deletion mutant A338Stop, respectively, and 1 μM GSA-AM were analyzed in 50 mM HEPES-NaOH, pH 8.0, 150 mM NaCl, 10 mM MgCl2, 5 mM DTT, 10% (v/v) glycerol, 0.1% (w/v) bovine serum albumin, and 0.05% (v/v) Tween 20 (assay buffer) containing 500 μM L-glutamate, 4 mM ATP, 2 mM NADPH, 500 μM PLP, 1 μM E. coli glutamyl-tRNA synthetase (GluRS), and 20 μM E. coli tRNA preparation containing ~37% tRNA Glu acceptor activity, prepared as described elsewhere (14). 100 μl of this assay mixture were incubated for 10 min at 4°C or, alternatively, for 2 min at 37°C. After dilution with 300 μl of assay buffer, 1 μl of anti-GluTR (5 mg/ml) or anti-GSA-AM serum (8 mg/ml), respectively, was added to the assay mixture, and incubation was continued for 30 min at 4°C. Coprecipitation and immunodetection were then performed analogously to the experiments using cell-free extracts, as described above.

Analysis of GluTR·GSA-AM Interaction by Gel Filtration according to Hummel and Dreyer-The Hummel and Dreyer method (15) was employed. Reaction products were analyzed via HPLC on a Waters μBondapak™ C18 reversed-phase column (3.9 × 150 mm, 125 Å pore size, 10 μm particle diameter) as described previously (4). The ratio of reaction products was estimated by peak integration. For coupled enzyme assays, the GluTR standard assay described above additionally contained 8 μM E. coli GSA-AM. After incubation for 8 min, HPLC analysis revealed the ratio of reaction products. Reactions without GluTR for the different types of enzyme assay served as background controls.

Structure Modeling-The sequences of the E. coli enzymes under study were obtained from the PubMed database. The coordinates for the M. kandleri GluTR (5) were obtained from the Protein Data Bank under ID code 1GPJ, and the coordinates for the Synechococcus GSA-AM (10) under ID code 2GSA. Sequence alignments between GluTR from M.
kandleri and E. coli, as well as for GSA-AM from Synechococcus and E. coli, were carried out using the program ClustalW (18). The modeled E. coli structures were generated using the program BRAGI (19). The model complex was created by placing GSA-AM in the open space delimited by GluTR and docking the two enzymes along their 2-fold symmetry axes, as described previously (5). The image was generated using PyMOL (20).

RESULTS

E. coli GluTR and GSA-AM Form a Complex in Cell-free Extracts-In silico experiments suggested a complex between GluTR and GSA-AM to protect the labile GSA from hazardous exposure to the aqueous environment. To analyze for the presence of the proposed complex, co-immunoprecipitation experiments were conducted. For this purpose, rabbit anti-GluTR and anti-GSA-AM antibodies were generated. The employed strategy involved the recognition of one protein with the specific antibody, immobilization of the antibody-antigen complex on protein A-Sepharose, and its isolation via centrifugation and washing. In the case of a co-immunoprecipitated interaction partner, this protein was visualized in Western blot experiments using a second antibody directed against it. Co-immunoprecipitation experiments with anti-GluTR and anti-GSA-AM antibodies were performed first with cell-free extracts prepared from wild type E. coli BL21 (DE3) cultures harvested in the early exponential growth phase. These cells contained the natural amounts of both enzymes because neither of the corresponding genes was overexpressed. Both of the complementary co-immunoprecipitation experiments resulted in the precipitation of the postulated GluTR·GSA-AM complex. Significant amounts of complexed protein were detected with the corresponding anti-GSA-AM antibody (Fig. 2A, lane 2) and anti-GluTR antibody, respectively (Fig. 2B, lane 2). In the supernatant of the co-precipitates, only residual amounts of the interacting protein partner were detected by Western blot analyses (data not shown). In a control experiment, E. coli strain EV61, which carries a disrupted gene for GluTR, was cultivated (21), and a cytosolic extract was prepared analogously to that of E. coli BL21 (DE3). No immunoprecipitation of GluTR or of GSA-AM by anti-GluTR antibodies was observed (data not shown). In agreement, neither of the pre-immune sera taken prior to the immunization of the rabbits reacted with E. coli GluTR or GSA-AM, respectively (Fig. 2, A and B, lane 1). Because of the known low cellular concentration of both enzymes, only highly concentrated extracts (20-40 mg/ml protein) gave clear co-immunoprecipitation results. Interestingly, cell-free extracts prepared from stationary phase-grown E. coli did not contain detectable amounts of the GluTR·GSA-AM complex (data not shown). Possibly because of lower heme requirements in the stationary phase, heme-induced GluTR proteolysis decreased cellular GluTR concentrations (22). Clearly, a stable GluTR·GSA-AM complex detectable via co-immunoprecipitation is present in E. coli cell-free extracts.

Complex Formation between Purified E. coli GluTR and GSA-AM Is Glutamyl-tRNA- and Cofactor-independent-To further study the prerequisites for the observed interaction between E. coli GluTR and GSA-AM, co-immunoprecipitation experiments were performed using recombinant purified enzymes at protein concentrations of 1 μM. Incubation of the assay mixture prior to immunoprecipitation was carried out in the presence and absence of Glu-tRNA Glu and catalytically important cofactors such as NADPH and PLP, at both 4°C and 37°C.
At both preincubation temperatures, the GluTR·GSA-AM complex was precipitated from the assay mixture independently of the addition of the substrate Glu-tRNA Glu and the NADPH and PLP cofactors. Identical results were obtained using either anti-GluTR or anti-GSA-AM antibodies for the precipitation. The data are shown for anti-GSA-AM antibodies in Fig. 3A, lane 3. The pre-immune serum control is shown in Fig. 3A, lane 1. From these results we conclude that complex formation at the employed protein concentrations is not dependent on the flow of metabolites through the GluTR·GSA-AM complex.

GluTR Dimerization Enhances GluTR·GSA-AM Complex Formation-To study the role of the C-terminal domain in complex formation, in vitro co-immunoprecipitation experiments were performed using the GluTR-A338Stop mutant lacking the dimerization domain. Gel filtration chromatography indicated a native relative molecular mass of 39,000 ± 3,000 Da for the truncated protein. Based on these results, it was concluded that the GluTR-A338Stop mutant (40,112 Da calculated molecular mass) is a globular, monomeric, two-domain GluTR variant. The function of the 80 C-terminal residues (representing 19% of the overall GluTR sequence) is the formation of the dimerization domain responsible for the V-shaped quaternary structure of the wild type GluTR. An intact dimerization domain is also important for the enzymatic activity of E. coli GluTR, as indicated by the only 5% residual activity of the mutant enzyme compared with the wild type enzyme. The monomeric enzyme was tested for complex formation at a concentration of 1 μM in the presence or absence of Glu-tRNA Glu and associated cofactors. In comparison with the dimeric wild type enzyme, the amount of precipitated enzyme was significantly reduced, but there was still a detectable interaction (Fig. 3B). The amount of wild type GluTR co-immunoprecipitated in the GluTR·GSA-AM complex was ~50% of the GluTR precipitated using anti-GluTR antibodies (Fig. 3A, compare lanes 2 and 3). However, only about 20% of complex-bound GluTR-A338Stop was detected compared with freely precipitated GluTR-A338Stop (Fig. 3B, compare lanes 2 and 3). These experiments suggested an important role of the C-terminal dimerization domain in facilitating complex formation. From these data we conclude that the dimeric V-shaped GluTR structure is an important prerequisite for the interaction of both enzymes. Nevertheless, complex formation was not solely dependent on those C-terminal residues but also required the rest of the GluTR molecule, as indicated by the small amount of co-precipitate observed using GluTR-A338Stop.

Gel Filtration Analysis of the GluTR·GSA-AM Complex-With the newly established purification procedure for E. coli GSA-AM, sufficient quantities of highly pure enzyme were obtained to apply a second, independent biochemical method, Hummel-Dreyer gel filtration chromatography, for the analysis of complex formation (15). On the basis of our results from the in vitro co-immunoprecipitation experiments, we decided to investigate complex formation at room temperature in the absence of substrate and cofactors. A gel filtration column was calibrated for the elution positions of GluTR (1.54 ml) and GSA-AM (1.72 ml), respectively. The same gel filtration column was then equilibrated with buffer containing 10 μM GSA-AM until a stable absorption at 280 nm was observed. Subsequently, a sample containing 10 μM GluTR in addition to the 10 μM GSA-AM was injected. The elution profile followed at 280 nm showed, in addition to the expected peak for GluTR (1.54 ml), a trough (1.72 ml) in the basal absorbance (Fig. 4). This trough is the result of GSA-AM depletion from the running buffer caused by the binding of GSA-AM to the injected GluTR protein. Analogous chromatographies were conducted with increasing amounts of GluTR ranging from 2 to 20 μM. The size of the trough increased with the concentration of injected GluTR. No troughs were observed when the applied samples did not contain GluTR or when 10-50 μM bovine serum albumin was injected instead.
The elution profile followed at 280 nm showed, in addition to the expected peak for GluTR (1.54 ml), a trough (1.72 ml) in the basal absorbance (Fig. 4). This trough is the result of GSA-AM depletion from the running buffer caused by the binding of GSA-AM to the injected GluTR protein. Analogous chromatographies were conducted with increasing amounts of GluTR ranging from 2 to 20 μM. The size of the trough increased with the concentration of injected GluTR. No troughs were observed when the applied samples did not contain GluTR or when 10-50 μM bovine serum albumin was injected instead.

Toward a Function of the GluTR·GSA-AM Complex by Way of Sequential Versus Coupled ALA Formation
To identify a function for the GluTR·GSA-AM complex, the enzymatic conversion of [14C]Glu-tRNA^Glu into ALA via the highly reactive GSA intermediate was compared for the two consecutive enzymatic reactions and the coupled reactions. The resulting reaction products were identified and quantified by scintillation counting after reversed phase HPLC separation. In vitro reactions of GluTR alone with the substrate [14C]Glu-tRNA^Glu led to the formation of [14C]GSA, with a retention time of 7.5 min, and, because of spontaneous substrate hydrolysis, to a [14C]Glu peak at 5 min. Besides those two well characterized products, a third 14C-labeled compound, with a retention time of 2.6 min, was reproducibly detected as the result of GluTR catalysis (Fig. 5A). When GSA-AM was added to this product mixture, only [14C]GSA was converted into [14C]ALA. The additional compound at 2.6 min was not a substrate for GSA-AM catalysis (Fig. 5B). In coupled in vitro assays, allowing complex formation prior to substrate addition, this additional compound was not detectable (Fig. 5C). On the basis of these observations one might speculate that one essential role of GluTR·GSA-AM complex formation is to prevent side reactions of the reactive GSA aldehyde species, possibly with cellular compounds or the solvent. Another possible reaction has been described earlier during the chemical synthesis of GSA, in which a cyclization of GSA to 2-hydroxy-3-aminotetrahydropyran-1-one was observed (23). To date no physical characterization of that compound has been reported. Because of the minimal amounts of intermediate formed in the assay mixture (≈5 pmol), no further characterization of this side product was possible. The experiments clearly demonstrated that the semialdehyde species was protected from an inefficient side reaction by the presence of GSA-AM. However, they do not rule out the possibility that a very rapid GSA-AM reaction in the coupled assay might also result in the protection of GSA. Nevertheless, these findings are in clear agreement with the postulated substrate channeling pathway indicated by x-ray crystallography and by modeling experiments (5), in which the intermediate aldehyde is prevented from exposure to the aqueous environment. The current investigation is one of the rare cases in which the structural biology of related enzymes from different organisms directly gives the answer to a metabolic question. The present investigation demonstrates the existence of a GluTR·GSA-AM complex in E. coli, which indicates that the original structure-based complex model can be regarded as of general significance for the GluTR·GSA-AM interaction in plants, archaea, and all bacteria synthesizing ALA from Glu-tRNA^Glu.
Unadjusted Hamiltonian MCMC with Stratified Monte Carlo Time Integration

A novel randomized time integrator is suggested for unadjusted Hamiltonian Monte Carlo (uHMC) in place of the usual Verlet integrator; namely, a stratified Monte Carlo (sMC) integrator which involves a minor modification to Verlet, and hence, is easy to implement. For target distributions of the form $\mu(dx) \propto e^{-U(x)} dx$ where $U: \mathbb{R}^d \to \mathbb{R}_{\ge 0}$ is both $K$-strongly convex and $L$-gradient Lipschitz, and initial distributions $\nu$ with finite second moment, coupling proofs reveal that an $\varepsilon$-accurate approximation of the target distribution $\mu$ in $L^2$-Wasserstein distance $\boldsymbol{\mathcal{W}}^2$ can be achieved by the uHMC algorithm with sMC time integration using $O\left((d/K)^{1/3} (L/K)^{5/3} \varepsilon^{-2/3} \log( \boldsymbol{\mathcal{W}}^2(\mu, \nu) / \varepsilon)^+\right)$ gradient evaluations; whereas without additional assumptions the corresponding complexity of the uHMC algorithm with Verlet time integration is in general $O\left((d/K)^{1/2} (L/K)^2 \varepsilon^{-1} \log( \boldsymbol{\mathcal{W}}^2(\mu, \nu) / \varepsilon)^+ \right)$. Duration randomization, which has a similar effect as partial momentum refreshment, is also treated. In this case, without additional assumptions on the target distribution, the complexity of duration-randomized uHMC with sMC time integration improves to $O\left(\max\left((d/K)^{1/4} (L/K)^{3/2} \varepsilon^{-1/2},(d/K)^{1/3} (L/K)^{4/3} \varepsilon^{-2/3} \right) \right)$ up to logarithmic factors. The improvement due to duration randomization turns out to be analogous to that of time integrator randomization.

Introduction
Consider a 'target' probability distribution of the form $\mu(dx) \propto e^{-U(x)}\, dx$, where $U: \mathbb{R}^d \to \mathbb{R}_{\ge 0}$ is continuously differentiable. Hamiltonian Monte Carlo (HMC) is an MCMC method aimed at $\mu$ that incorporates a measure-preserving Hamiltonian dynamics per transition step [18,36]. The dynamics are typically discretized using a deterministic time integrator, and the discretization bias can either be borne (unadjusted HMC, or uHMC for short) or eliminated by a Metropolis-Hastings filter (adjusted HMC). In this work, a new time integrator is suggested for Hamiltonian MCMC, one that is better suited to the probabilistic aims of MCMC. The basic idea is to use a simple, randomized method to time discretize the Hamiltonian dynamics. This strategy turns out to improve upon the current state of the art for uHMC. Throughout this work, we focus on $K$-strongly convex and $L$-gradient Lipschitz $U$, and state complexity guarantees in terms of the $L^2$-Wasserstein distance $\mathcal{W}^2$. The strong convexity assumption on $U$ can be relaxed to, e.g., asymptotic strong convexity, as in [6,15,8]. However, the resulting contraction rates, and in turn, asymptotic bias and complexity estimates, will then depend on model and hyperparameters in a more involved way. On the other hand, the dependence on model and hyperparameters is clearer under global strong convexity, and therefore the strongly convex setting allows for more precise mathematical comparisons between algorithms. Consequently, as we review below, the strongly convex setting has been the focus of much of the existing literature [12,29,35,39].

State of the Art
At present, the Verlet time integrator is the method of choice for time integrating the Hamiltonian dynamics in both unadjusted and adjusted HMC [4]. This is not without reason.
Indeed, the Verlet integrator is cheap; like forward Euler, it requires only one new gradient evaluation per integration step. At the same time, the Verlet integrator is second-order accurate under sufficiently strong regularity assumptions (more on this point below). Remarkably, the Verlet integrator also has the maximal stability interval for the simple harmonic model problem [2,11]. These properties are quite relevant to uHMC [8]. Moreover, the geometric properties of the Verlet integrator (symplecticity and reversibility) are key to obtaining an evaluable Metropolis-Hastings ratio in adjusted HMC [21,4]. Not surprisingly, most of the research on HMC has been devoted to the study of unadjusted and adjusted HMC with Verlet time integration. A notable exception is the work of Lee, Song and Vempala [29]. In that work, the authors suggest using uHMC with a collocation method for the Hamiltonian dynamics to resolve the asymptotic bias. This collocation method relies on a choice of basis functions (usually polynomials up to a certain degree) to represent the exact solution, and uses a nonlinear solver per uHMC transition step. General complexity guarantees are given for their ODE solvers, and as an application of their ideas to Hamiltonian MCMC, they consider the special case of a basis of piecewise quadratic polynomials defined on a time grid of step size $h$. In this case, uHMC with collocation can in principle produce an $\varepsilon$-accurate approximation of the target $\mu$ in $\mathcal{W}^2$ distance using the number of gradient evaluations in (1) when initialized at the minimum of $U$ and run with duration $T \propto K^{1/4} L^{3/4}$. In each transition step of uHMC, $h$ is chosen to satisfy $h^{-1} \propto L T^3 (\|v\| + \|\nabla U(x)\|\, T)\, K^{1/2} (L/K)^{3/2} \varepsilon^{-1}$, where $x, v \in \mathbb{R}^d$ are respectively the initial position and velocity in the current transition step. See [29, Theorem 1.6] for a detailed statement. Remarkably, the dimension dependence is $d^{1/2}$ and the condition number dependence is $(L/K)^{1.75}$. Moreover, this complexity guarantee requires no regularity beyond $L$-gradient Lipschitzness and $K$-strong convexity of $U$. In practice, because uHMC with collocation requires a nonlinear solve per transition step, its widespread use is limited. For comparison, in the absence of any higher regularity, the corresponding complexity for uHMC with Verlet time integration is $O\big((d/K)^{1/2} (L/K)^2 \varepsilon^{-1} \log(\mathcal{W}^2(\mu, \nu)/\varepsilon)^+\big)$ gradient evaluations (2) when initialized from a distribution $\nu$ with finite second moment and run with time step size $h \propto (L/K)^{-3/2} d^{-1/2} \varepsilon$ and duration $T \propto L^{-1/2}$. See [3, Chapter 5] for detailed statements and proofs of (2), which are based on [8, Appendix A]. Note that uHMC with Verlet substantially underperforms uHMC with collocation in terms of dimension and condition number dependence. This substandard performance arises because the theoretical second-order accuracy of Verlet integration typically requires $U$ to be thrice differentiable with bounded third derivative [5, Lemma 23]. Indeed, under only the assumption that $U$ is $L$-gradient Lipschitz, the order of accuracy of Verlet integration often drops to first order [8, Theorem 3.6]. This drop in accuracy is ultimately due to the fact that Verlet uses a trapezoidal approximation of the integral of the force $-\nabla U$, and it is well known that the trapezoidal rule typically loses an order of accuracy if the integrand does not have a bounded second derivative. In turn, the accuracy of the integration scheme affects the asymptotic bias between the invariant measure of uHMC and the target distribution.
Thus, a smaller time step size is needed to resolve the asymptotic bias of uHMC, which in turn requires more gradient evaluations. In principle, adjusted HMC can filter out all of the asymptotic bias due to time discretization. One would therefore hope that the dependence of the complexity on the accuracy parameter $\varepsilon$ becomes logarithmic with adjustment. This result has recently been demonstrated under higher regularity conditions and some restrictions on the initial conditions; specifically, assuming $U$ is strongly convex, gradient Lipschitz and Hessian Lipschitz, Chen, Dwivedi, Wainwright, and Yu use a clever conductance argument to prove that adjusted HMC with Verlet from a 'warm' start can achieve $\varepsilon$ accuracy in total variation distance using $O\big(d^{11/12} (L/K) \log(1/\varepsilon)^+\big)$ gradient evaluations [12]. In the same setting, implementable starting distributions are also considered; specifically, from a 'feasible' start, the complexity becomes $O\big(\max\big(d (L/K)^{3/4},\, d^{11/12} (L/K),\, d^{3/4} (L/K)^{5/4},\, d^{1/2} (L/K)^{3/2}\big) \log(1/\varepsilon)^+\big)$ gradient evaluations to reach $\varepsilon$ accuracy. In both complexity estimates, note the remarkable logarithmic dependence on $\varepsilon$. At present, it remains an open problem to relax the Hessian Lipschitz regularity assumption and the warm/feasible start conditions.

In very recent work, Monmarché considers a parameterized family of algorithms which includes as special cases uHMC with Verlet time integration and a class of time discretizations of Underdamped Langevin Diffusions (ULD) [35]. For Gaussian target measures, the optimal algorithms within this family are, remarkably, ULD-based algorithms and uHMC with partial velocity refreshment. In the strongly convex and gradient Lipschitz case, and for a wide range of parameters, dimension-free lower bounds on the convergence rate of the corresponding algorithms are provided. Specializing to the case where $U$ is additionally Hessian Lipschitz, unified complexity guarantees are given for suitably tuned versions of both ULD-based and uHMC-like algorithms within the class under consideration. In particular, focusing only on the dependence on dimension $d$ and accuracy $\varepsilon$, an $\varepsilon$-accurate approximation of the target distribution $\mu$ can be achieved by these algorithms in $O\big(d^{1/2} \varepsilon^{-1/2} \log(d/\varepsilon)\big)$ gradient evaluations. Moreover, when the condition number is large, Monmarché notes the superiority of partial momentum refreshment over complete refreshment. In particular, in the Gaussian setting, an improved contraction rate for uHMC with partial momentum refreshment is observed, i.e., from $K/L$ to $(K/L)^{1/2}$; more on this point below.

Our work is strongly inspired by the recent success of the Randomized Midpoint Method (RMM) due to Shen and Lee [39]. The RMM method is obtained by using a randomized time integrator applied to ULD. Synchronously coupling RMM with exact ULD, Shen and Lee prove that RMM can produce an $\varepsilon$-accurate approximation of the target in $\mathcal{W}^2$ distance using the number of gradient evaluations in (3) when initialized at rest within $(d/K)^{1/2}$ of the minimum of $U$ and run with a time step size of the form $h \propto \min\big(K^{1/3} d^{-1/6} L^{-1/6}\, \varepsilon^{1/3} \log(d)^{-1/2},\, \cdot\,\big)$, friction 2, and mass $L$. The proof uses a perturbative approach that leverages the contractivity of exact ULD [14,16] to bound the $\mathcal{W}^2$ distance between the distribution of the corresponding RMM chain and the target distribution. Ergodicity of the RMM chain and a $3/2$-order of accuracy for the $\mathcal{W}^2$-asymptotic bias of RMM were subsequently proven in [23].
Additionally, Cao, Lu, and Wang demonstrate the optimality of RMM among a class of ULD-based sampling algorithms [10]. Specifically, the authors show that any randomized algorithm for simulating ULD which makes $N$ combined queries to $\nabla U$, the driving Brownian motion, and the weighted integral of Brownian motion will suffer a worst-case $L^2$-error of order at least $\Omega(d^{1/2} N^{-3/2})$. Thus, to guarantee an $\varepsilon$-accurate approximation of ULD in $\mathcal{W}^2$ distance, one requires $N$ to be at least $\Omega(d^{1/3} \varepsilon^{-2/3})$, matching the upper bound on the $L^2$-error of RMM in dimension $d$ and accuracy $\varepsilon$. Ref. [10] also contains many references to the existing literature on information-theoretic lower bounds for randomized simulation of ODEs and SDEs, such as [27], [26], [17]. Another related work is the shifted ODE method for ULD due to Foster, Lyons and Oberhauser [22]. In the gradient Lipschitz and strongly convex case, the shifted ODE method produces in principle an $\varepsilon$-accurate approximation of the target distribution in $\mathcal{W}^2$ distance using $O(d^{1/3} \varepsilon^{-2/3})$ gradient evaluations, with even better complexity guarantees under stronger smoothness assumptions. The shifted ODE method is inspired by rough path theory, in which SDEs are realized as instances of Controlled Differential Equations (CDEs). In particular, the shifted ODE method is constructed by tuning a controlling path such that the Taylor expansion of the CDE solution has the same low-order terms as ULD. In practice, the ODE they obtain cannot be time integrated exactly, so they propose two implementable methods based on a third-order Runge-Kutta method and a fourth-order splitting method. Numerical results for the discretizations are promising. The challenge is that the discretizations are trickier to analyze than the exact shifted ODE method.

In view of the complexity guarantees (1), (2), and (3), it is natural to ask: Is it possible to construct a randomized time integrator for unadjusted Hamiltonian MCMC that does not require a nonlinear solver or higher regularity of $U$ and that confers a better complexity guarantee for the corresponding uHMC algorithm? Since Hamiltonian dynamics does not explicitly incorporate friction or diffusion like ULD does [32,7], it is not at all obvious that uHMC with an RMM-type method would have a provably better complexity guarantee than uHMC with Verlet. At a technical level, understanding the contractivity and asymptotic bias of the corresponding uHMC algorithm requires developing new mathematical arguments to quantify the effects of randomization in the approximation of the Hamiltonian dynamics. This paper answers the above question in the affirmative by suggesting a simple stratified Monte Carlo time integrator for Hamiltonian MCMC and carefully analyzing the properties of the corresponding uHMC algorithm. To be sure, while Monte Carlo methods are generally intended for high-dimensional integration, stratified Monte Carlo methods are actually better suited for one-dimensional integration, such as time integration.
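The variance-reduction effect of stratification in one dimension can be checked directly. The following sketch (ours, not from the paper; the integrand cos is an arbitrary test function) compares plain and stratified Monte Carlo estimates of a 1-D integral using one sample per stratum:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.cos                          # test integrand on [0, 1]
exact = np.sin(1.0)                 # its exact integral
n, reps = 64, 2000                  # strata/samples and repetitions

# Plain Monte Carlo: n i.i.d. uniform samples on [0, 1].
plain = np.array([f(rng.uniform(0, 1, n)).mean() for _ in range(reps)])

# Stratified Monte Carlo: one uniform sample in each stratum [i/n, (i+1)/n).
h = 1.0 / n
lefts = h * np.arange(n)
strat = np.array([h * f(lefts + h * rng.uniform(0, 1, n)).sum()
                  for _ in range(reps)])

print("plain MC RMS error:     ", np.sqrt(np.mean((plain - exact) ** 2)))
print("stratified MC RMS error:", np.sqrt(np.mean((strat - exact) ** 2)))
```

With one sample per stratum, the within-stratum variance of a Lipschitz integrand is $O(n^{-2})$, so the stratified estimator's RMS error scales like $n^{-3/2}$ versus $n^{-1/2}$ for plain Monte Carlo; this is the same mechanism the sMC time integrator exploits.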
Short Overview of Main Results
We now outline our main contributions. As above, we consider a target distribution $\mu(dx) \propto e^{-U(x)}\, dx$. In this context, we introduce the stratified Monte Carlo (sMC) time integrator. Let $h > 0$ be a time step size and $\{t_k := kh\}_{k \in \mathbb{N}_0}$ be an evenly spaced time grid. This grid partitions time into subintervals $\{[t_k, t_{k+1})\}_{k \in \mathbb{N}_0}$ termed 'strata'. One step of the sMC time integrator from $t_i$ to $t_{i+1}$ is given by
$$\tilde{Q}_{t_{i+1}} = \tilde{Q}_{t_i} + h\, \tilde{V}_{t_i} + \tfrac{h^2}{2}\, \tilde{F}_{t_i}, \qquad \tilde{V}_{t_{i+1}} = \tilde{V}_{t_i} + h\, \tilde{F}_{t_i},$$
where $\tilde{F}_{t_i} = -\nabla U(\tilde{Q}_{t_i} + (U_i - t_i) \tilde{V}_{t_i})$ and $(U_i)_{i \in \mathbb{N}_0}$ is a sequence of independent random variables such that $U_i \sim \mathrm{Uniform}(t_i, t_{i+1})$. In words, $U_i$ is a random temporal sample point sampled uniformly from the i-th stratum, independent of the sample points in the other strata. Note that the sMC time integrator is explicit in the sense that $(\tilde{Q}_{t_{i+1}}, \tilde{V}_{t_{i+1}})$ is an explicit function of $(\tilde{Q}_{t_i}, \tilde{V}_{t_i})$. One intuition behind this scheme is as follows. Like Verlet integration, the sMC integrator updates the position variable on the i-th stratum $[t_i, t_{i+1})$ by a constant force $\tilde{F}_{t_i}$. However, unlike Verlet integration, the update rule involves the force evaluated at a random temporal sample point $U_i$ sampled from the i-th stratum, rather than always at the left temporal endpoint $t_i$. Moreover, unlike Verlet integration, the sMC integrator updates the velocity variable on the i-th stratum $[t_i, t_{i+1})$ by the same constant force $\tilde{F}_{t_i}$. Thus, this scheme uses only one new gradient evaluation per sMC integration step. This scheme is probably the simplest randomized time integrator for the Hamiltonian dynamics, but it certainly is not the only strategy. For example, one could approximate the force over the i-th stratum as $-\nabla U(q_{U_i}(\tilde{Q}_{t_i}, \tilde{V}_{t_i}))$. However, as the true dynamics are unknown, this is not implementable. Choosing instead to first approximate $q_{U_i}(\tilde{Q}_{t_i}, \tilde{V}_{t_i})$ using the forward Euler method, and then using the force at the resulting (random) point to approximate the dynamics for both position and velocity over the i-th stratum $[t_i, t_{i+1})$, results in the sMC method described above. Replacing Verlet integration in this way, we obtain the uHMC algorithm with complete momentum refreshment described in Algorithm 1.
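A minimal runnable sketch of one such transition, assuming the update formulas reconstructed above (the function names and the Gaussian example target are ours, for illustration only):

```python
import numpy as np

def smc_uhmc_transition(x, grad_U, T, h, rng):
    """One uHMC transition with sMC time integration (a sketch of Algorithm 1;
    the update formulas follow the reconstruction above)."""
    q = np.array(x, dtype=float)
    v = rng.standard_normal(q.shape)           # complete momentum refreshment
    for _ in range(int(round(T / h))):         # assumes T/h is an integer
        u = rng.uniform(0.0, h)                # U_i - t_i ~ Uniform(0, h)
        F = -grad_U(q + u * v)                 # force at the random point
        q = q + h * v + 0.5 * h**2 * F         # constant-force position update
        v = v + h * F                          # same constant force for velocity
    return q

# Illustrative target: U(x) = K/2 |x|^2, so grad U(x) = K x.
K, L, d = 1.0, 1.0, 2
rng = np.random.default_rng(1)
T = 1.0 / np.sqrt(8.0 * L)                     # saturates L T^2 <= 1/8
x = np.zeros(d)
for _ in range(1000):
    x = smc_uhmc_transition(x, lambda q: K * q, T, T / 8, rng)
print("one approximate sample from mu:", x)
```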
A main result of this paper states that uHMC with sMC time integration produces an $\varepsilon$-accurate approximation of the target distribution using $O\big((d/K)^{1/3} (L/K)^{5/3} \varepsilon^{-2/3} \log(\mathcal{W}^2(\mu, \nu)/\varepsilon)^+\big)$ gradient evaluations (5) when initialized from an arbitrary distribution $\nu$ with finite second moment and run with the optimal hyperparameters detailed below. The proof of this complexity guarantee follows from two theorems, which we briefly describe. Firstly, assuming that $LT^2 \le 1/8$ and $h \le T$, Theorem 6 uses a synchronous coupling of two copies of $\tilde{\pi}$ to demonstrate the $\mathcal{W}^2$-contractivity $\mathcal{W}^2(\nu\tilde{\pi}, \eta\tilde{\pi}) \le e^{-c}\, \mathcal{W}^2(\nu, \eta)$ (6), where $\nu, \eta$ are arbitrary probability measures on $\mathbb{R}^d$ with finite second moment. The $\mathcal{W}^2$-contraction coefficient $e^{-c}$ has the nice feature that it is uniform in the time step size hyperparameter. The proof of Theorem 6 relies on almost sure contractivity of two realizations of the sMC time integrator starting with the same initial velocities and with synchronous random temporal sample points; see Lemma 5. The proof of this lemma crucially relies on $K$-strong convexity of $U$ and the co-coercivity of $\nabla U$; see Remark 4 for background on the latter. The proof involves a careful balance of these competing effects at the random positions where the force is evaluated. We emphasize that an analogous $\mathcal{W}^2$-contractivity result can be proven for uHMC with Verlet time integration without assuming higher regularity [3, Chapter 5]. As a corollary, $\tilde{\pi}$ admits a unique invariant measure $\tilde{\mu}$, but in general, due to time discretization bias, $\tilde{\mu} \ne \mu$. Secondly, we upper bound the $\mathcal{W}^2$-asymptotic bias of $\tilde{\pi}$, which quantifies $\mathcal{W}^2(\mu, \tilde{\mu})$. To this end, let $\pi$ denote the transition kernel of exact HMC, which uses the exact Hamiltonian flow per transition step and satisfies $\mu\pi = \mu$. Theorem 9 uses a coupling of $\tilde{\pi}$ and $\pi$ to prove an $O(h^{3/2})$ upper bound on $\mathcal{W}^2(\mu, \tilde{\mu})$ for $LT^2 \le 1/8$ and $h \le T$. Remarkably, this upper bound only requires the assumption that $U$ is $L$-gradient Lipschitz and $K$-strongly convex. The proof of Theorem 9 rests on the proof of $L^2$-accuracy of the sMC integration scheme in Lemma 7. Note the improvement in the $\mathcal{W}^2$-asymptotic bias over uHMC with Verlet time integration, which in the absence of higher regularity is only first-order accurate. This improvement can be understood via the classical Fundamental Theorem of $L^2$-Convergence of Strong Numerical Methods for SDEs [34, Theorem 1.1.1]. This theorem highlights that the expansion of the squared $L^2$-error of a stochastic numerical method can lead to cancellations (to leading order) of cross terms. For strong numerical methods for SDEs, this cancellation of cross terms occurs because of the independence of the Brownian increments used at each integration step. Consequently, the expectation of cross terms that involve the Brownian increments can vanish to leading order because they have zero mean. For the sMC time integrator, a similar cancellation can occur, which is analogously due to the independence of the sequence of random temporal sample points. Specifically, what happens is that the random potential force $\tilde{F}_{t_i}$ appearing in the expectation of cross terms turns into an average of the potential force over the i-th stratum, which confers higher accuracy in these cross terms. Turning this heuristic argument into a rigorous proof relies crucially on comparison to a 'semi-exact' flow, which uses the mean of $\tilde{F}_{t_i}$ to update the position and velocity; see Lemmas 13 and 14 for details. The semi-exact flow is somewhat related to the Average Vector Field (AVF) method [37], which suggests that sMC might also be a useful tool for AVF.

To obtain the stated complexity guarantee, the choice of hyperparameters is optimized as follows. For clarity, numerical prefactors are suppressed here, but are fully worked out in Theorem 10 and Remark 11. Consider uHMC with sMC time integration initialized from a distribution $\nu$ with finite second moment and operated with hyperparameters $T$, $h$, and $m$ chosen as in Theorem 10. With this selection of hyperparameters, the $\mathcal{W}^2$-contraction rate in (6) reduces to $c \propto K/L$, and the total error after $m$ transition steps is at most $\varepsilon$. Therefore, with the above choice of hyperparameters, we find the total number of gradient evaluations to be $m \times T/h$, as stated in (5).

Why duration randomization? So far, we have summarized the benefit of time integration randomization for uHMC with complete momentum refreshment. Motivated by the findings of Monmarché [35], it is interesting to develop corresponding results for the case of uHMC with partial momentum refreshment. Since after a random number of uHMC transition steps with partial momentum refreshment a complete velocity refreshment occurs, one can equivalently consider uHMC with duration randomization, as in the randomized uHMC process introduced in §6 of [7]. This duration-randomized uHMC process is a fully implementable pure jump process on phase space $\mathbb{R}^{2d}$ that iterates between two types of jumps: (i) a single step of a time integrator for the Hamiltonian dynamics; or (ii) a complete momentum refreshment. There are again two hyperparameters entering the algorithm: a time step hyperparameter $h$ and a mean duration hyperparameter $\lambda^{-1}$.
Importantly, because the duration-randomized uHMC process admits an infinitesimal generator, this process is amenable to detailed contractivity and asymptotic bias analyses. Theorem 17 states that the $\mathcal{W}^2$-contraction rate of this randomized uHMC process on phase space is $\gamma = K\lambda^{-1}/10$. The proof of this theorem is based on a synchronous coupling of two copies of the randomized uHMC process. If the duration hyperparameters are selected optimally, then the contraction rate becomes $\gamma \propto K^{1/2} (K/L)^{1/2}$. At first glance, this rate looks better than the corresponding rate for non-duration-randomized uHMC, i.e., $c \propto K/L$. However, viewed on the same infinitesimal time scale, these rates are essentially identical in the sense that $c/T \propto \gamma$. Therefore, duration randomization does not lead to an improvement in terms of contractivity. So then what is the benefit of duration randomization? A reason may be found in the asymptotic bias. To quantify the asymptotic bias, a coupling of the duration-randomized uHMC process with an 'exact' counterpart is considered. This coupling is itself a jump process with generator $\mathcal{A}_C$ defined in (67). This coupling admits a Foster-Lyapunov function given by a quadratic 'distorted' metric on phase space [1]; this function $\rho^2$ is defined in (59). In particular, the coupling satisfies an infinitesimal drift condition for all $y = ((x, v), (\tilde{x}, \tilde{v})) \in \mathbb{R}^{4d}$. By Itô's formula for jump processes, the corresponding finite-time drift condition is used in Theorem 19 to quantify the $\mathcal{W}^2$-asymptotic bias of the duration-randomized uHMC process. Choosing the hyperparameters optimally, an improvement in the complexity of the randomized uHMC process is found; see Remark 20 for details. The improvement in complexity due to duration randomization is analogous to the improvement found by time integration randomization. Indeed, in both cases randomization leads to an improvement in the $\mathcal{W}^2$-asymptotic bias. For time integration randomization, this can be traced back to an improvement in the finite-time $L^2$-accuracy of the sMC time integrator. For duration randomization, the improvement is due to how the time discretization errors accumulate over an infinite time interval. We remark that in the presence of higher regularity (e.g., $U$ is Hessian Lipschitz), a modification of the sMC scheme that recruits an antithetic sample point per integration step is expected to be $O(h^{5/2})$ $L^2$-accurate. Thus, for the purpose of unadjusted Hamiltonian MCMC, a randomized time integrator is generally preferable to Verlet time integration. Ideas for future work include: (i) develop contractivity and asymptotic bias estimates in total variation distance for uHMC with sMC time integration by means of one-shot couplings [5]; (ii) incorporate time integration randomization into adjusted HMC and extend the conductance-type argument in [12] to obtain a complexity guarantee under weaker regularity conditions than for adjusted HMC with Verlet; and (iii) combine randomized time integration with time step or duration hyperparameter adaptivity to deal with multiscale features in the target distribution, as in [28,25,24].

Notation
Let $\mathcal{P}(\mathbb{R}^d)$ denote the set of all probability measures on $\mathbb{R}^d$, and denote by $\mathcal{P}_p(\mathbb{R}^d)$ the subset of probability measures on $\mathbb{R}^d$ with finite p-th moment. Denote the set of all couplings of $\nu, \eta \in \mathcal{P}(\mathbb{R}^d)$ by $\mathrm{Couplings}(\nu, \eta)$.
For $\nu, \eta \in \mathcal{P}(\mathbb{R}^d)$, define the $L^p$-Wasserstein distance by
$$\mathcal{W}^p(\nu, \eta) := \inf_{\omega \in \mathrm{Couplings}(\nu, \eta)} \Big( \int |x - y|^p \, \omega(dx\, dy) \Big)^{1/p}.$$

Definition of the uHMC Algorithm with sMC Time Integration
Unadjusted Hamiltonian Monte Carlo (uHMC) is an MCMC method for approximate sampling from a 'target' probability distribution on $\mathbb{R}^d$ of the form $\mu(dx) = Z^{-1} e^{-U(x)}\, dx$, where $U: \mathbb{R}^d \to \mathbb{R}_{\ge 0}$ is assumed to be a continuously differentiable function such that $Z < \infty$. The function $U$ is termed 'potential energy' and $-\nabla U$ is termed 'potential force', since it is a force derivable from a potential. The standard uHMC algorithm with complete velocity refreshment generates a Markov chain on $\mathbb{R}^d$ with the help of: (i) a deterministic Verlet time integrator for the Hamiltonian dynamics corresponding to the unit-mass Hamiltonian $H(x, v) = U(x) + \frac{1}{2}|v|^2$; and (ii) an i.i.d. sequence of random initial velocities $\xi_k \sim \mathcal{N}(0, I_d)$. To be sure, there are only two hyperparameters that need to be specified in this algorithm: the duration $T > 0$ of the Hamiltonian dynamics and the time step size $h \ge 0$; for simplicity of notation, we often assume $T/h \in \mathbb{Z}$ when $h > 0$, which implies that $h \le T$. Let $\{t_k := kh\}_{k \in \mathbb{N}_0}$ be an evenly spaced time grid. This grid partitions time into subintervals $\{[t_k, t_{k+1})\}_{k \in \mathbb{N}_0}$ termed 'strata'. In the (k+1)-th uHMC transition step, a deterministic Verlet time integration is performed with initial position given by the k-th step of the chain and initial velocity given by $\xi_k$. The (k+1)-th state of the chain is then the final position computed by Verlet. Verlet integrates the Hamiltonian dynamics by using: (i) a piecewise quadratic approximation of positions, which can be interpolated by a quadratic function of time on each stratum $[t_i, t_{i+1})$; and (ii) a deterministic trapezoidal quadrature rule for the time integral of the potential force over each stratum $[t_i, t_{i+1})$ to update the velocities. However, since in uHMC we are almost exclusively interested in a stochastic notion of accuracy of the numerical time integration [8, Theorem 3.6], it is quite natural to instead use a randomized time integrator for the Hamiltonian dynamics. The aim of this paper is to suggest one such randomized integration strategy, which involves a very minor modification of the Verlet time integrator, and hence, is easy to implement. The basic idea of this new integrator is to replace the trapezoidal quadrature rule used by Verlet in each stratum with a Monte Carlo quadrature rule; a single-step comparison is sketched below. Note that this construction substantially relaxes the regularity requirements on the potential force. The resulting integration scheme is an instance of a stratified Monte Carlo (sMC) time integrator. To precisely define this variant of uHMC, in addition to the random initial velocities we define an independent sequence of sequences $(U^k_i)_{i,k \in \mathbb{N}_0}$ whose terms are independent uniform random variables $U^k_i \sim \mathrm{Uniform}(t_i, t_{i+1})$. In the (k+1)-th transition step of uHMC, a discrete solution $(\tilde{Q}^k_t, \tilde{V}^k_t)$ is computed from an initial condition given by the current state of the chain and a fresh random velocity, where we introduce the floor (resp. ceiling) function to the nearest time grid point less (resp. greater) than time $t$, i.e., $\lfloor t \rfloor_h := h \lfloor t/h \rfloor$ and $\lceil t \rceil_h := h \lceil t/h \rceil$, and the random temporal point in the i-th stratum, $\tau_t := U^k_i$ for $t \in [t_i, t_{i+1})$, as illustrated below. Let $\tilde{F}^k_{t_i}$ be the potential force evaluated at the random temporal point in the i-th stratum. By integrating (9), note that $\tilde{Q}^k_t$ is a piecewise quadratic function of time that interpolates between the points $\{\tilde{Q}^k_{t_i}\}$, and $\tilde{V}^k_t$ is a piecewise linear function of time that interpolates between $\{\tilde{V}^k_{t_i}\}$.
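To make the quadrature-rule contrast concrete, here is a side-by-side sketch (ours; grad_U and the quadratic example are placeholders) of one Verlet step and one sMC step; both spend one new gradient evaluation per stratum once Verlet caches the end-point force for reuse in the next step:

```python
import numpy as np

def verlet_step(q, v, grad_U, h):
    # Deterministic trapezoidal quadrature of the force over one stratum.
    F0 = -grad_U(q)
    q_new = q + h * v + 0.5 * h**2 * F0
    F1 = -grad_U(q_new)                   # reused as F0 of the next step in practice
    v_new = v + 0.5 * h * (F0 + F1)
    return q_new, v_new

def smc_step(q, v, grad_U, h, rng):
    # Monte Carlo quadrature: the force at one uniform random point per stratum,
    # applied as a constant force to both position and velocity.
    u = rng.uniform(0.0, h)
    F = -grad_U(q + u * v)                # forward-Euler prediction of the position
    return q + h * v + 0.5 * h**2 * F, v + h * F

rng = np.random.default_rng(0)
q, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(verlet_step(q, v, lambda x: x, 0.1))
print(smc_step(q, v, lambda x: x, 0.1, rng))
```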
(Figure: piecewise quadratic interpolation of positions.)

For $h = 0$, we set $\lfloor t \rfloor_h = \lceil t \rceil_h = t$, drop the tildes in the notation, and since the corresponding flow is deterministic, we use lower case letters $(q_t, v_t)$ to denote the exact flow, which satisfies (11). On the time grid points, the sMC flow is an unbiased estimator for the semi-exact flow. The semi-exact flow plays a crucial role in §2.5 to quantify the $L^2$-accuracy of the sMC flow with respect to the exact flow. With this notation, the chain $(\tilde{X}^k)_{k \in \mathbb{N}_0}$ corresponding to uHMC with sMC time integration of the Hamiltonian dynamics is defined as follows.

Definition 1 (uHMC with sMC time integration). Given an initial state $x \in \mathbb{R}^d$, a duration hyperparameter $T > 0$, and a time step size hyperparameter $h \ge 0$, the k-th transition step of the chain is denoted by $\tilde{X}^k(x)$; let $\tilde{\pi}$ denote the corresponding one-step transition kernel.

For $h = 0$, we recover exact HMC. In this case, we drop all tildes in the notation, i.e., the k-th transition step is denoted by $X^k(x)$, and the corresponding transition kernel is denoted by $\pi$. The target measure $\mu$ is invariant under $\pi$, because the exact flow preserves the Boltzmann-Gibbs probability measure on $\mathbb{R}^{2d}$ with density proportional to $\exp(-H(x, v))$, and $\mu$ is the first marginal of this measure. When $h > 0$, and under certain conditions (detailed next), $\tilde{\pi}$ has a unique invariant probability measure denoted by $\tilde{\mu}$, which typically approaches $\mu$ as $h \downarrow 0$. In the sequel, uHMC refers to uHMC with sMC time integration.

Assumptions
To prove our main results, we assume the following.

Assumption 2. The potential energy function $U: \mathbb{R}^d \to \mathbb{R}$ is continuously differentiable and satisfies the following conditions:
A.1 $U$ has a global minimum at 0 and $U(0) = 0$.
A.2 $U$ is $L$-gradient Lipschitz continuous, i.e., there exists $L > 0$ such that $|\nabla U(x) - \nabla U(y)| \le L |x - y|$ for all $x, y \in \mathbb{R}^d$.
A.3 $U$ is $K$-strongly convex, i.e., there exists $K > 0$ such that $\langle x - y, \nabla U(x) - \nabla U(y) \rangle \ge K |x - y|^2$ for all $x, y \in \mathbb{R}^d$.

Assumptions A.1-A.3 imply $\mathcal{W}^2$-contractivity of the transition kernel of uHMC; see Theorem 6 below. By the Banach fixed point theorem, contractivity implies existence of a unique invariant probability measure of $\tilde{\pi}$ [3, Theorem 2.9]. The $\mathcal{W}^2$-asymptotic bias of this invariant measure is upper bounded in Theorem 9.

Remark 3. Under A.1-A.3, using a quadratic Foster-Lyapunov function argument, it can be shown that the target distribution satisfies $\int |x|\, \mu(dx) \le \big( \int |x|^2\, \mu(dx) \big)^{1/2} \le (d/K)^{1/2}$, where in the first step we used Jensen's inequality. The bound is sharp since it is attained by a centered Gaussian random variable $\xi$ with $\mathbb{E}|\xi|^2 = d/K$.

Remark 4. If $U$ is continuously differentiable, convex, and $L$-gradient Lipschitz, then $\nabla U$ satisfies the following 'co-coercivity' property: $|\nabla U(x) - \nabla U(y)|^2 \le L \langle x - y, \nabla U(x) - \nabla U(y) \rangle$ for all $x, y \in \mathbb{R}^d$. This property plays a crucial role in proving a sharp $\mathcal{W}^2$-contraction coefficient for the uHMC transition kernel in the globally strongly convex setting.

$L^2$-Wasserstein Contractivity
Let $(\tilde{Q}_t(x, v), \tilde{V}_t(x, v))$ be a realization of the sMC flow satisfying (9) from the initial condition $(x, v) \in \mathbb{R}^{2d}$ with a random sequence of independent temporal sample points $(U_i)_{i \in \mathbb{N}_0}$ such that $U_i \sim \mathrm{Uniform}(t_i, t_{i+1})$. When $U$ is $K$-strongly convex and $L$-gradient Lipschitz, the exact flow from different initial positions but synchronous initial velocities is itself contractive if $LT^2 \le 1/4$ [13, Lemma 6]. Analogously, if $LT^2 \le 1/8$ and the time step size additionally satisfies $h \le T$ (which follows from the hypothesis that $T/h \in \mathbb{Z}$), the following lemma states that $|\tilde{Q}_T(x, v) - \tilde{Q}_T(y, v)|^2$ is almost surely contractive.

Lemma 5 (Almost Sure Contractivity of sMC Time Integrator from Synchronized Velocities). Suppose that A.1-A.3 hold. Let $T > 0$ and $h \ge 0$ satisfy (14), with $T/h \in \mathbb{Z}$ if $h > 0$. Then $|\tilde{Q}_T(x, v) - \tilde{Q}_T(y, v)|^2$ contracts almost surely. The proof of Lemma 5 is deferred to Section 2.7.
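Lemma 5 is easy to probe numerically. The following sketch (ours; a 2-D quadratic potential with $K = L = 1$ is assumed for simplicity) runs two sMC realizations from different positions with synchronized velocity and synchronized temporal sample points, and prints the contraction of the position distance:

```python
import numpy as np

rng = np.random.default_rng(2)
grad_U = lambda q: q                         # U(q) = |q|^2 / 2, so K = L = 1
L = 1.0
T = 1.0 / np.sqrt(8.0 * L)                   # L T^2 <= 1/8
n = 16
h = T / n
x, y = np.array([3.0, -1.0]), np.array([-2.0, 4.0])
v = rng.standard_normal(2)                   # synchronized initial velocity
qx, vx, qy, vy = x.copy(), v.copy(), y.copy(), v.copy()
for _ in range(n):
    u = rng.uniform(0.0, h)                  # synchronous temporal sample point
    Fx, Fy = -grad_U(qx + u * vx), -grad_U(qy + u * vy)
    qx, vx = qx + h * vx + 0.5 * h**2 * Fx, vx + h * Fx
    qy, vy = qy + h * vy + 0.5 * h**2 * Fy, vy + h * Fy
print("initial position distance:", np.linalg.norm(x - y))
print("final position distance:  ", np.linalg.norm(qx - qy))  # strictly smaller
```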
By synchronously coupling both the random initial velocities and the random temporal sample points in two copies of uHMC starting at different initial conditions, and applying Lemma 5, we obtain the following.

Theorem 6 ($\mathcal{W}^2$-Contractivity of uHMC under global strong convexity). Suppose that A.1-A.3 hold. Let $T > 0$ and $h \ge 0$ be such that (14) holds, with $T/h \in \mathbb{Z}$ if $h > 0$. Then for any pair of probability measures $\nu, \eta \in \mathcal{P}_2(\mathbb{R}^d)$, $\mathcal{W}^2(\nu\tilde{\pi}, \eta\tilde{\pi}) \le e^{-c}\, \mathcal{W}^2(\nu, \eta)$.

Note that the $\mathcal{W}^2$-contraction coefficient in Theorem 6 is uniform in the time step size hyperparameter, and as $h \downarrow 0$, recovers (up to a numerical pre-factor) the sharp $\mathcal{W}^2$-contraction coefficient of exact HMC.

$L^2$-Wasserstein Asymptotic Bias
As emphasized in previous works [8,19], an apt notion of accuracy of the underlying time integrator in unadjusted Hamiltonian Monte Carlo (and other inexact MCMC methods) is a stochastic one, e.g., $L^2$-accuracy. Remarkably, the sMC time integrator is $3/2$-order $L^2$-accurate without higher regularity assumptions such as Lipschitz continuity of the Hessian of $U$.

Lemma 7 ($L^2$-accuracy of sMC Time Integrator). Suppose that A.1-A.2 hold. Let $T > 0$ satisfy $LT^2 \le 1/8$, and let $h > 0$ satisfy $T/h \in \mathbb{Z}$. Then for any $x \in \mathbb{R}^d$ and $k \in \mathbb{N}_0$ such that $t_k \le T$, the $L^2$-distance between the sMC flow and the exact flow is $O(h^{3/2})$.

Note that A.3 is not assumed in Lemma 7. The $3/2$-order of $L^2$-accuracy of the sMC time integrator is numerically verified in Figure 2.

Proof of Lemma 7. The proof of $L^2$-accuracy of the sMC integrator is carried out in two steps. First, the sMC flow is compared to the semi-exact flow in Lemma 13. Then, the semi-exact flow is compared to the exact flow in Lemma 14. An application of the triangle inequality, the bound $L^{1/4} h^{1/2} \le 1/\sqrt{2}$, and the evaluation of the exponentials gives the required result.

Remark 8. The $3/2$-order of $L^2$-accuracy of the sMC time integrator is reminiscent of the classical Fundamental Theorem for $L^2$-Convergence of Strong Numerical Methods for SDEs, which roughly states: if $p_1 \ge p_2 + 1/2$ and $p_2 > 1/2$ are the orders of mean and mean-square accuracy (respectively), then the $L^2$-accuracy of the method is of order $p_2 - 1/2$ [34, Theorem 1.1.1]. This is due to cancellations (to leading order) in the $L^2$-error expansion, owing to the independence of the Brownian increments. Here the cancellations (to leading order) occur because of the independence of the sequence of random temporal sample points used by the sMC time integrator. A rigorous proof expanding on this heuristic is given in Section 2.9.

Theorem 9. Suppose that A.1-A.3 hold. Let $T > 0$ and $h \ge 0$ be such that (14) holds, with $T/h \in \mathbb{Z}$ if $h > 0$. Additionally, assume $LT^2 \le 1/8$. Then the $O(h^{3/2})$ upper bound on $\mathcal{W}^2(\mu, \tilde{\mu})$ stated above holds.

Proof. By the triangle inequality, and employing Lemma 7, Remark 3, $\xi \sim \mathcal{N}(0, I_d)$, and the fact that $L/K \ge 1$, the required result follows.

Proof. Let $m \ge m_\star$ and $h \le h_\star$. By the triangle inequality, and since $m \ge m_\star$ and $h \le h_\star$, the required result follows. To turn this into a complexity guarantee, we specify the duration hyperparameter and, in turn, estimate the corresponding number of gradient evaluations. In particular, if one chooses the duration $T$ to saturate the condition $LT^2 \le 1/8$ in (14), i.e., $T = L^{-1/2}/\sqrt{8}$, then the $\mathcal{W}^2$-contraction coefficient reduces to $c = 48^{-1} K/L$. Since each uHMC transition step involves $T/h$ gradient evaluations, the corresponding number of gradient evaluations is therefore ($m$ uHMC transition steps) $\times$ ($T/h$ sMC integration steps), which yields the gradient evaluation count stated in (5) with explicit numerical prefactors.
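For orientation, the hyperparameter selection above can be turned into a rough budget calculator. In the following sketch (ours), every proportionality constant is a simple placeholder, and the number of integration steps per transition is back-solved from the target complexity (5), so the output is only an order-of-magnitude guide:

```python
import numpy as np

def smc_uhmc_budget(d, K, L, eps, W2_init):
    """Order-of-magnitude budget for uHMC with sMC time integration.

    All prefactors are illustrative placeholders; the exact constants are
    worked out in Theorem 10 and Remark 11 of the paper.
    """
    T = 1.0 / np.sqrt(8.0 * L)                 # saturates L T^2 <= 1/8
    c = K / (48.0 * L)                         # per-transition contraction rate
    m = int(np.ceil(max(np.log(W2_init / eps), 1.0) / c))   # transition steps
    # integration steps per transition, back-solved from the complexity (5):
    steps = int(np.ceil((d / K) ** (1 / 3) * (L / K) ** (2 / 3) * eps ** (-2 / 3)))
    return {"T": T, "h": T / steps, "transitions": m, "gradients": m * steps}

print(smc_uhmc_budget(d=1000, K=1.0, L=10.0, eps=0.01, W2_init=10.0))
```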
Proof of $L^2$-Wasserstein Contractivity
This proof carefully adapts ideas from [8] and Lemma 6 of [13]. The main idea in the proof is to carefully balance two competing effects at the random temporal sample points where the potential force is evaluated: (i) strong convexity of $U$ (A.3); and (ii) co-coercivity of $\nabla U$ (Remark 4).

Proof of Lemma 5. Let $t \in [0, T]$. Introduce shorthand notation for the differences between the two copies, where we recall that $\tau_t = U_i$ for $t \in [t_i, t_{i+1})$ and $(U_k)_{k \in \mathbb{N}_0}$ is a sequence of independent random variables with $U_i \sim \mathrm{Uniform}(t_i, t_{i+1})$. Our goal is to obtain an upper bound for $A_t$. To this end, note that by A.2, A.3 and (13), the inequalities in (21) hold; we underscore that they take place at the random positions $(x_t, y_t)$ in (20) where the potential force is evaluated. By (9) and as a consequence of (22), a short computation shows that $A_t$ and $B_t$ satisfy a coupled system driven by $\epsilon_t := K A_t - 2 Z_t \cdot \varphi_t$. Introduce the shorthands $s_{t-r}$ and $c_{t-r}$, which satisfy $c_{t-r} = -\frac{d}{dr} s_{t-r}$; by variation of parameters, this yields the representation (24). To upper bound the integral involving $|W_r|^2$ in (24), use (22) and note that $W_0 = 0$, since the initial velocities in the two copies are synchronized; the estimate then follows by the Cauchy-Schwarz inequality and (21). Since $s_{t-r}$ is monotonically decreasing in $r$, combining these bounds and applying Fubini's theorem controls $\int_0^t s_{t-r} |W_r|^2\, dr$. To upper bound the integral involving $\epsilon_r$ in (24), rewrite it using the Cauchy-Schwarz and Young product inequalities. Inserting these bounds into the second term of (24) yields an upper bound which, using $K \le L$, $h \le T$ (a consequence of $T/h \in \mathbb{Z}$), and condition (14), is non-positive; therefore, this term can be dropped from (24) to obtain $A_T \le c_T A_0$. The required estimate is then obtained by inserting an elementary inequality for $c_T$, which is valid by condition (14) and $K \le L$. The required result holds because $1/2 - 1/48 = 23/48 > 1/3$.

A Priori Upper Bounds for the Stratified Monte Carlo Integrator
The following a priori upper bounds for the sMC and semi-exact flows are useful to prove $L^2$-accuracy of the sMC time integrator.

Lemma 12 (A priori bounds). Suppose $L(T^2 + Th) \le 1/4$. Then for any $x, y, u, v \in \mathbb{R}^d$, the bounds (30) hold almost surely. The proof of Lemma 12 is nearly identical to the proof of Lemma 3.1 of [6] and hence is omitted.

Proof of $L^2$-Accuracy of the sMC Integrator
The next two lemmas combined with the triangle inequality imply the $L^2$-accuracy of the sMC time integrator given in Lemma 7.

Lemma 13 ($L^2$-accuracy of sMC Time Integrator with respect to Semi-Exact Flow). Suppose A.1-A.2 hold. Let $T > 0$ satisfy $LT^2 \le 1/8$ and let $h > 0$ satisfy $T/h \in \mathbb{Z}$. Then for any $x \in \mathbb{R}^d$ and $k \in \mathbb{N}_0$ such that $t_k \le T$, the bound (31) holds.

Lemma 14 (Accuracy of Semi-Exact Flow). Suppose A.1-A.2 hold. Let $T > 0$ satisfy $LT^2 \le 1/8$, and let $h > 0$ satisfy $T/h \in \mathbb{Z}$. Then for any $x, v \in \mathbb{R}^d$ and $k \in \mathbb{N}_0$ such that $t_k \le T$, the bound (32) holds.

The proofs of these lemmas use the following discrete Grönwall inequality, which we include for the reader's convenience.

Lemma 15 (Discrete Grönwall Inequality in Forward Difference Form). Let $\lambda, h \in \mathbb{R}$ be such that $1 + \lambda h > 0$. Suppose that $(g_k)_{k \in \mathbb{N}_0}$ is a non-decreasing sequence, and $(a_k)_{k \in \mathbb{N}_0}$ satisfies $a_{k+1} \le (1 + \lambda h) a_k + g_k$ for $k \in \mathbb{N}_0$. Then it holds that
$$a_k \le (1 + \lambda h)^k a_0 + \frac{(1 + \lambda h)^k - 1}{\lambda h}\, g_{k-1}, \qquad k \ge 1.$$
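The closed form of Lemma 15 above is a standard reconstruction and can be sanity-checked numerically; the following sketch (ours) drives the recursion with equality on a random non-decreasing sequence $(g_k)$ and verifies the bound:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, h, n = 0.7, 0.1, 50
g = np.cumsum(rng.uniform(0.0, 0.1, size=n))   # non-decreasing sequence g_k
a = np.empty(n + 1)
a[0] = 1.0
for k in range(n):
    a[k + 1] = (1.0 + lam * h) * a[k] + g[k]   # recursion taken with equality
k = np.arange(1, n + 1)
bound = (1 + lam * h) ** k * a[0] + ((1 + lam * h) ** k - 1) / (lam * h) * g[k - 1]
assert np.all(a[1:] <= bound + 1e-12), "Gronwall bound violated"
print("discrete Gronwall bound holds for all k up to", n)
```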
Proof of Lemma 13. Let $(\tilde{Q}_t(x, v), \tilde{V}_t(x, v))$ be a realization of the sMC time integrator from the initial condition $(x, v) \in \mathbb{R}^{2d}$ which satisfies (9). A key ingredient in this proof are the a priori upper bounds in Lemma 12. In particular, since $L(T^2 + Th) \le 1/4$ (by the hypotheses $LT^2 \le 1/8$ and $T/h \in \mathbb{Z}$), (30) and the Cauchy-Schwarz inequality imply the bound (33). For all $t \ge 0$, let $\mathcal{F}_t$ denote the sigma-algebra of events up to time $t$ generated by the independent sequence of random temporal sample points $(U_i)_{i \in \mathbb{N}_0}$. Define the distorted $\ell^2$-metric used below; by Young's product inequality, (35) holds. As shorthand notation, for any $k \in \mathbb{N}_0$, introduce the k-th error term. Since the sMC and semi-exact flows satisfy (9) and (12) respectively, and by the $L$-Lipschitz continuity of $\nabla U$, the one-step differences can be estimated. Moreover, since the sMC flow is an unbiased estimator of the semi-exact flow, it follows, using in turn the Cauchy-Schwarz inequality, (39), and Young's product inequality, that the position error is controlled; a similar estimate holds for the velocity error. By (36) and (37), and since $Lh^2 \le 1/8$ implies $L^{1/2} h \le 1/2$, the bounds (40), (41) and (42) combine, where we used $L^{1/2} h \le 1/2$, $LT^2 \le 1/8$ and (35). Finally, by (38), and using once more $L^{1/2} h \le 1/2$, $Lh^2 \le 1/8$ and (35), we insert the a priori upper bounds from (33). Inserting (44), (45) and (46) into (43) yields a one-step recursion. By the discrete Grönwall inequality in forward difference form (Lemma 15), the claimed estimate follows; here we simplified via $LT^2 \le 1/8$, $L^{1/2} T \le 1/2$ and $T/h \in \mathbb{Z}$. Employing (35) and invoking the triangle inequality gives the required upper bound.

Proof of Lemma 14. Define the weighted $\ell^1$-metric used below and set $F_t := -\nabla U(q_t)$. Since the exact and semi-exact flows satisfy (11) and (12) respectively, we obtain the representation (47). By the triangle inequality and A.2, (50) holds. Moreover, since the semi-exact flow incorporates the average of the potential force over each stratum, (51) holds. Inserting (50) and (51) into (48) and (49) respectively yields (52) and (53). Inserting (52) and (53) into (47), and using $L^{1/2} h \le 1/2$ (by the hypotheses $LT^2 \le 1/8$ and $T/h \in \mathbb{Z}$), gives a one-step recursion. By the discrete Grönwall inequality in forward difference form (Lemma 15), this gives (32), as required. Note that in the last two steps we inserted the a priori upper bound (30) and applied the conditions $LT^2 \le 1/8$ and $L^{1/2} h \le 1/2$.

Duration-Randomized uHMC with sMC Time Integration
Here we consider a duration-randomized uHMC algorithm with complete velocity refreshment (randomized uHMC for short). Duration randomization was suggested by Mackenzie in 1989 to avoid periodicities in the Hamiltonian steps of HMC [30]. There are a number of ways to incorporate duration randomization into uHMC [9,36,7,4], including randomizing the time step and/or randomizing the number of integration steps. One overlooked way, which is perhaps the simplest to analyze, is the unadjusted (or inexact) Markov jump process on phase space introduced in [7, Section 6], as briefly recounted below. Before delving into more detail, it is worthwhile to remark that duration randomization has a similar effect as non-randomized uHMC with partial velocity refreshment. Intuitively speaking, after a random number of non-randomized uHMC transition steps with partial velocity refreshment, a complete velocity refreshment occurs. Therefore, the findings given below are expected to hold for non-randomized uHMC with partial velocity refreshment. However, in comparison to randomized uHMC, the analysis of non-randomized uHMC with partial velocity refreshment is a more demanding task if the bounds are to be realistic with respect to model and hyperparameters.
Definition of Randomized uHMC with sMC Time Integration
The randomized uHMC process is an implementable, inexact MCMC method defined on phase space $\mathbb{R}^{2d}$ and aimed at the Boltzmann-Gibbs distribution $\mu_{BG}(dx\, dv) \propto e^{-H(x, v)}\, dx\, dv$. First, we define the infinitesimal generator of the randomized uHMC process, and then describe how a path of this process can be realized. To define the infinitesimal generator, let $\mathcal{U} \sim \mathrm{Uniform}(0, h)$ and $\xi \sim \mathcal{N}(0, I_d)$ be independent random variables. Denote by $\Theta_h(x, v, \mathcal{U})$ a single step of the sMC time integrator operated with time step size $h > 0$ and initial condition $(x, v)$. Recall that $F \equiv -\nabla U$. On functions $f: \mathbb{R}^{2d} \to \mathbb{R}^{2d}$, the infinitesimal generator of the randomized uHMC process is defined by
$$\tilde{\mathcal{G}} f(x, v) = h^{-1}\, \mathbb{E}\big[ f(\Theta_h(x, v, \mathcal{U})) - f(x, v) \big] + \lambda\, \mathbb{E}\big[ f(x, \xi) - f(x, v) \big],$$
where $\lambda > 0$ is the intensity of velocity randomizations and $h > 0$ is the step size. The operator $\tilde{\mathcal{G}}$ is the generator of a Markov jump process $(\tilde{Q}_t, \tilde{V}_t)_{t \ge 0}$ with jumps that result in either: (i) a step of the sMC time integrator $\Theta_h$; or (ii) a complete velocity randomization $(x, v) \mapsto (x, \xi)$. As we will see below, due to the time-discretization error in the sMC steps, this process has an asymptotic bias. Since the number of jumps of the process over $[0, t]$ is a Poisson process with intensity $\lambda + h^{-1}$, the mean number of steps of $\Theta_h$ (and hence, gradient evaluations) over a time interval of length $t > 0$ is $t/h$. The random jump times and embedded chain of the randomized uHMC process may be produced by iterating the following algorithm.

Step 1. Draw an exponential random variable $\Delta T$ with mean $h/(\lambda h + 1)$, and update time via $T_{i+1} := T_i + \Delta T$.
Step 2. With probability $\lambda h/(\lambda h + 1)$, completely refresh the velocity; otherwise, take a single sMC step $\Theta_h$.

Note: the random variables $\Delta T$, $\xi$, $\mathcal{V}$, and $\mathcal{U}$ are mutually independent and independent of the state of the process. Let $\{T_i\}_{i \in \mathbb{N}_0}$ and $\{(\tilde{Q}_{T_i}, \tilde{V}_{T_i})\}_{i \in \mathbb{N}_0}$ denote the sequence of random jump times and states obtained by iterating this algorithm. The path of the randomized uHMC process is then the piecewise constant interpolation of this embedded chain. Moreover, for any $t > 0$, the time-average of an observable $f: \mathbb{R}^{2d} \to \mathbb{R}$ along this trajectory is given by the corresponding path integral over $[0, t]$. Let $\theta_h: \mathbb{R}^{2d} \times (0, 1) \to \mathbb{R}^{2d}$ denote the map that advances the exact solution of the Hamiltonian dynamics over a single time step of size $h > 0$. In the asymptotic bias proof, we couple the randomized uHMC process to a corresponding exact process $(Q_t, V_t)_{t \ge 0}$ whose generator is defined analogously, with $\Theta_h$ replaced by $\theta_h$. A key property of the exact process is that it leaves infinitesimally invariant the Boltzmann-Gibbs distribution $\mu_{BG}$, and under our regularity assumptions on the target measure $\mu$, it can be verified that $\mu_{BG}$ is also the unique invariant measure of the exact process.
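The two-step algorithm above is straightforward to implement. In the following sketch (ours), the refreshment probability $\lambda h/(\lambda h + 1)$ is inferred from the two jump intensities $\lambda$ and $h^{-1}$ rather than copied from a formula stated in the text:

```python
import numpy as np

def randomized_uhmc(x0, grad_U, lam, h, t_end, rng):
    """Simulate the duration-randomized uHMC jump process (a sketch)."""
    q = np.array(x0, dtype=float)
    v = rng.standard_normal(q.shape)
    t = 0.0
    p_refresh = lam * h / (lam * h + 1.0)           # inferred from the intensities
    while True:
        t += rng.exponential(h / (lam * h + 1.0))   # waiting time, mean h/(lam*h+1)
        if t > t_end:
            return q, v
        if rng.uniform() < p_refresh:
            v = rng.standard_normal(q.shape)        # complete velocity refreshment
        else:                                       # one sMC step Theta_h
            u = rng.uniform(0.0, h)
            F = -grad_U(q + u * v)
            q, v = q + h * v + 0.5 * h**2 * F, v + h * F

rng = np.random.default_rng(4)
q, v = randomized_uhmc(np.ones(3), lambda x: x, lam=1.0, h=0.05, t_end=50.0, rng=rng)
print("state at t_end:", q, v)
```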
$L^2$-Wasserstein Contractivity of Randomized uHMC
Let $(p_t)_{t \ge 0}$ denote the transition semigroup of the randomized uHMC process $(\tilde{Q}_t, \tilde{V}_t)_{t \ge 0}$. A key tool in the contraction proof is a coupling of two copies of the randomized uHMC process, with generator $\tilde{\mathcal{G}}_C$ acting on functions of $y = ((x, v), (\tilde{x}, \tilde{v})) \in \mathbb{R}^{4d}$. To measure the distance between the two copies we use a distorted metric $\rho$. This distorted metric involves the "qv trick" behind Foster-Lyapunov functions for (i) dissipative Hamiltonian systems with random impulses [38]; (ii) second-order Langevin processes [32,40,1]; and (iii) exact randomized HMC [7]. The cross term plays a crucial role since it captures the contractivity of the potential force. Using the Peter-Paul inequality with parameter $\delta$, we can compare this distorted metric to a 'straightened' metric; similarly, the distorted metric is equivalent to the standard Euclidean metric. By applying the generator $\tilde{\mathcal{G}}_C$ to this distorted metric, and using the co-coercivity property of $\nabla U$ (see Remark 4), we can prove the following.

Lemma 16. Suppose that Assumptions A.1-A.3 hold and that $\lambda > 0$ and $h > 0$ satisfy (63) and (64), respectively. Then $\tilde{\mathcal{G}}_C$ satisfies the infinitesimal contractivity result $\tilde{\mathcal{G}}_C\, \rho(y)^2 \le -\gamma\, \rho(y)^2$. The proof of Lemma 16 is deferred to Section 3.4. As a consequence of Lemma 16, we can prove $L^2$-Wasserstein contractivity of the transition semigroup $(p_t)_{t \ge 0}$ of randomized uHMC.

Theorem 17. Suppose that Assumptions A.1-A.3 hold and that $\lambda > 0$, $h > 0$ satisfy (63) and (64), respectively. Then for any pair of probability measures $\nu, \eta \in \mathcal{P}_2(\mathbb{R}^{2d})$ and any $t \ge 0$, $\mathcal{W}^2(\nu p_t, \eta p_t) \le C\, e^{-\gamma t/2}\, \mathcal{W}^2(\nu, \eta)$, where $C$ is the constant from the metric equivalence (62).

Proof of Theorem 17. Let $(Y_t)_{t \ge 0}$ denote the coupling process on $\mathbb{R}^{4d}$ generated by $\tilde{\mathcal{G}}_C$ with initial distribution given by an optimal coupling of the initial distributions $\nu$ and $\eta$ w.r.t. the distance $\mathcal{W}^2$. As a consequence of Lemma 16, the process $t \mapsto e^{\gamma t} \rho(Y_t)^2$ is a non-negative supermartingale. Moreover, by using the equivalence to the standard Euclidean metric given in (62) and taking square roots of both sides, the required result follows.

$L^2$-Wasserstein Asymptotic Bias of Randomized uHMC
As a consequence of Theorem 17, the randomized uHMC process admits a unique invariant measure denoted by $\tilde{\mu}_{BG}$. Here we quantify the $L^2$-Wasserstein asymptotic bias, i.e., $\mathcal{W}^2(\mu_{BG}, \tilde{\mu}_{BG})$. A key tool in the asymptotic bias proof is a coupling of the unadjusted and exact processes, with generator $\mathcal{A}_C$ acting on functions of $y = ((x, v), (\tilde{x}, \tilde{v})) \in \mathbb{R}^{4d}$. The proof of Lemma 18 is deferred to Section 3.4. Let $(p_t)_{t \ge 0}$ denote the transition semigroup of the exact process $(Q_t, V_t)_{t \ge 0}$. We are now in position to quantify the asymptotic bias of randomized uHMC with sMC time integration.

Theorem 19. Suppose that Assumptions A.1-A.3 hold and that $\lambda > 0$ and $h > 0$ satisfy (63) and (64), respectively. Then the asymptotic bias bound (69) holds.

Remark 20 (Why duration randomization?). Since the number of jumps of the randomized uHMC process over $[0, t]$ is a Poisson process with intensity $\lambda + h^{-1}$, the mean number of steps of $\Theta_h$ (and hence, gradient evaluations) over a time interval of length $t$ is $t/h$. Let $\nu$ be the initial distribution of the randomized uHMC process. We choose $\lambda$ to saturate (63), i.e., $\lambda = 12 L^{1/2}$. The contraction rate in (65) then becomes $\gamma \propto K^{1/2} (K/L)^{1/2}$. According to Theorem 17, to obtain $\varepsilon$-accuracy in $\mathcal{W}^2$ w.r.t. $\tilde{\mu}$, $t$ can be chosen proportionally to $\gamma^{-1} \log(\mathcal{W}^2(\mu_{BG}, \nu)/\varepsilon)$ (70). However, since $\tilde{\mu}$ is inexact, to resolve the asymptotic bias to $\varepsilon$-accuracy in $\mathcal{W}^2$, Theorem 19 indicates that it suffices to choose $h$ such that the right-hand side of (69) is at most $\varepsilon$ (71). Combining (70) and (71) gives an overall complexity of $O\big(\max\big((d/K)^{1/4} (L/K)^{3/2} \varepsilon^{-1/2},\, (d/K)^{1/3} (L/K)^{4/3} \varepsilon^{-2/3}\big)\big)$ gradient evaluations up to logarithmic factors.

Proof of Theorem 19. Let $(Y_t)_{t \ge 0}$ be the coupling process generated by $\mathcal{A}_C$. Then, by the coupling characterization of the $L^2$-Wasserstein distance and Itô's formula for jump processes applied to $t \mapsto e^{\gamma t/2} \Phi(Y_t)$, we obtain a finite-time bound. Since the exact process leaves $\mu_{BG}$ invariant, the integrand in this expression simplifies; simplifying the resulting expression gives (69).

Proofs for Randomized uHMC
Proof of Lemma 16. Let $F_{\mathcal{U}} := F(x + \mathcal{U} v)$, $\tilde{F}_{\mathcal{U}} := F(\tilde{x} + \mathcal{U} \tilde{v})$, $Z_{\mathcal{U}} := z + \mathcal{U} w$, and $\Delta F_{\mathcal{U}} := F_{\mathcal{U}} - \tilde{F}_{\mathcal{U}}$. Note that by A.2, A.3 and (13), the bounds in (73) hold. The idea of this proof is to decompose $\tilde{\mathcal{G}}_C\, \rho(y)^2$ into a gain $\Gamma_0$ and a loss $\Lambda_0$, and to use (73) and the hyperparameter assumptions to obtain an overall gain.

Proof of Lemma 18. Let $F_{\mathcal{U}} := F(x + \mathcal{U} v)$, $\tilde{F}_{\mathcal{U}} := F(\tilde{x} + \mathcal{U} \tilde{v})$, $Z_{\mathcal{U}} := z + \mathcal{U} w$, and $\Delta F_{\mathcal{U}} := F_{\mathcal{U}} - \tilde{F}_{\mathcal{U}}$. The idea of this proof is related to the proof of Lemma 16: we carefully decompose $\mathcal{A}_C\, \rho(y)^2$ into a gain $\Gamma$, a loss $\Lambda$, and also a discretization error $\Delta$, and use (73) and the hyperparameter assumptions to obtain a gain from the contractivity of the underlying randomized uHMC process up to discretization error.
This estimate results in an infinitesimal drift condition, as opposed to an infinitesimal contractivity result. The quantification of the discretization error is related to the $L^2$-error estimates for the sMC time integrator developed in Lemma 7, though the semi-exact flow only implicitly appears below, since the proof involves a one-step analysis. As a preliminary step, we develop some estimates that are used to bound the discretization error. Recall that $(q_s(x, v), v_s(x, v))$ denotes the exact Hamiltonian flow. Since $Lh^2 \le 12^{-2} \le 1/4$ (by the hypotheses $(L\lambda^{-2})^{1/2} \le 1/12$ and $\lambda h \le 1$), (30) and the Cauchy-Schwarz inequality imply preliminary a priori bounds. With a suitable shorthand, the Cauchy-Schwarz inequality, together with Young's product inequality in the next-to-last step, yields (78); similarly, (79) follows. Combining (78) and (79), we obtain (80). In order to obtain a sharp error estimate for the sMC time integrator, a further upper bound is crucial; by the Cauchy-Schwarz inequality it yields (82), where in the last step the numerical pre-factor was simplified using $\lambda h \le 1$ and $(L\lambda^{-2})^{1/2} \le 12^{-1}$. The terms $\mathbb{E}\langle z, \Delta F_1 \rangle$, $\mathbb{E}\langle z, \Delta F_2 \rangle$, $\mathbb{E}\langle w, \lambda^{-1} \Delta F_1 \rangle$, and $\mathbb{E}\langle w, \lambda^{-1} \Delta F_2 \rangle$ are bounded directly, and Young's product inequality is applied to the remaining cross terms. As expected, the gain in (82) is the same as the gain in (74). We next bound the loss and discretization error terms separately. For the discretization error $\Delta$ in (84), apply $\lambda h \le 1$ and insert (80).

Figure 2 (caption): Numerically observed $L^2$-error of the sMC integrator against the exact flow. Both simulations have initial condition (2, 1) and unit duration. The time step sizes tested are $2^{-n}$, where $n$ is given on the horizontal axis. The dashed curve is $2^{-3n/2} = h^{3/2}$ versus $n$.
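The experiment behind Figure 2 can be reproduced in a few lines for a 1-D harmonic potential, where the exact flow is a rotation. The following sketch (ours; the Monte Carlo sample size is arbitrary) estimates the $L^2$-error at unit duration for step sizes $2^{-n}$ and prints it next to $h^{3/2}$:

```python
import numpy as np

rng = np.random.default_rng(5)
q0, v0, T, M = 2.0, 1.0, 1.0, 4000
q_exact = q0 * np.cos(T) + v0 * np.sin(T)       # exact flow of U(q) = q^2/2
for n in range(3, 9):
    h = 2.0 ** -n
    q = np.full(M, q0)
    v = np.full(M, v0)
    for _ in range(int(round(T / h))):
        u = rng.uniform(0.0, h, size=M)         # independent sample points
        F = -(q + u * v)                        # -grad U at the random points
        q, v = q + h * v + 0.5 * h**2 * F, v + h * F
    err = np.sqrt(np.mean((q - q_exact) ** 2))  # Monte Carlo L^2 error estimate
    print(f"h = 2^-{n}:  L2 error = {err:.3e},  h^(3/2) = {h**1.5:.3e}")
```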
Light transport and vortex-supported wave-guiding in micro-structured optical fibres

In hydrodynamics, vortex generation upon the transition from smooth laminar flows to turbulence is generally accompanied by increased dissipation. However, vortices in the plane can provide transport barriers and decrease losses, as happens in numerous geophysical and astrophysical flows and in tokamaks. Photon interactions with matter can affect light transport in ways resembling fluid dynamics. Here, we demonstrate the significant impact of light vortex formation in micro-structured optical fibres on energy dissipation. We show the possibility of vortex formation in both solid-core and hollow-core fibres on the zero-energy-flow lines in the cladding. Through intensive numerical modelling using different independent approaches, we discovered a correlation between the appearance of vortices and a reduction of light leakage by three orders of magnitude, effectively improving wave guiding. This new effect could have a strong impact on numerous practical applications of micro-structured fibres. For instance, strong light localization based on the same principle can also be achieved in negative curvature hollow core fibres.

Transverse confinement of a flow is fundamental to many fields of science and technology. Decreasing momentum losses through pipe walls reduces drag and can save trillions in the energy cost of oil and gas transport. Decreasing heat losses through tokamak walls is crucial for thermonuclear fusion. Wave guiding of photons in optical fibre is important for optical communications (and the Internet), high power lasers, imaging, beam delivery and other optical technologies. In a standard step-index fibre, transversal confinement is ensured by total internal reflection, but decreasing light leakage is a challenge in the many emerging variants of micro-structured fibres. The appearance of vortices in a flow can have opposite effects on confinement. In pipe and channel flows, from industrial to cardiovascular systems, the generation of vortices generally increases dissipation, while vortex suppression can lead to drag reduction. On the contrary, in rotating, magnetized systems and in fluid layers, quasi-two-dimensional vortices have separatrices serving as transport barriers.
The most dramatic example is the transition from low to high confinement in tokamaks, when zonal vortex flow suppresses heat transfer to the walls. Here we show how generation of optical vortices correlates with significantly improved light confinement in micro-structured fibres. The waveguide losses can be reduced by several orders of magnitude. Though optics and hydrodynamics look distinct, wave dynamics and the underlying mathematical models have many qualitative and quantitative similarities [1][2][3][4][5][6] . In particular, in nonlinear fibre optics a number of physical effects have been observed that closely resemble nonlinear hydrodynamic problems, including modulation instability, oceanic rogue waves, localized nonlinear structures, shock waves, and optical turbulence [1][2][3][4][5][6][7][8][9][10][11][12][13] . These similarities provide possibilities to transfer knowledge and techniques, and to observe beautiful and non-evident connections between the two diverse fields. The present work reveals an interesting and potentially highly important connection through the analysis of the formation of vortex structures in optical fibres and its impact on the light energy flows. For quantum and wave phenomena described by a complex field, a vortex is an amplitude-zero topological phase defect [1][2][3][4][5][6][7][8][9][10][11][12][13] . For instance, an anomalous light transmission through a subwavelength slit in a thin metal plate is accompanied by wave-guiding and phase singularities - vortices of the optical power flow 14 . In nonlinear media, the vortices can exist in the form of topological solitons 15 . Three main features associated with optical vortices (OVs) are zero intensity and phase indefiniteness in the center (phase singularity), and a screw dislocation of the wave front 16,17 . In two dimensions, phase singularities occur when three or more plane waves or two Gaussian pulses interfere and light vanishes at some points 18 . The phase rotates by 2π around the zero-intensity point, which leads to circulation of the optical energy. A circular flow of energy leads to the ability of optical vortices to carry angular momentum (AM). The classical AM is well studied for monochromatic waves in free space. In recent years, AM transfer in dielectrics and fibres has attracted much interest [19][20][21][22][23][24] . To date, the OVs in waveguides have been considered mostly for eigenmodes of step-index or graded-index fibres [25][26][27] . A systematic study of the internal energy flow of light beams (Gaussian, Laguerre-Gaussian, Bessel) during their propagation in free space was carried out only in a few works [28][29][30] . In this case, singularities of the internal flows occur where the Poynting vector or its transverse component vanishes 29 . In the case of transverse flow fields, the singular points can be nodes, saddle points, vortices, or spiral points. In work 30 the Poynting vector singularities were classified based on the theory of dynamical systems as applied to optics. That work described the Poynting vector singularities of both types, associated either with the vector field singularities (zeros of the electric and magnetic fields) or with the mutual polarization of the fields (owing to the vector product). In addition, in 31 it was demonstrated that the components of the Poynting vector vary in sign for some phase difference between the TE- and TM-polarized waves forming a Bessel beam.
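The 2π phase winding around an intensity zero described above is easy to reproduce numerically. The following is a minimal Python/NumPy sketch of three interfering plane waves; the propagation directions, domain size, and probe radius are arbitrary illustrative choices, not parameters taken from this work.

    import numpy as np

    # Three unit-amplitude plane waves with different directions; their
    # interference produces isolated intensity zeros (phase singularities).
    k = 2 * np.pi
    kvecs = [k * np.array([np.cos(a), np.sin(a)]) for a in (0.0, 2.1, 4.2)]

    def field(x, y):
        return sum(np.exp(1j * (kv[0] * x + kv[1] * y)) for kv in kvecs)

    n = 600
    xs = np.linspace(0.0, 2.0, n)
    X, Y = np.meshgrid(xs, xs)
    E = field(X, Y)
    i, j = np.unravel_index(np.argmin(np.abs(E)), E.shape)
    x0, y0 = xs[j], xs[i]
    print("deepest intensity minimum at (%.3f, %.3f), |E| = %.2e" % (x0, y0, abs(E[i, j])))

    # Accumulate phase increments along a small circle around the zero:
    # for a charge-one vortex the total winding is close to +/- 2*pi.
    theta = np.linspace(0.0, 2 * np.pi, 2001)
    r = 0.01
    vals = field(x0 + r * np.cos(theta), y0 + r * np.sin(theta))
    winding = np.sum(np.angle(vals[1:] / vals[:-1]))
    print("phase winding / (2*pi) = %.2f" % (winding / (2 * np.pi)))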
In this work, we examine for the first time the linear OVs that occur in the cladding of micro-structured optical fibres (MOFs): all solid photonic band gap fibres (ASBGFs) 32 and a new type of hollow core fibres - negative curvature hollow core fibres (NCHCFs) [33][34][35] . The OVs considered here arise in the transverse component of the Poynting vector of the core modes, which determines the losses of these modes in the micro-structured fibres. The process of energy leakage from the fibre core can be most clearly represented by the energy flow lines, or streamlines, of the transverse component of the Poynting vector of the core modes 36 . The transverse component of the Poynting vector always has an uncertainty in direction and zero value at the origin. The novelty of our work is in the analysis of the singularities of the transverse component of the Poynting vector of the core modes of micro-structured optical fibres and in correlating their formation with the level of fibre losses. We have demonstrated that by changing the geometric parameters of such fibres, one can find a configuration of vortex structures in the fibre cladding that leads to a significant reduction of leakage losses. Summary of Results The main result of our work is the discovery and quantitative analysis of the correlation between the formation of optical vortices and fibre leakage losses. Waveguide microstructures with continuous rotational symmetry of the core-cladding boundary, such as dielectric tube waveguides or Bragg fibres, have no singularities (OVs) in the energy flow lines of the core modes in the cladding (Fig. 1(left)). The singular points of the transverse Poynting vector component occur in MOFs with discrete rotational symmetry of the cladding elements, such as ASBGFs or NCHCFs. The discrete rotational symmetry of the cladding element arrangement in the MOFs defines the azimuthal energy fluxes of the core modes. This energy flux leads to the formation of additional vortices both inside the cladding elements and in the space between them (Fig. 1(right)). As stated in 29 , the vortices 'organize' the whole field in the neighboring space. The phase dislocations in the electromagnetic wave structure in free space occur when the real and imaginary parts of the field strength are simultaneously equal to zero 37 . The streamlines of the transverse Poynting vector P_transv of the core modes in a cross section of the graded-index fibre are described by the equation dx/P_x(x,y) = dy/P_y(x,y), where P_x and P_y are the components of P_transv. The locations of the OVs in the streamlines are determined by the condition P_x(x,y) = P_y(x,y) = 0 38 . We demonstrate here that these conditions define the positions of the OV centers in the cladding of ASBGFs and NCHCFs. Also, we demonstrate an OV-induced wave-guiding effect, in which photons passing through the optical fibre excite circulating power currents that impact light transport and decrease the overall losses. This reduction in losses is achieved due to the "negative propagation" of the leaky light energy 29,31 (backward propagation of the core mode leaky energy). The formation of the OVs in the cladding capillary walls of silica glass NCHCFs leads to a strong light localization in the hollow core, which makes it possible to transmit radiation in the mid-IR spectral range 39 . Vortex-Supported Wave-Guiding in All Solid Band Gap Fibres Dislocations of monochromatic waves are stationary in space and form isolated interference fringes.
The phase of the field is undefined on the zero-amplitude lines and changes by π when crossing them. The wave front dislocations of coherent radiation can be characterized by zero field amplitude in their centers or along the dark rings where the field amplitude vanishes. The origin of the dark rings and dislocation centers is boundary diffraction and destructive interference. For such micro-structured optical fibres as ASBGFs or NCHCFs, the electric and magnetic fields of the core mode, E(r)e^{i(βz−ωt)} and H(r)e^{i(βz−ωt)}, can be described by specifying the axial components of the fields E_z and H_z, where ω denotes the angular frequency and β is the propagation constant of the core mode. Then, the wave equation for each axial component is the Helmholtz equation (Δ_⊥ + k_0²n_i² − β²){E_z, H_z} = 0, where k_0 = ω/c and n_i is the refractive index of the cladding element or surrounding area. In the case of micro-structured fibres it is possible to separate the "longitudinal" phase of the core mode, e^{iβz}, and the "transversal" phase. Each component of the core mode fields in the vicinity of a cladding element is a linear superposition of an infinite set of cylindrical harmonics. The axial components of the fields can be expressed as 40 {E_z, H_z} = Σ_m c_m F_m(r) e^{imφ}, (1) where F_m(r) is the Bessel function of the first kind, J_m, if r < a and the Hankel function of the first kind, H_m^(1), if r > a, and a is the radius of the cladding dielectric cylinder. For the Hankel function of the first kind, the radiation condition at infinity is satisfied. All transversal components of the core mode fields can be expressed in terms of the axial components using well known relations 41 . For example, the azimuthal components of the core mode fields are E_φ = (i/κ_i²)[(β/r)∂E_z/∂φ − ωμ_0 ∂H_z/∂r] and H_φ = (i/κ_i²)[(β/r)∂H_z/∂φ + ωε_0 n_i² ∂E_z/∂r], where κ_i² = k_0²n_i² − β². The details of the calculation of the expansion coefficients of the cylindrical harmonics in (1) and the complex propagation constants of the core modes, performed using the Multipole method, can be found in 40 and in the section Methods. In addition, we performed the same calculations using the COMSOL 4.4 software package. Some of the figures and calculations in the paper were also made using COMSOL 4.4. We considered all solid band gap fibres with one and two rows of cladding dielectric rods. The refractive index contrast between the cladding rods and the surrounding glass matrix (n_glass = 1.45) is Δn = 0.05. The arrangement of the rods has a hexagonal structure. The value of the pitch is Λ = 12 µm (the distance between the centers of the cladding rods), and the ratio d/Λ = 0.33, where d is the cladding rod diameter. Let us consider the light leakage from the fibre core and calculate the loss dependencies on the wavelength for the fundamental core mode in several transmission bands. The loss dependencies on the wavelength for both ASBGFs are shown in Fig. 2 and have several transmission bands according to the band gap waveguide mechanism 32 . It is seen that there are relatively narrow transmission bands in which the losses are three orders of magnitude lower than in the rest of the spectrum. This is especially true for the fibres with one row of cladding rods. The same resonant loss reduction can be observed if we fix the wavelength, for example, in the minimum of losses at a wavelength of λ = 1 µm, and change the value of the pitch Λ. The calculation results are shown in Fig. 3. As in Fig. 2, there is a sharp decrease in losses by several orders of magnitude in both cases in a narrow range of pitch values.
It is clear that this substantial decrease in losses for both fibres can only be associated with the narrow spectral regions and with specific values of the pitch. Light leakage of the core modes can be characterized by the distribution of the projection of the transverse component of the Poynting vector on the radius-vector drawn from the origin, P_r. The values of the projection of the transverse component of the Poynting vector for the fundamental core mode were calculated at the wavelengths of λ = 1 µm and 1.5 µm and a value of the pitch of Λ = 12 µm (Fig. 2). As in the case of polygonal waveguides 42 , the distribution of the radial component of the transverse Poynting vector has a periodic alternating character, which points to a vortex structure of the core mode fields in the cladding (Fig. 4). The vortex structures for the two distributions of the radial projection of the transverse component of the Poynting vector are different at the different wavelengths (Fig. 4). In the case of minimal losses (Fig. 4(left)), the OV centers are located inside the cladding rods, while in the case of large losses (Fig. 4(right)), the OVs are located at the boundaries of the cladding rods. Thus, the leaky radiation of the fundamental core mode moves along different trajectories in the cladding. The "negative propagation" of the leaky energy of the core mode implies some balance between the mode energy flowing from the core and the energy flowing back into the core. The magnitude of this energy balance determines the losses in the waveguide. It is well established that the single fundamental property of optical vortex formation is the rotation of the Poynting vector (energy rotation) around the phase dislocation (the OV core) 16,17 . To clearly demonstrate the OV formation and the formation of phase dislocations of various structures, we calculated the streamlines of the transverse component of the Poynting vector in the cladding of the all solid band gap fibre with one ring of cladding rods (Fig. 4) at wavelengths of λ = 1 µm and λ = 1.5 µm (Fig. 2). The distribution for the wavelength of λ = 1 µm is shown in Fig. 5. The structure of the Poynting vector streamlines shown in Fig. 5 points to the formation of dislocation lines around which the leaky radiation of the fundamental core mode rotates. The vortex centers are formed at the intersection points of the curves P_x(x, y) = 0 and P_y(x, y) = 0. Moreover, it can be seen from Fig. 5 that these curves not only intersect at isolated points but can also coincide along whole lines. Although the streamline distribution of the transverse component of the Poynting vector at a wavelength of λ = 1.5 µm also forms OVs (Fig. 6), the energy of the core mode efficiently flows through different paths in the cladding and rotates only in small regions near the OV centers at the rod boundaries. Moreover, a major part of the core mode radiation flows through the cladding rods (Fig. 4(right)) which, in contrast to the previous case, cannot serve as effective reflectors for the leaky radiation. This leads to large losses in the fibre (Fig. 2 and Fig. 4). Let us now consider the phase distribution of the transverse component of the core mode electric field, for example, E_x. According to the general principles of the optical vortices theory 16,17 , the phase distribution of the transverse component of the core mode electric field should experience a jump of π when passing through the boundary of the closed area around the dislocation lines. We calculated the phase distribution of E_x in the cladding of both fibres for two wavelengths, λ = 1 µm and λ = 1.5 µm (Fig. 2).
The phase distribution of E_x for the fibre with one row of cladding rods is shown in Fig. 7. The vertical scale represents the phase values in degrees. It can be seen from Fig. 7(left) that the OVs located in the cladding rods at a wavelength of λ = 1 µm have a corresponding phase jump of π in the distribution of E_x. The phase distribution of E_x at a wavelength of λ = 1.5 µm has a qualitatively different character (Fig. 7(right)). In this case, the phase distribution does not experience a pronounced jump of π in the cladding rods, so the ring phase dislocations for the fundamental core mode do not form in the cladding rods. This difference in the core mode field structure leads to a difference in the loss level at these two wavelengths. It is possible to demonstrate that in the case of ASBGFs with two rows of cladding rods (Figs. 2, 3) the loss reduction is also determined by the phase ring dislocations in the rods. Figures 5 and 6 demonstrate a qualitative (topological) difference between the two cases. Only isolated vortices exist for λ = 1.5 µm, which makes the energy spiral out without returning. On the contrary, the green and red lines in Fig. 5 cross not only at isolated points but also coincide along whole radial lines, which are thus vortex lines exhibiting the phase jump seen in the left panel of Fig. 7. Those vortex lines provide for energy recirculation and improved confinement at the wavelength of λ = 1 µm. Vortex-Supported Wave-Guiding in the Negative Curvature Hollow Core Fibres To demonstrate the vortex-supported wave-guiding in the NCHCFs and its distinction from the wave-guiding in waveguides with continuous rotational symmetry of the core-cladding boundary, let us consider the loss dependence of the fundamental air core mode on the wavelength for three waveguide micro-structures. The first one is a single capillary with a wall thickness of 0.65 µm and an air core diameter of 14.4 µm. The refractive index of the capillary wall is equal to 1.5, as in the case of the ASBGF considered in Section 2. The second one is a waveguide consisting of two capillaries nested in one another and having a common center. The internal capillary has the same parameters as the single capillary described above. The outer capillary has an inner diameter of 32.3 µm and the same wall thickness as the internal capillary. The light localization in both waveguides can be described within the ARROW model 44 . The capillary wall can be considered as a Fabry-Perot resonator which either passes radiation outside (the condition of the resonant regime is k_t d = mπ, where k_t is the transverse component of the air core mode wavevector, d is the capillary wall thickness, and m is an integer) or reflects it into the air core (the condition of the anti-resonant regime is k_t d = (m + 1/2)π). In this case, the losses are consequently reduced with the addition of each new cladding layer (Fig. 8). The NCHCF has 6 cladding capillaries with a wall thickness of 0.65 µm and an inner diameter of 8.3 µm, and the air core diameter is 14.4 µm. The distance between the nearest points of the adjacent cladding capillaries is 2.4 µm. The losses of the NCHCF with 6 cladding capillaries are approximately two orders of magnitude lower than in the double capillary fibre at the point of minimum loss (Fig. 8). This difference in losses can be explained by the difference in the leakage process for the fundamental air core mode between these cases.
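The resonance conditions just stated give a quick estimate of where the lossy and guiding bands of a capillary wall fall. Below is a minimal Python sketch; it adds the common glancing-incidence approximation k_t ≈ k_0 sqrt(n² − 1) for the transverse wavenumber in the wall, which is an assumption layered on top of the text rather than a formula taken from it.

    import numpy as np

    # ARROW estimate for a capillary wall acting as a Fabry-Perot etalon:
    # resonance (leaky): k_t d = m*pi; anti-resonance (guiding): k_t d = (m + 1/2)*pi,
    # with k_t ~ k_0*sqrt(n^2 - 1) at glancing incidence.
    d = 0.65   # wall thickness in microns, as for the NCHCF above
    n = 1.5    # refractive index of the wall

    def resonant_wavelength(m):
        return 2 * d * np.sqrt(n**2 - 1) / m

    def antiresonant_wavelength(m):
        return 2 * d * np.sqrt(n**2 - 1) / (m + 0.5)

    for m in (1, 2, 3):
        print("m = %d: resonance at %.3f um, anti-resonance at %.3f um"
              % (m, resonant_wavelength(m), antiresonant_wavelength(m)))
    # The m = 2 resonance lands near 0.73 um, consistent with the loss growth
    # discussed below when approaching lambda = 0.75 um, while the m = 1
    # anti-resonance near 0.97 um sits close to the 0.9 um band center.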
Another dissimilarity pertains to the loss behavior of the NCHCF at the long wavelength edge of the short wavelength transmission band (Fig. 8) and originates from the coupling between the fundamental air core mode and the cladding modes with a certain type of discrete rotational symmetry 45 . As in the case of the ASBGFs, the leakage process for the fundamental air core mode can be characterized by the streamlines of the transverse component of the Poynting vector and the lines of zero components of P_transv. The corresponding distribution of the streamlines of P_transv and the points of intersection of P_x(x, y) = 0 and P_y(x, y) = 0 are shown in Fig. 9 for the internal capillary of the double capillary fibre and the single cladding capillary of the NCHCF whose losses are shown in Fig. 8. The calculations were carried out at the center of the second transmission band, at a wavelength of λ = 0.9 µm (Fig. 8). It can be seen from Fig. 9(left) that the curves P_x(x, y) = 0 and P_y(x, y) = 0 have no intersection points in the case of the double capillary waveguide. Thus, OV formation in the capillary wall is impossible. It was shown in 42 that in this case the radial projection of the transverse component of the Poynting vector, P_r, must be positive along the whole core-cladding boundary of the capillary. For the double capillary fibre this distribution is shown in Fig. 10 for the sum of two orthogonally polarized fundamental air core modes. In all points of the cladding, P_r > 0 and the streamlines are directed along the radial direction only. The magnitude of P_r is normalized by the value of the axial component of the Poynting vector for the sum of two orthogonally polarized fundamental air core modes, which is taken to be one watt. The distribution of P_r does not depend at all on the azimuthal angle ϕ along any circle. A different situation is observed in the case of the NCHCFs. The OV centers are formed at the intersection points of the curves P_x(x, y) = 0 and P_y(x, y) = 0, where the distance between the cladding capillaries is minimal (Fig. 9(right)). They occur both at the boundary and inside the capillary wall. The energy flow of the fundamental air core mode rotates around the vortices. As in the case of the ASBGFs (Fig. 4), some part of the core mode energy flux changes its direction in the regions between the cladding capillaries and then returns to the air core of the NCHCF. To confirm this assumption, let us consider the distribution of P_r at the wavelength of 0.9 µm, as shown in Fig. 11(left). It can be seen from Fig. 11(left) that, due to the OVs in the cladding capillary walls, there are regions between the capillaries where P_r = 0 or P_r < 0. Because of these vortices, the energy of the fundamental air core mode passes only through limited segments of the cladding capillary wall surfaces, which are located closest to the center of the core. Only a small part of this energy passes through the space between the cladding capillaries. In addition, since P_r < 0 in separate areas of the space between the cladding capillaries, the fundamental core mode energy undergoes 'negative propagation' 29,31 back to the fibre core (Fig. 11(left)).
To confirm this conclusion, let us consider the dependence of P_r on the azimuthal angle ϕ along the circle with a radius of 10.6 µm passing near the centers of the OVs (Fig. 11(right)). It can be seen from Fig. 11(right) that the distribution and value of P_r are largely determined by the OV locations in the cladding. Radiation leakage from the NCHCF also occurs in the anti-resonant regime at λ = 0.9 µm. The normalization of P_r is as in the case of the double capillary fibre. That regime of light localization was called the local ARROW mechanism (only a limited part of the capillary wall reflects light in the anti-resonant regime) 46 . In the recent paper 47 , using the technique of transverse power flow streamline visualization for the air core modes of negative curvature hollow core fibres 36 , a possibility of reducing losses with only a modest increase in fabrication complexity was demonstrated. When approaching the resonant condition for the cladding elements, for example, at λ = 0.75 µm (Fig. 8), the cladding capillary walls become more transparent for the outgoing radiation, and the total loss level increases (Fig. 12). Due to the presence of the OVs and the reflective properties of the cladding capillaries, this increase in losses is not significant compared to the increase in losses for the double capillary fibre (Figs. 8, 10). In that case, the vortex structure of P_transv is preserved, which leads to an inhomogeneous distribution of the outgoing energy flux depending on the azimuthal angle and reduces the leakage loss growth (Fig. 12(right)). The geometrical parameters of both NCHCFs and ASBGFs have a strong impact on the OV formation in the cladding at a given wavelength and, consequently, on the wave-guide properties of the micro-structured fibres. By changing the distance between the walls of adjacent cladding capillaries, the number of capillaries in the cladding, or the distance between their centers, it is possible to control the positions of the vortices in the cladding and the level of light localization in the air core 42 . Moreover, as was shown in 36 , the introduction of a supporting tube does not significantly change the vortex formation mechanism in NCHCFs. In order to demonstrate the influence of the geometric parameters of the cladding elements on the loss level of the fundamental core mode in waveguides with a complex cladding structure, we considered a hollow core fibre with a cladding consisting of 12 ellipsoidal capillaries. We calculated the dependence of the fundamental mode loss on the ratio of the minor axis to the major axis of the ellipsoidal cladding element (a/b). The magnitude of the major axis was 8.3 μm, the wall thickness of the ellipse was 0.65 μm, and the hollow core radius of the waveguide was R_core = 14 μm. The refractive index of the wall of the ellipsoidal capillary was n = 1.5, as in the previous cases. The results of the calculation at λ = 1 µm are shown in Fig. 13. As can be seen from Fig. 13, the waveguide losses of the hollow core fibre described above are high in a wide range of parameter a/b values, and only in a narrow region near a/b ≈ 0.524 is there a sharp decrease in losses by several orders of magnitude. In order to demonstrate a qualitative difference in the behavior of the transverse Poynting vector component of the leaky fundamental air core mode, we consider the distribution of the streamlines of the vector and its radial component (in color, as in Fig. 4). The corresponding distributions are shown in Fig. 14.
Figure 13. The loss dependence on the ratio of the minor axis to the major axis (a/b) of the cladding capillary for the hollow core fibre with the cladding consisting of 12 ellipsoidal capillaries. The parameters of the fibre are described in the text. It is seen from Fig. 14 that in the case of large losses of the air core mode the streamline density is much higher in the air core than in the cladding ellipsoidal capillaries, and the color of the radial component of the Poynting vector indicates the flow of the core mode energy through these capillaries into the outer space. Strong leakage of the energy occurs despite the presence of a certain vortex structure in the cladding capillaries (Fig. 14(left)). At the same time, a small change in the parameter a/b makes it possible to reduce losses by three orders of magnitude while changing the vortex structure of the transverse Poynting vector component in the cladding. In this case, the density of the flow lines is much higher precisely in the ellipsoidal capillary wall, where the vortices occur, than in the air core of the fibre (Fig. 14(right)). Conclusion We have studied the impact of transversal light vortex formation on energy leakage in micro-structured optical fibres. To the best of our knowledge, it is the first demonstration that the creation of a specific configuration of vortices at the intersections of zero energy flow lines can reduce losses by three orders of magnitude. This new effect was verified by two independent methods. The appearance of vortices suppresses light leakage, effectively improving wave-guiding. Depending on the type of rotational symmetry of the arrangement of the cladding elements, minimal losses are achieved when the centers of the phase dislocations are located in the cladding element wall. The same underlying physical mechanism provides for a strong light localization in the negative curvature hollow core fibres. The key point of this general concept is that the right arrangement of vortices in the fibre cladding produces a balance between the outward and inward energy flux of the core mode. We anticipate that this mechanism can be used to develop advanced low-loss fibres across the spectral range for various applications, from telecom to high power lasers and optical beam delivery. Methods To describe vortex formation in the transverse component of the Poynting vector of the leaky core modes of micro-structured optical fibres, we employed two computational methods accounting for all electric and magnetic components of the core mode fields and their complex propagation constants. First, we used a multipole expansion method to perform full-vector modal calculations of the micro-structured optical fibres with circular rods or capillaries in the cladding. This is an efficient approach for micro-structured fibres with circular cladding elements. For a given time dependence exp(−iωt), the core mode fields are expressed via cylindrical harmonics. In the neighborhood of a circular cladding element, the axial components are presented using local cylindrical coordinates (r_i, ϕ_i), where i is the number of the circular cladding element. In the case of the cladding rods, the axial field components can be expressed in terms of Bessel functions of the first kind (J_m) inside the rod. In the case of the cladding capillary, the axial components present the sum of Hankel functions of the first and second kind (H_m^(1) and H_m^(2)) in the capillary wall and Bessel functions of the first kind (J_m) in the capillary hollow core.
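The piecewise radial profiles just described are straightforward to evaluate with standard special-function libraries. The following minimal Python sketch assembles one cylindrical harmonic of an axial field around a cladding rod (J_m inside, outgoing H_m^(1) outside); the propagation constant, rod radius, azimuthal order, and the simple continuity rescaling at r = a are illustrative assumptions - the actual expansion coefficients come from the full boundary-condition matching described next.

    import numpy as np
    from scipy.special import jv, hankel1

    k0 = 2 * np.pi / 1.0          # vacuum wavenumber at lambda = 1 (micron units)
    n_rod, n_glass = 1.50, 1.45   # rod / matrix indices (contrast 0.05, as above)
    beta = 1.449 * k0             # assumed core-mode propagation constant
    a, m = 2.0, 3                 # rod radius (microns) and azimuthal order

    kappa_in = np.sqrt((k0 * n_rod) ** 2 - beta ** 2 + 0j)    # transverse wavenumber inside
    kappa_out = np.sqrt((k0 * n_glass) ** 2 - beta ** 2 + 0j) # ... in the surrounding glass

    def F_m(r):
        """One cylindrical harmonic: J_m inside the rod, H_m^(1) outside,
        rescaled so the profile is continuous at r = a (a simplification)."""
        c = jv(m, kappa_in * a) / hankel1(m, kappa_out * a)
        return np.where(r < a, jv(m, kappa_in * r), c * hankel1(m, kappa_out * r))

    r = np.linspace(0.01, 3 * a, 500)
    profile = np.abs(F_m(r))
    print("|F_m| at r = a/2, a, 2a: %.3e, %.3e, %.3e"
          % tuple(np.interp([a / 2, a, 2 * a], r, profile)))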
Matching the boundary conditions for the radial and azimuthal components of the core mode fields in the two domains leads to a matrix equation. It should be noted that the source-free J_m parts of the expansion in the neighborhood of the cladding rod or capillary i are due to the H_m^(1) fields radiated from the cladding rods or capillaries j ≠ i, and in order to obtain the matrix equation when applying the boundary conditions it is necessary to use Graf's addition theorem. The determinant of the matrix defines the propagation constants of the core modes, β, and the associated null vectors determine the modal fields. We cross-check the results of our calculations of the propagation constants of the leaky core modes using the commercial software package COMSOL Multiphysics, based on the finite element method (FEM). The complex propagation constants and the distribution of the fields of the leaky core modes have been found by solving the eigenvalue problem for the wave equation for the two types of micro-structured fibres. The waveguide losses have been calculated through the imaginary part of the propagation constant of the fundamental core mode by the above two methods for different geometrical parameters of the fibre and different wavelengths. Further, using the calculated values of the core mode fields, we computed the distribution of the transverse component of the Poynting vector and its streamlines. The positions of the optical vortices in the cross-section of the micro-structured fibre were determined from the equation for the streamlines of the transverse component of the Poynting vector of the core mode, dx/P_x = dy/P_y. The conditions P_x(x, y) = P_y(x, y) = 0 defined the singular points and lines of the streamline pattern in the cross section of the fibre. Optical vortices have been detected in the regions and individual points where the lines of zero values of the transverse Poynting vector components overlap or intersect. The analysis of the vortex distributions and loss dependencies allows us to identify the connection between the structure of the vortices and the minimum loss values in the micro-structured fibres.
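Numerically, the vortex-detection step described above reduces to a simple recipe: sample (P_x, P_y) on a grid, flag cells where both components change sign (candidate intersections of the zero lines), and integrate the streamline equation through the surrounding field. The sketch below uses a toy planar field as a stand-in for the computed Poynting vector; the field, domain, and starting point are illustrative choices, not fibre data.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy stand-in for (P_x, P_y): a vortex at the origin plus a weak outflow.
    def P(x, y):
        return -y + 0.1 * x, x + 0.1 * y

    # Step 1: candidate singular cells, where both P_x and P_y change sign.
    xs = np.linspace(-2.0, 2.0, 400)
    X, Y = np.meshgrid(xs, xs)
    Px, Py = P(X, Y)

    def sign_change(F):
        s = np.sign(F)
        return (s[:-1, :-1] * s[:-1, 1:] < 0) | (s[:-1, :-1] * s[1:, :-1] < 0)

    ii, jj = np.nonzero(sign_change(Px) & sign_change(Py))
    print("candidate singular cells: %d, e.g. near (%.2f, %.2f)"
          % (len(ii), xs[jj[0]], xs[ii[0]]))

    # Step 2: streamlines dx/P_x = dy/P_y, integrated as dx/ds = P_x, dy/ds = P_y.
    rhs = lambda s, xy: list(P(xy[0], xy[1]))
    sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], max_step=0.01)
    print("streamline (1.00, 0.00) -> (%.2f, %.2f)" % (sol.y[0, -1], sol.y[1, -1]))
    # Here the trajectory spirals outward (a spiral point); classifying a
    # singular point as node, saddle, vortex, or spiral follows from the
    # local Jacobian, in the spirit of the dynamical-systems analysis of ref. 30.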
2019-04-10T09:33:11.000Z
2019-04-10T00:00:00.000
{ "year": 2020, "sha1": "1181bb1f3c28a34a47912c21d57b42a53a55683e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-59508-z.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "00d2d8b9869fdc98eba8d8b9ab9fb6f52d67d5a0", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
218682230
pes2o/s2orc
v3-fos-license
Exploratory analysis of immune checkpoint receptor expression by circulating T cells and tumor specimens in patients receiving neo-adjuvant chemotherapy for operable breast cancer Background While combinations of immune checkpoint (ICP) inhibitors and neo-adjuvant chemotherapy (NAC) have begun testing in patients with breast cancer (BC), the effects of chemotherapy on ICP expression in circulating T cells and within the tumor microenvironment are still unclear. This information could help with the design of future clinical trials by permitting the selection of the most appropriate ICP inhibitors for incorporation into NAC. Methods Peripheral blood samples and/or tumor specimens before and after NAC were obtained from 24 women with operable BC. The expression of CTLA4, PD-1, Lag3, OX40, and Tim3 on circulating T lymphocytes before and at the end of NAC was measured using flow cytometry. Furthermore, using multi-color immunohistochemistry (IHC), the expression of immune checkpoint molecules by stromal tumor-infiltrating lymphocytes (TILs), CD8+ T cells, and tumor cells was determined before and after NAC. Differences in the percentage of CD4+ and CD8+ T cells expressing various checkpoint receptors were determined by a paired Student's t-test. Results This analysis showed decreased ICP expression by circulating CD4+ T cells after NAC, including significant decreases in CTLA4, Lag3, OX40, and PD-1 (all p values < 0.01). In comparison, circulating CD8+ T cells showed a significant increase in CTLA4, Lag3, and OX40 (all p values < 0.01). Within tumor samples, TILs, CD8+ T cells, and PD-L1/PD-1 expression decreased after NAC. Additionally, fewer tumor specimens were considered to be PD-L1/PD-1 positive post-NAC as compared to pre-NAC biopsy samples using a cutoff of 1% expression. Conclusions This work revealed that NAC treatment can substantially downregulate CD4+ and upregulate CD8+ T cell ICP expression as well as deplete the amount of TILs and CD8+ T cells found in breast tumor samples. These findings provide a starting point to study the biological significance of these changes in BC patients. Trial registration NCT04022616. Background Breast cancer (BC) is the most common malignancy in women, with over 1.3 million cases worldwide and 240,000 cases in the United States annually [1][2][3]. Approximately 93% of all newly diagnosed cases of BC in the United States are operable, but many patients require systemic chemotherapy in order to decrease the risk of locoregional and systemic recurrence [4]. Recently, there has been an increase in the use of neoadjuvant chemotherapy (NAC), especially for patients with triple-negative (TNBC) and human epidermal growth factor receptor 2 (HER2)+ disease [5]. Randomized, controlled, prospective studies that compared NAC with adjuvant chemotherapy have shown that patient survival is similar between these two approaches [6,7]. However, NAC offers several advantages over adjuvant chemotherapy, including the ability to increase the rate of breast conservation and to monitor for chemotherapy response [5,8]. Notably, pathologic complete response (pCR) following NAC has emerged as a reliable surrogate marker of improved disease-free survival (DFS) and overall survival (OS), especially in patients with TNBC and hormone receptor (HR)−/HER2+ disease [9]. Several studies have shown that the presence of tumor-infiltrating lymphocytes (TILs) is associated with higher rates of pCR to NAC [1,[10][11][12].
Furthermore, many studies have revealed that TIL levels are predictive of response to NAC and that for individuals with TNBC and HER2+ BC, TIL levels were positively associated with a survival benefit [10,11,[13][14][15]. These data suggest that the immune system may play a role in controlling breast cancer and that the cytotoxic agents used in NAC may function in part through modulation of the immune system. This observation opens up the possibility that immune therapies could be incorporated into NAC for BC. Several such approaches are currently under investigation in multiple clinical trials [16]. In the metastatic setting, the IMpassion130 trial showed a 7-month improvement in OS when the PD-L1 inhibitor atezolizumab was added to nab-paclitaxel chemotherapy in the front-line setting for patients with TNBC and positive expression of PD-L1 on the immune cells within the tumor microenvironment [17,18]. Similarly, results from the Keynote-522 trial have shown that addition of an immune checkpoint (ICP) inhibitor to standard BC NAC can improve the rate of pCR in TNBC patients [19]. Other studies that combine ICP inhibitors and NAC backbones are currently ongoing. For example, study NCI10013 adds atezolizumab to carboplatin and paclitaxel [20] and study NCT03289819 tests the addition of the PD-1 inhibitor pembrolizumab to neo-adjuvant nab-paclitaxel followed by epirubicin and cyclophosphamide. Thus far, only antibodies targeting PD-1, PD-L1, and CTLA4 have received FDA approval for the treatment of cancer. However, it is likely that drugs targeting additional ICPs, such as Tim3, Lag3, and OX40, could be approved in the future [21,22]. Tim3 is an inhibitory receptor that has been found to inhibit Th1 T cell responses, and there are several antibodies targeting Tim3 in development [21]. Lag3 is another checkpoint receptor expressed by regulatory T cells and TILs that has been shown to dampen anti-tumor immune responses [21]. Finally, OX40 is a costimulatory molecule expressed by activated CD4+ and CD8+ T cells [21]. Agonists of OX40 can induce T cell proliferation and expansion [23]. In order to effectively incorporate immune therapy into NAC for BC, it will be important to understand the changes that occur in the expression of ICP proteins during NAC, both in circulating T cells and within the tumor. Thus, the goal of this study was to evaluate the changes that occur in the expression of PD-1, CTLA4, Tim3, Lag3, and OX40 by circulating CD4+ and CD8+ T cells in response to NAC. Levels of stromal TILs and tumor PD-1/PD-L1 expression were also evaluated in BC patients receiving NAC. Study design Specimens for this analysis were obtained under an IRB-approved, single-arm correlative study that was conducted at The Ohio State University Comprehensive Cancer Center between May 2012 and March 2014 (IRB protocol No. 2010C0036). Eligible patients included adult women (≥18 years old) with biopsy-proven, nonmetastatic BC who, in the opinion of the treating physician, were suitable for NAC. Exclusion criteria were the presence of inoperable BC or receipt of chemotherapy for breast cancer prior to study enrollment. All patients were required to sign an IRB-approved informed consent form prior to enrollment. Neo-adjuvant chemotherapy Eligible participants received intravenous NAC as determined by the treating physician. The chemotherapy regimens employed in this study have previously been described and are listed in Additional File 1 [2].
Briefly, the majority of patients received 4 cycles of doxorubicin and cyclophosphamide given every 2 weeks at standard doses, followed by either 12 treatments of paclitaxel given weekly or 4 cycles of dose-dense paclitaxel given every 2 weeks. For patients with HER2+ BC, trastuzumab was administered alone or in combination with pertuzumab along with the paclitaxel. For all chemotherapy regimens, dexamethasone was utilized as an anti-emetic agent (frequency and timing detailed in Additional File 1). Peripheral blood samples were all obtained prior to administration of chemotherapy. All blood draws were performed 7 days or more from the last dose of dexamethasone. Residual post-NAC tumor samples were obtained three or more weeks after the last dose of dexamethasone. Sample collection and procurement Peripheral blood was collected prior to the first and last cycle of NAC for this study. Peripheral blood mononuclear cells (PBMCs) were isolated from peripheral venous blood via density gradient centrifugation with Ficoll-Paque (Amersham Pharmacia Biotech, Uppsala, Sweden), as previously described [24,25]. PBMCs were cryopreserved and stored at −80 °C until 1 × 10^6 PBMCs from all compared samples could be concurrently thawed and analyzed by flow cytometry. Assessment of ICP expression on CD4+ and CD8+ T cells was performed at baseline and at the time of the last chemotherapy treatment. Archived formalin-fixed, paraffin-embedded pre-NAC biopsies and post-NAC resection specimens were retrieved for analysis of TILs, CD8+ T cells, and PD-L1 and PD-1 expression. Flow cytometry for expression of ICPs on circulating T cells PBMCs were stained with fluorescent antibodies to CD4, CD8, CTLA4, PD-1, Lag3, OX40, and Tim3. Specific antibodies and fluorophores were as follows: CD4 FITC, CD8 APC, PD-1 PE, Lag3 PE, Tim3 PE, CTLA4 PE, OX40 PE. To perform flow cytometry compensation and verify fluorescent antibody efficacy, the AbC Total Antibody Compensation Bead Kit (Thermo Fisher Scientific, Waltham, MA) was utilized according to the manufacturer's instructions to determine positive and negative cell populations. Gating on CD4+ cells identified T helper lymphocytes and gating on CD8+ cells identified cytotoxic T lymphocytes. CD4+ and CD8+ T cells were subsequently analyzed separately for expression of CTLA4, PD-1, Lag3, OX40, and Tim3. All samples were run on a BD LSR-II flow cytometer and data were analyzed with FlowJo software (Tree Star, Inc.). Differences in the expression of ICP receptors before and after NAC were determined by comparing the percentage of CD4+ or CD8+ T cells expressing a given ICP. Analysis of tumor immune infiltrate A multi-color immunohistochemistry (IHC) multiplex assay simultaneously detecting PD-1, PD-L1, and CD8 expressing cells (Roche Tissue Diagnostics) was performed on whole sections from formalin-fixed, paraffin-embedded pre-NAC biopsies or post-NAC resected tumor specimens. In this assay, PD-L1 staining is brown, PD-1 staining is red, and CD8 staining is green. Membranous staining was considered to be specific. A cutoff of ≥1% was employed to define PD-1 or PD-L1 positive expression, as this was previously determined to be an appropriate measure of PD-L1 positivity and associated with improved outcomes for the addition of PD-L1 inhibitors to chemotherapy in several clinical trials [17,26]. PD-L1 positive expression in the tumor is reported as the percentage of PD-L1 positive tumor cells amongst total tumor cells.
Similarly, within the stroma, the amount of PD-L1 positive stromal/immune cells is reported as the percentage of PD-L1 positive stromal/immune cells amongst total stromal/immune cells. Total PD-L1 positive cells are reported as the total percentage of PD-L1 positive tumor and stromal/immune cells amongst total tumor and stromal/immune cells. The amount of CD8+ T cells within the tumor, stroma, and total sample was calculated by comparing CD8+ immune cells to total immune cells within the tumor area, stromal area, and entire area, respectively. TILs were identified on hematoxylin and eosin stained whole sections and defined as the percent of stromal area within/surrounding tumor containing infiltrating lymphocytes compared to the total area. Analysis of tumor specimens was performed by an experienced pathologist specializing in BC and the tumor microenvironment (ZL). Statistical analysis Statistical differences between treatment groups were determined using paired (when comparing pre- and post-NAC samples) and unpaired (when comparing between tumor subtypes) Student's t-tests (a minimal computational sketch of this paired comparison is given at the end of this report). On presented graphs, bars represent group means and each pair of connecting circles signifies individual patient values pre- and post-NAC. Patient characteristics Twenty-four women with operable BC were enrolled in this study. Two patients did not complete all of the required blood draws and were therefore only included in the tumor specimen analysis. Patient characteristics are summarized in Table 1. The median patient age was 48 years (range 32-70). All patients had an Eastern Cooperative Oncology Group (ECOG) performance status of 0 or 1, indicating that all patients were completely ambulatory. The majority of patients were Caucasian (n = 17) and pre-menopausal (n = 15). Eleven patients had TNBC, eight had HR+/HER2− BC, three patients had HR−/HER2+ BC, and two patients had HR+/HER2+ BC. Only one patient had stage I disease, while 20 and 3 patients had stage II and III BC, respectively. All 24 patients had invasive ductal carcinoma as the tumor histology. These characteristics are felt to be representative of a typical patient population that is offered NAC [27]. The overall rate of pCR, which is defined as no pathologic evidence of residual invasive cancer in the breast and sampled regional lymph nodes, was 41.7% (45.5% in patients with TNBC, 37.5% for patients with HR+/HER2− BC, 66.7% in patients with HR−/HER2+ BC, and 0% for patients with HR+/HER2+ BC). The rates of pCR and residual cancer burden indexes [28] by NAC regimen are reported in Additional File 1. The surgical management of the patients' BC following NAC is detailed in Additional File 2. Differences in ICP expression dependent upon breast tumor subtype were also examined. In Additional File 3, TNBC patients' peripheral blood CD4+ and CD8+ T cell expression of ICPs was compared to that of patients with other breast cancer subtypes. In this analysis, the only statistically significant difference seen was greater pre-NAC CD8+ T cell Tim3 expression in TNBC patients over patients with other breast cancer subtypes (p < 0.05). HR+ and HR− patient levels of ICP expression were also compared in Additional File 4. In accordance with the prior analysis, pre-NAC CD8+ T cell Tim3 expression was lower in HR+ blood specimens than in HR− samples (p < 0.01). No other statistically relevant differences were seen. Representative images of the TIL analysis are shown in Fig. 4a. In the pre-NAC samples, an average of 29.8% of the stroma contained TILs, compared to 24.9% TILs in the stroma of post-NAC samples (Table 2).
In the pre-NAC group, the range of stromal area containing TILs was 1-80%, with 3/6 samples having more than 10% and 2/6 samples having greater than 50% TILs. In the post-NAC group, the range of TILs was similar at 2-70%, with 11/17 samples having greater than 10% and 3/17 samples having greater than 50%. It should be noted that of the patients with available pre-NAC specimens, only one (…). Frequency and location of CD8+ T cells in tumor samples before and after NAC To evaluate changes in CD8+ T cell localization, the percentage of stromal or tumor areas containing CD8+ cells was calculated by dividing the area containing CD8+ cells by the total area in either the stroma or the tumor. In addition, the percentage of CD8+ cells within the entire sample was determined by combining the stromal and tumor analysis for each sample (i.e. tumor and stroma together). Representative images of the IHC analysis of CD8+ T cells are available in Fig. 4. In the stroma alone, an average of 24.6% of cells were CD8+ in the pre-NAC specimens, while an average of 21.2% of stromal cells were CD8+ following NAC. Within the tumor alone, an average of 12.0% of cells were CD8+ prior to NAC, and in the post-NAC samples, only 7.9% of cells were CD8+. In the pre-NAC samples, 18.3% (range 0.5-60%) of cells in the stroma and tumor combined were CD8+, while 15.7% (range 1-50%) in the post-NAC group were CD8+ (Table 2). PD-L1 and PD-1 positivity rates before and after NAC are summarized in Table 3, and the intensity of PD-L1/PD-1 expression in the tumor, stroma, and overall cells is listed in Additional File 5. Analysis by breast cancer subset and patients with paired samples There were three pre-NAC specimens and nine post-NAC specimens available for analysis of samples from patients with TNBC (Additional File 6). Additionally, there were two pre-NAC specimens and seven post-NAC samples from patients with hormone receptor positive breast cancer that were obtainable for study (Additional File 7). Overall, the levels of TILs, CD8+ T cells, and PD-L1/PD-1 expression in both of these groups remained stable after NAC. Four paired pre-NAC and post-NAC tissue samples were available for comparison and revealed amounts of TILs and CD8+ T cells, as well as PD-L1/PD-1 expression, to be mostly unchanged prior to and after NAC (Fig. 5). Since these patients by definition did not exhibit a pCR following neoadjuvant chemotherapy, it is not possible to interpret these paired results in the context of the full study population, in which the pCR rate was 42%. However, visualization of individual patient levels of peripheral blood T cell ICP expression next to the same patient's intratumoral PD-L1 intensity was completed (Additional File 8). Due to the small number of samples, no formal statistical analyses were performed to compare peripheral blood ICP levels to intra-tumoral levels of PD-L1, and no clear trends are seen in the graphs. Discussion NAC is an increasingly adopted treatment strategy for women with early-stage operable BC. Importantly, the effect of NAC on the expression of targetable ICP receptors, both in circulating T cells and within the tumor microenvironment, has not been well characterized. To address this issue, we present the results of a study that used flow cytometry and multi-color IHC to characterize expression of PD-1, CTLA4, Tim3, Lag3, and OX40 by circulating CD4 and CD8 T cells, as well as the level of TILs, infiltrating CD8+ T lymphocytes, and PD-1/PD-L1 expression within tumor samples obtained before and after NAC. Overall, this study found that NAC resulted in a decrease in checkpoint receptor expression (CTLA4, Lag3, OX40, and PD-1) by circulating CD4+ T helper lymphocytes.
In contrast, when looking at CD8+ T cytotoxic lymphocytes, there was an increase in CTLA4, Lag3, and OX40 expression following NAC. Only expression of Tim3 was not statistically different between baseline and post-chemotherapy samples on circulating CD4+ and CD8+ T cells. Intratumorally, we observed that fewer of our samples were considered to be positive for the expression of either PD-L1 or PD-1 following NAC. The percentage of stromal TILs, CD8+ lymphocytes, and PD-L1 positivity in these patients decreased after NAC. In contrast, these values were relatively stable between baseline and post-chemotherapy in the triple-negative tumors, although the small sample size (n = 3 for pre-NAC baseline biopsy and n = 9 for post-NAC resection residual tumors) precluded formal statistical comparisons. Furthermore, the small sample size did not allow for the testing of the association between pCR rate and levels of stromal TILs, CD8+ lymphocytes, and PD-L1/PD-1 expression. Several investigators have noted that the frequency of TILs is associated with increased rates of pCR. It has also been shown that TILs are associated with a survival benefit in patients with TNBC and HER2+ BC [13][14][15]29]. Furthermore, several of the agents currently used in NAC regimens have been shown to modulate aspects of the immune system. For example, doxorubicin has been shown to promote antigen presentation by dendritic cells and help drive antigen-specific CD8+ T cell responses in mouse models [30][31][32]. Cyclophosphamide can stimulate natural killer cell anti-tumor responses, as well as promote macrophage recruitment to tumors and skew them towards an anti-tumor M1-like phenotype [33][34][35][36]. There are also several reports supporting the notion that administration of cyclophosphamide enhances the action of tumor-specific adoptive T cell therapy [37][38][39]. Finally, paclitaxel has been shown to promote the cytotoxicity of tumor-associated macrophages, increase natural killer cell activity, and stimulate tumor-specific CD8+ T cell responses [40][41][42]. These findings suggest that incorporation of therapies aimed at leveraging the immune system against BC could lead to more effective NAC regimens and improve the rate of pCR. It should also be pointed out that all patients in this study received intravenous dexamethasone as a standard pre-chemotherapy medication to prevent nausea and vomiting during the anthracycline portion of chemotherapy and/or to minimize the risk of severe hypersensitivity reactions prior to paclitaxel administration. While the impact of episodic steroid use is unclear, it is possible that any use of steroids may also affect the immune tumor microenvironment. To date, knowledge about the influence of NAC on the expression of targetable checkpoint receptors has been limited. In order to optimally incorporate immune therapies into NAC regimens, it will be important to understand how these agents affect the host immune system as well as the ability of tumor cells to impact infiltrating T cells. Recently, a report published by Pelekanou et al. found that following NAC use in breast cancer cases there was a decrease in the frequency of TILs, while PD-L1 expression was relatively stable [1,43]. These results are consistent with the present analysis of pre- and post-treatment tumor specimens, except that this study found a decrease in the PD-L1 expression in residual tumors following NAC.
Furthermore, Pelekanou and colleagues showed that higher pre-treatment levels of TILs and PD-L1 expression were significantly associated with higher pCR rates [1]. These findings provide information that can be useful for incorporating immune therapies into NAC regimens for BC. (Table 3 note: values are denoted as the number of patients in each group, with the percentage of the cohort that is PD-L1 or PD-1 positive in parentheses.) The current work helps expand on these findings by determining the expression of targetable checkpoint receptors on circulating CD4+ and CD8+ T cells before and after NAC. This analysis revealed a significant decrease in the frequency of circulating CD4+ T cells expressing CTLA4, Lag3, PD-1, and OX40 following NAC. In contrast, the frequency of CD8+ T cells expressing CTLA4, Lag3, and OX40 increased following NAC. The reason for the dichotomous change in the frequency of CD4+ and CD8+ T cells expressing checkpoint receptors is unclear. However, this effect could be driven by differences in the activation status of circulating CD4+ and CD8+ T cells after NAC or differences in the effect of chemotherapy agents on cytokine production by the T cell subsets. (Fig. 5 caption fragment: … (f) stromal PD-L1 intensity, (g) intra-tumoral PD-L1 intensity, and (h) overall PD-1 intensity at these time points; N = 4 for each group; if a sample is not graphed it is due to values being 0.) The decreased expression of the co-stimulatory molecule OX40 by CD4+ T cells and its increase in CD8+ T cells makes it an intriguing target as well. The present study has several limitations that should be noted. First, the study was a single institutional experience and was limited by a small sample size in both the analysis of tumor specimens and circulating T cells. Also, the high rate of pCR contributed to the issue of not having substantial post-surgical samples. Furthermore, only four patients had paired tumor samples, since many patients enrolled in the study had their biopsy performed at an outside institution and thus these samples were unavailable for review. Additional Files 5-7 document the recorded pre- and post-NAC changes in stromal TILs, CD8+ T cells, and PD-L1/PD-1 expression. Due to the small sample size, a meaningful statistical analysis of the correlation between pCR and TIL/ICP levels would not be possible. Nevertheless, these findings should serve as a stimulus to investigate these changes in larger patient cohorts. Conclusions In conclusion, this study shows that NAC use results in significant but opposite changes in the expression of ICP proteins by circulating CD4+ and CD8+ T cells in BC patients. In addition, the few tumor samples available post-NAC treatment appeared to have smaller frequencies of stromal TILs and intratumoral CD8+ T cells. Also, fewer of these post-NAC tumor samples were positive for PD-L1 or PD-1 following NAC. To our knowledge, this study is the first to systematically assess peripheral blood expression of various ICPs together with changes in tumor immune infiltrates in women with non-metastatic BC. Understanding the effect of NAC on circulating and tumor-infiltrating immune cells will be important for optimally incorporating immune therapies into the NAC setting for BC. Furthermore, this work and that done by others serve as important data for the initiation of further studies to understand the mechanism and biological significance of these immunologic changes.
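As referenced in the Statistical analysis section above, the following minimal Python sketch illustrates the two quantitative steps used in this report: computing the percentage of gated T cells positive for a checkpoint receptor, and comparing per-patient pre- and post-NAC percentages with a paired Student's t-test. All fluorescence values, gate thresholds, and percentages are synthetic placeholders, not study data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Synthetic per-event fluorescence for one sample: CD4 (FITC) and PD-1 (PE).
    cd4 = rng.lognormal(2.0, 0.5, 10_000)
    pd1 = rng.lognormal(1.0, 0.8, 10_000)

    cd4_pos = cd4 > 15.0                            # hypothetical CD4+ gate
    pct_pd1 = 100.0 * np.mean(pd1[cd4_pos] > 8.0)   # % of CD4+ cells that are PD-1+
    print("PD-1+ fraction of CD4+ T cells: %.1f%%" % pct_pd1)

    # Paired comparison across patients (invented percentages, one pair per patient).
    pre  = np.array([12.1, 9.4, 15.2, 11.8, 10.3, 13.7, 8.9, 14.0])
    post = np.array([ 8.2, 7.1, 10.9,  9.0,  7.8, 10.1, 6.5, 11.2])
    t, p = stats.ttest_rel(pre, post)
    print("paired t = %.2f, p = %.4f" % (t, p))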
2020-05-19T14:42:51.180Z
2020-05-19T00:00:00.000
{ "year": 2020, "sha1": "2038c8eb132b1b35da3f932aecd7a6fb59f25216", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/s12885-020-06949-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "09f9b6bae5d5830869db9df4ad14a0a4131fb146", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
84846412
pes2o/s2orc
v3-fos-license
Emergent geometry from stochastic dynamics, or Hawking evaporation in M(atrix) theory We develop a microscopic model of the M-theory Schwarzschild black hole using the Banks-Fischler-Shenker-Susskind Matrix formulation of quantum gravity. The underlying dynamics is known to be chaotic, which allows us to use methods from Random Matrix Theory and non-equilibrium statistical mechanics to propose a coarse-grained bottom-up picture of the event horizon -- and the associated Hawking evaporation phenomenon. The analysis is possible due to a hierarchy between the various timescales at work. Event horizon physics is found to be non-local at the Planck scale, and we demonstrate how non-unitary physics and information loss arise from the process of averaging over the chaotic unitary dynamics. Most interestingly, we correlate the onset of non-unitarity with the emergence of spacetime geometry outside the horizon. We also write a mean field action for the evolution of qubits -- represented by polarization states of supergravity modes. This evolution is shown to have similarities to a recent toy model of black hole evaporation proposed by Osuga and Page -- a model aimed at developing a plausible no-firewall scenario. Introduction and highlights The study of black holes in M(atrix) theory holds a treasure trove of insight into quantum gravity and the nature of spacetime. As a non-perturbative formulation of M-theory, Matrix theory [1,2] can in principle access and potentially resolve many of the puzzles we associate with black holes. Early attempts at staging Matrix black holes have consisted of promising sketches [3]- [6] and numerical simulations [7]- [10]. We have learned that understanding black holes is related to studying strongly coupled Yang-Mills at finite temperature [11]- [13], and that there might be intricate non-local dynamics near the event horizon [14,15]. More recently, we have learned that Matrix theory is characteristically chaotic [6,16,17], and interactions can scramble initial value data at the fastest possible rate that is allowed by the postulates of quantum mechanics [18]- [25] -as also expected from black hole physics. In this work we ask if one can write a mean field coarse-grained description of the strongly coupled microscopic dynamics of Matrix theory in a manner that captures the essential features of black holes and informs us about the geometry near the event horizon. To illustrate through an analogy, if M(atrix) theory is to black hole quantum mechanics as BCS theory is to superconductivity, we are looking for the analogue of a Landau-Ginzburg description of the quantum physics of black holes -with the underpinning element of stochastic chaotic evolution. We know that Matrix theory is chaotic, and we know that one can often use the language of random variables, or in this case Random Matrix theory (RMT) [25]- [33] [6], to capture chaotic dynamics (a minimal numerical illustration of this RMT diagnostic is given at the end of this overview). We also know that RMT is closely related to the strong damping regime of Fokker-Planck stochastic evolution [34,35,36,26], whereby ergodic motion is effectively described with macroscopic variables. The suggestion is then to formulate a description of Matrix black holes where the entries of the Matrices are described through particles moving in a mean field potential -one that is obtained by coarse-graining over microscopic degrees of freedom that are engaged in ergodic motion. In this work, we show that such an effective description of black holes is indeed possible using Matrix theory. In the process of developing this effective model, we settle on a microscopic picture of Matrix black holes that is both intuitive and complex. Entries on the diagonal of the matrices incorporate the thermodynamics and encode information. These can be thought of as particles that mostly hang around near the surface of the would-be horizon. They are subject to a mean field potential whose shape we determine. An additional 'goo' of off-diagonal matrix entries glues these particles into clusters, effectively acting like bound states. These clusters contain around d particles each, for a black hole in d space dimensions. Figure 1 depicts a cartoon of the model. In the figure, the clusters are depicted as cells. The configuration is far from static, and in fact we expect that the cells continuously exchange particles and rearrange themselves. The rest of the matrix degrees of freedom, which constitute the overwhelming majority of the total, condense in a quantum ground state. It is possible that they should be thought of as a membrane stretched at the horizon, without any associated thermodynamics or entropy. Thermal energy is distributed in the dynamics of the cells as they slide near the horizon and interact with each other. We develop this model in detail, matching with all expectations from the dual M-theory supergravity description of a Schwarzschild black hole in the light-cone frame. In particular, Hawking evaporation [37]- [40] is reproduced and information loss is demonstrated to arise from the process of coarse-graining over otherwise unitary dynamics. It becomes clear that dynamics near the horizon has a non-local component when explored at short enough timescales, while being local at the longer timescales associated with Hawking radiation 3 . Most interestingly, we demonstrate that non-unitary evolution and information loss arise at the timescales for which the Matrix dynamics is strongly coupled and spacetime geometry is expected to be emergent in the dual supergravity language. This suggests that Hawking information loss is inherently tied to the premise that geometry near the horizon of a large black hole is smooth and well-defined. The microscopic degrees of freedom underlying black hole dynamics are Planck-sized bits that are interacting chaotically over Planckian timescales. Any description of the physics over timescales larger than the Planck time involves coarse graining over stochastic dynamics in a manner that leads to an effective quantum picture that is non-unitary. The notion of spacetime geometry arises at around those Planckian timescales, implying the breakdown of the geometrical picture of black hole evaporation as we approach the horizon. Put differently, the Hawking computation is robust when applied in smooth spacetime backgrounds over large enough timescales, yet the evaporation should still be regarded as unitary because the notion of geometry and spacetime is lost at the event horizon at short timescales. The outline of the text is as follows. In the first section, we present a brief overview of Matrix theory, followed by a review of Fokker-Planck dynamics and the light-cone Schwarzschild black hole in supergravity. We then systematically develop the effective model for the Matrix black hole, matching and checking against expectations on the dual low energy M-theory side. In the second section, we focus on the time evolution of information within the Matrix black hole.
In the process of developing this effective model, we settle on a microscopic picture of Matrix black holes that is both intuitive and complex. Entries on the diagonal of the matrices incorporate the thermodynamics and encode information. These can be thought of as particles that mostly hang around near the surface of the would-be horizon. They are subject to a mean field potential whose shape we determine. An additional 'goo' of off-diagonal matrix entries glue these particles into clusters, effectively acting like bound states. These clusters contain around d particles each, for a black hole in d space dimensions. Figure 1 depicts a cartoon of the model. In the figure, the clusters are depicted as cells. The configuration is far from static, and in fact we expect that the cells continuously exchange particles and rearrange themselves. The rest of the matrix degrees of freedom, which constitute the overwhelming majority of the total, condense in a quantum ground state. It is possible that they should be thought of as a membrane stretched at the horizon, without any associated thermodynamics or entropy. Thermal energy is distributed in the dynamics of the cells as they slide near the horizon and interact with each other. We develop this model in detail, matching with all expectations from the dual M-theory supergravity description of a Schwarzschild black hole in the light-cone frame. In particular, Hawking evaporation [37]- [40] is reproduced and information loss is demonstrated to arise from the process of coarse-graining over otherwise unitary dynamics. It becomes clear that dynamics near the horizon has a non-local component when explored at short enough timescales, while being local at the longer timescales associated with Hawking radiation 3 . Most interestingly, we demonstrate that non-unitary evolution and information loss arise at the timescales for which the Matrix dynamics is strongly coupled and spacetime geometry is expected to be emergent in the dual supergravity language. This suggests that Hawking information loss is inherently tied to the premise that geometry near the horizon of a large black hole is smooth and well-defined. The microscopic degrees of freedom underlying black hole dynamics are Planck sized bits that are interacting chaotically over Planckian timescales. Any description of the physics over timescales larger than the Planck time involves coarse graining over stochastic dynamics in a manner that leads to an effective quantum picture that is non-unitary. The notion of spacetime geometry arises at around those Planckian timescales, implying the breakdown of the geometrical picture of black hole evaporation as we approach the horizon. Put differently, the Hawking computation is robust when applied in smooth spacetime backgrounds over large enough timescales, yet the evaporation should still be regarded as unitary because the notion of geometry and spacetime is lost at the event horizon at short timescales. The outline of the text is as follows. In the first section, we present a brief overview of Matrix theory, followed by a review of Fokker-Planck dynamics and the light-cone Schwarzschild black hole in supergravity. We then systematically develop the effective model for the Matrix black hole, matching and checking against expectations on the dual low energy M-theory side. In the second section, we focus on the time evolution of information within the Matrix black hole. 
We track information encoded in the polarization states of the low energy Mtheory supergravity multiplet, and we write an effective qubit time evolution operator that is based on the stochastic model developed earlier. We show how the evolution becomes non-unitary at longer timescales because of the coarse-graining over chaotic dynamics, and correlate this with the emergence of spacetime geometry in the dual M-theory language. For short timescales, we write a unitary time evolution operator that describe the weakly coupled qubit dynamics near the event horizon. Finally, in the discussion section, we reflect on the implications and future directions. The effective model 2.1 M(atrix) theory overview The M(atrix) theory action is the dimensional reduction of 10 dimensional Super Yang-Mills (SYM) to 0 + 1 dimensions and is given by The gauge group is U (N ), with the X i s (i = 1, . . . , 9) and the Ψ in the adjoint representation of the group. In our conventions, we have where g s is the string coupling and s is the string length 4 . The Yang-Mills coupling is The length dimensions of the various quantities are: X ∼ 1 , t ∼ 1 , and ψ ∼ 0 . The theory is purported to be a non-perturbative formulation of M-theory in the lightcone frame in the following scaling limit 5 This corresponds to focusing on energies that scale as E ∼ g s / s . It is sometimes convenient to introduce alternate M-theory variables , τ , and ξ that remain fixed in the scaling regime of interest For example, the corresponding light-cone M-theory energy scale is ∼ R/ 2 P = fixed. In the map onto light-cone M-theory, N/R is interpreted as total light-cone momentum. Light-cone energy scales inversely with light-cone momentum, hence as (R/N ) × mass 2 . Depending on the coupling regime, the number of active degrees of freedom of a configuration scales as N k , where k = 2 in the weakly coupled regime, and k = 1 at strong coupling. Compactifying light-cone M-theory to d space dimensions, we can describe it through Matrix theory with d of the 9 X i matrices removed from the dynamics, assuming that the compact directions are small enough that associated modes are too heavy to excite. Alternatively, one can use d + 1 dimensional SYM for a full description of the compactified theory, obtained from the current setup via a T-duality map. The relation between light-cone M-theory and Matrix theory is known to hold for N → ∞, but the correspondence is valid for finite N as well -between Discrete Light-Cone Quantized (DLCQ) M-theory and finite N matrix theory, where N is mapped onto units of M-theory 4 Matrix theory is sometimes written in Planck scale conventions, related to the one we use by X → Y / √ R and t → τ /R. Using units such that 2π 3 P = 1 where P is the eleven dimensional Planck length, the action takes the form whereẎ = dY /dτ . In this alternate convention, the length dimensions of the various quantities become P , then X = s , given that P = g 1/3 s s . 5 This scaling limit corresponds to the decoupling regime for holographic duality [41,11,42,43] -as applied to D0 branes. The Matrix theory conjecture is thus in the same class of gravity-SYM correspondences that give rise to the AdS/CFT map. discrete light-cone momentum [44]. In this work, we will work at finite but large N in trying to describe an M-theory black hole that is large enough to have small curvature scales at its horizon. 
From chaos to a stochastic evolution Recently, Matrix theory has been demonstrated to be highly chaotic [16,17,6], with dynamics that can scramble initial value data in a time that scales logarithmically with the entropy [20,21,22,23,25] -as opposed to the more common power law behavior. This allows one to capture Matrix theory physics, in the appropriate setting, by treating the matrix entries as random variables. Describing a non-extremal black hole is certainly a good candidate setup for exploring chaos in Matrix theory [33,45,25]. And techniques from the well-established field of Random Matrix Theory (RMT) [26,27,28,29,30] can then be used to tackle the problem. RMT is most powerful when one is dealing with a theory with a single matrix; it then allows a robust statistical treatment of the eigenvalues of this matrix. In our setup, we will be interested in studying a configuration of matrices in Matrix theory that represents a d dimensional Schwarzschild black hole in the dual light-cone M-theory. We will assume from the outset that we work with spherically symmetric configurations, where the different X i matrices are chaotic and uncorrelated in different space directions. Hence, each matrix entry in the d matrices X i , with i = 1, . . . , d, is random and not correlated with any other matrix entry. This configuration is to be mapped onto a black hole in the dual M-theory -with a fixed temperature and associated Hawking evaporation phenomenon. The fermionic matrix entries of Ψ in (1) will be treated as a component of the thermal soup -in equilibrium with the bosonic matrix entries. At finite temperature, we will hence mostly focus on the bosonic sector with a mirror image at play in the fermionic sector being implied. However, we do need to incorporate the one-loop quantum contribution of the fermionic degrees of freedom to the mean field potential for the bosonic stochastic variables. Furthermore, later on, we will use the fermionic variables as probes to track information evolution in this thermal soup. We start by noting that RMT is closely related to stochastic physics. In particular, since the work by Dyson [26], it has been demonstrated that RMT dynamics can be properly captured by the strong damping regime of Fokker-Planck evolution. We present here a quick overview of the subject. In RMT, each matrix entry can be thought of as a stochastic particle evolving in an mean field potential. For a particle with position r and velocity v in d space dimensions, we can study it through the probability function which represents the probability of finding the particle at time t within r and r + dr and v and v + dv. In our setup, we will consider matrix configurations that are spherically symmetric in d dimensions. We will then focus on probability profiles where Here, the v θ i are d−1 components of v in the angular directions, and v = v r . Correspondingly, the mean field potential is spherically symmetric 6 V (r) → V (r) (9) and the Fokker-Planck equation takes the form where T is the temperature of the environment, γ is a damping parameter, and m is the mass of the particle. This then allows us to study the evolution of the matrix entry in a statistical framework. The spherically symmetric Fokker-Planck equation is solved by the equilibrium time-independent profile C here is a normalization constant. 
Note that this non-relativistic treatment is consistent with Matrix theory since light-cone M-theory has Galilean symmetry with dispersion relation E LC = p 2 /2p LC , where the light-cone momentum p LC ∼ 1/R plays the role of Galilean mass. As mentioned above, the relation between RMT and stochastic physics arises in the regime of strong damping Focusing on this regime, we also write the probability profile as 6 The model we develop involves time averaging over stochastic, chaotic dynamics. The cluster tiling of Figure 1 is not rigid and very dynamical over timescales shorter than the Hawking timescale. It is then reasonable to expect that, at timescales larger than the characteristic timescale associated with cluster dynamics, an approximate spherical symmetry sets in. Of course, going beyond this coarse model one needs to consider the possible breaking of the spherical symmetry [31,32]. integrating over all velocities. The resulting evolution equation is known as the Smoluchowski equation The radial probability current that follows from (14) takes the form which we will use later in understanding evaporation through stochastic diffusion. Our goal is to develop an effective model for strongly coupled chaotic Matrix theory, using the Smoluchowski equation with r representing matrix entries in the bosonic matrix i X 2 i ∼ X i of (1) -since different directions in space are statistically uncorrelated. We then need to identify the relevant mean field potential V (r), mass m, temperature T , and damping parameter γ. It is worthwhile noting that an alternate and equivalent approach is to track the evolution of moments of random matrix entries. If χ represent any matrix entry, then the Smoluchowski equation with a quadratic potential is equivalent to stochastic fluctuations given by which then imply the differential equations for the moments The timescale of stochastic evolution can then be easily read off as It is important to note that this is not the timescale over which one coarse-grains the random motion to arrive at a mean field potential for stochastic variables. This other timescale, which we call the stochastic timescale t stoch , must be shorter than the thermal timescale, t stoch < t T , and is determined from the process of averaging over microscopic dynamics. We next need to determine the parameters of the model. We will build this effective description of strongly coupled chaotic Matrix theory by using knowledge of the gravity dual, and of the microscopic string theory dynamics that underlies Matrix theory. The light-cone Schwarzschild black hole We start by reviewing the dual gravity picture of the Matrix theory setup of interest -a light-cone M-theory Schwarzschild black hole [46]. The corresponding geometry is obtained by Lorentz boosting a d dimensional Schwarzschild black hole in the light-cone direction with a boost factor given by r h /R, where r h is the radius of the black hole horizon. While the horizon geometry is unchanged and the entropy or area in Planck units remains the same, the Hawking temperature is red-shifted The Hawking radiation flux from evaporation takes the form in general d dimensions. 
The thermal timescale associated with the Hawking temperature is then The entropy is related to the black hole mass M bh as usual S ∼ M bh r h , and the evaporation process can be described by [47,48] Hence, the black hole lifetime is given by Beside the timescale t h and t life , the shorter scrambling timescale determines the timescale over which the black hole scrambles information. We have written all these relations in forms that can be compared to the Matrix theory stochastic model in the choice of units presented earlier. In our SYM choice of units, the entropy of the black hole is written as For a large black hole, we see that we must require r h s (27) leading to small curvature scales at the black hole horizon. The task next is to model an effective Matrix theory stochastic system that reproduces these properties of a light-cone Schwarzschild black hole. A conjecture for an effective model In a perturbative regime, Matrix theory consists of ∼ N 2 degrees of freedom as all matrix entries participate in the dynamics. In early models of a Schwarzschild black hole in Matrix theory, the authors of [3,4,5] noted however that, to reproduce the correct equation of state of a light-cone black hole, one must have the entropy proportional to N at strong coupling, This implies that only N of the entries in each matrix X i are to participate in the thermodynamics of the Matrix black hole; that is, most degrees of freedom must be 'frozen', given that N 1 follows from (26) and (27). Inspired from the works of [3,4,5], we then propose that the thermodynamics of the Matrix black hole is carried by the N diagonal entries of the X i matrices. Information in the black hole would also be carried by diagonal degrees of freedom only. These entries can be sometimes interpreted as coordinates of the corresponding D0 branes underlying Matrix theory. Entropically, these order ∼ N degrees of freedom would like to spread to infinity -the theory even admits flat directions for this purpose. However, perturbatively there can be an initial cost in energy in doing so from strings stretching between the D0 branes -i.e. off-diagonal modes of the matrices. Presumably, taking strong coupling effects into account, the configuration forms a metastable ball of size r h , the black hole radius, along with decay channels that implement the process of Hawking evaporation. As a diagonal matrix entry random walks its way out, a bit of the black hole evaporates away [10]. If N diagonal degrees of freedom are to spread in a volume r d h , average inter-brane spacing is generically parametrically much larger with N than if they are spread over an area . And since inter-brane spacing is costly in energy, we can start seeing that the proper model of a Matrix black hole would involve the diagonal entries of the matrices spread on the surface of a would-be black hole horizon. Figure 1 shows a cartoon of the setup. Figure 2(a) shows a cartoon of a matrix X i , focusing on a sub-block associated with a group of 'nearest-neighbor' branes 7 . Using the permutation subgroup of U (N ), we can The δXs refer to the off-diagonal entries spanning clusters; the off-diagonal entries within a cluster are in the shaded block, denoted by δx. (b) General structure of non-zero entries in the matrices for different space dimensions d. The d − 1 labels refer to the number of active columns or rows in the first row or column, respectively. The shaded diagonals start within the shaded square in (a). 
always arrange to sort the matrix entries as depicted. We expect that a certain number of branes, of order d−1, whose coordinates appear as x in the figure, would be close enough that corresponding matrix off-diagonal modes, labeled δx in the figure, can be light. This still would not affect the S ∼ N requirement as the number of such modes would be independent of N . Branes much farther away, over a distance scale r h , would be much heavier. We propose that beyond the d × d sub-block, all other off-diagonal modes would be too heavy to excite and would freeze or condense in a Bose-Einstein (BE) condensate. Indeed, if we look at the critical condensate temperature T c , we would expect 8 which we can quickly see to be much larger than the Hawking temperature for d ≥ 2. It is possible that this BE condensate describes a membrane-like configuration stretching at the black hole horizon [3,4,5,49,50]. In a coarse-grained effective language, we would set these heavy off-diagonal modes, the δXs in the figure, to zero. Interestingly, fuzzy spheres of various dimensions in Matrix theory have been shown to necessitate the activation of more off-diagonal modes that spread away from the diagonal [51,52]. For example, a 2-sphere (d = 3) is realized through SU (2) representations, which activate 3 diagonal lines along the matrix diagonals; and a 4-sphere (d = 5) activates 5 diagonal lines. Our model then fits well with this pattern. Figure 2(b) shows the general scheme. The diagonal entries within the d×d sub-block of matrices would be spread out from each other at a distance that is around the Planck scale and might naturally involve marginal bound state physics. In M-theory language, this would correspond to supergravity excitations carrying ∼ d units of light-cone momentum. These marginal bound states are conjectured to exist in Matrix theory and are a necessary ingredient for the dictionary between Matrix theory and M-theory [1]. The off-diagonal modes δx in these sub-blocks would remain relatively light and participate in making the physics of these clusters non-local, at around the Planck scale. They would correspond to strings joining nearest neighbor branes, and henceforth we refer to the δxs as 'off-diagonal nearest neighbor modes' 9 . and bottom left of each matrix are active as well. This is a detail in the description, in the large N d limit, we assume has subleading effect on the larger picture. 8 The right hand side is the expression for the number of degrees of freedom in a Bose condensate in d dimensions. 9 Our treatment explicitly picks out a 'frame' or gauge where the diagonal and off-diagonal matrix entries Our stochastic model would then involve writing an effective theory of all the modes that remain active -diagonals x and nearest neighbor modes δx -while integrating out all other δX modes. We need to provide two separate stochastic treatments, one for the x modes on the diagonal, and another for the off-diagonal nearest neighbor modes δx. The first would describe the coarse-grained thermal state of the black hole; the second would describe finer cluster physics within each matrix sub-block. We will next demonstrate how these two sectors effectively decouple and can reliably be treated through stochastic methods due to a hierarchy in the relevant timescales. In the Matrix theory scaling regime time scales as g s / s ; this allows us to measure timescale through the effective Yang-Mills coupling g eff (τ ) 2 defined as which remains finite in the scaling regime. 
Hence, larger effective coupling corresponds to longer times since 0 + 1 SYM is super-renormalizable. In this language, the first timescale t h from (22) arises from the thermodynamics of the diagonal modes, of order N in number; this gives g s The scrambling timescale t scr of (25) is then given by The lifetime of the configuration t life from (24) should correspond to These statements follow from the expected black hole physics on the dual side of the correspondence. Note that all three timescales correspond to regimes where the Matrix theory SYM is strongly coupled. have very different physical roles. We expect that this setup corresponds to a description of the Matrix black hole from the perspective of the outside observer. U (N ) gauge transformations would naturally change the perspective, while mixing the roles of diagonal and off-diagonal entries. More on this in the Discussion section. On the SYM side, perturbatively, we know that off-diagonal modes have dynamics given by 10 where ∆r is the distance between the corresponding diagonal entries; this gives a frequency of We can then easily see that if ∆r ∼ s for nearest neighbor off-diagonal modes, δx modes can be treated as heavy and can hence be integrated out over time scales This is the strong coupling transition point for the SYM, a regime that we typically associate with emergence of geometry on the dual M-theory side. The relevant strong coupling benchmark is given by g eff (τ ) 2 ∼ 1, instead of the one using the 't Hooft effective coupling g eff (τ ) 2 N ∼ 1, because the dynamics in question is that of individual partons in the black hole soup, as opposed to the interaction of the black hole as a whole. More on the interplay between these two couplings and the emergence of a valid geometrical description can be found in the Discussion section. Next, looking at off-diagonal modes δX that straddle diagonal modes separated by a large distance of order ∆r ∼ r h , we see from (36) that these can be integrated out for timescales This is the shortest of the timescales and determines the regime where a stochastic treatment is valid: it corresponds to timescales where integrating out the δX's leads to a stochastic mean field potential for the diagonal modes. Note also that, for r h s , part of this regime overlaps with weak coupling in the Matrix SYM. Figure 3 summarizes the various timescales and clarifies the range of validity for the effective model that we propose. The stochastic formalism with a mean field potential for the diagonal modes requires coarse graining over time scales longer than t stoch . For t > t stoch , δX's are frozen in a BE condensate. We can then incorporate the effect of the δX's into a mean field potential for the modes on the diagonal. The nearest neighbor off-diagonal modes, Figure 3: The hierarchy of timescales for event horizon dynamics. Timescales t < t o are associated with non-local physics within D0 brane clusters, but timescales t > t stoch allow a local description for coarser inter-cluster dynamics. the δx's, cannot be integrated out at these timescales. We leave them part of the degrees of freedom participating in the physics of cluster formation. For timescales t > t o , the nearest neighbor modes are heavy as well and are associated with high frequency dynamics that can be coarse grained and described through a stochastic treatment. However, the δX modes will always have a much higher frequency (for r h s ) and hence will still determine the mean field potential for the diagonal modes. 
Finally, thermal timescales, t h , t scr , and t life are all much longer and live well within the regime of a stochastic treatment that coarse grains physics faster than t stoch . We then list in one place the set of observations underlying our model: • We have a stochastic effective description for diagonal modes for t > t stoch , or (g eff (τ ) 2 ) 1/3 s r h . We integrate out the off-diagonal modes that straddle widely separated modes on the diagonal. • Strong coupling corresponds to timescales t > t o , or (g eff (τ ) 2 ) 1/3 1. In this regime, all off-diagonal modes are heavy, but the effect of nearest neighbor off-diagonal modes on diagonal modes is sub-leading. We associate emergence of geometry on the dual M-theory side with the onset of strong coupling in Matrix theory [2,53]. At timescales t stoch < t t o , we might be able to write a stochastic effective description of D0 brane cluster dynamics. We expect that at around t ∼ t o , the degrees of freedom of Matrix theory organize in clusters of about d nearest neighbor branes moving in the larger thermal soup. • Hawking evaporation physics sets in at t t h , or (g eff (τ ) 2 ) 1/3 ∼ r h s 2 1, well within the regime of validity of the stochastic treatment. It is useful to write some of these timescales in M-theory Planck units. Using (6), and the fact that light-cone time is boosted by a factor of P /R, we find and Hence we see that τ o correspond to Planck scale time in M-theory language. As we shall see, all this means that the chaotic microscopic dynamics that underlies black hole horizon physics is associated with a characteristic timescale that is given by the Planck scale. A well-defined notion of spacetime geometry necessitates coarse graining over longer timescales. Our next task is to develop the stochastic effective descriptions of diagonal and nearest neighbor off-diagonal modes -the first describing black hole thermodynamics and evaporation, the second giving us a crude peak into brane cluster/bound state dynamics. Modes on the diagonal In this section, we propose a mean field stochastic potential for diagonal modes, valid over timescales t > t stoch . Using spherical coordinates, we posit writing r 2 = x 2 i , where x i is any diagonal mode of X i . The potential is parametrized by two scales, r 0 and V 0 , and we need to determine these two parameters by comparing the resulting dynamics to that of a light-cone black hole. Note also that we have incorporated quantum effects that we know would arise from the fermionic sector of Matrix theory: the θ(r 0 − r) flattens the potential so as to model the expected flattening of the potenial from supersymmetry-based cancellations of zero mode energies 11 . We start by noting that the only scale near the horizon of the Schwarzschild black hole is given by r h 12 . We then start by setting fixing the size of the stochastic diagonal fluctuations to within the would-be horizon size. The temperature of the soup should naturally be the Hawking temperature in the light-cone frame The mass of a stochastic particle should be set to the mass of a D0 brane This leaves us with determining the damping parameter γ and the potential scale V 0 . We start by looking at evaporation flux from the thermal soup. Following [62], we arrange for a steady state scenario for the probability distribution given by where u = r − r 0 and C is a normalization constant to be determined. 
We need to find f (u) given the boundary conditions where the first one follows from matching with the equilibrium configuration at r = 0, while the second one amounts to absorbing the evaporation flux at r = r 0 , corresponding to evaporation to infinity. The Fokker-Planck equation at strong damping then leads to where for the mean field potential at hand. The solution is given by the error function Integrating over the velocities we have which then leads to the current We will see below that when we find that V 0 ∼ T h . We then note that erf(−r 0 κ/2) −1 . For erf(−x), the function near x 1 is very well approximated by −1 with corrections suppressed exponentially as e −x 2 /x. We determine the normalization factor C using For this, we write near r 0, and near r r 0 . We then get up to a numerical factor. The probability current near r 0 takes the form which then leads to the evaporation flux which we can then match with Hawking evaporation at temperature T h This gives one of the two conditions we need to determine γ and V 0 . The other condition comes from the well-known one-loop effective potential of a probe D0 brane in the background of N D0 branes. Using M-theory Planck units, we have [49] where v is the relative velocity of two partons at a separation r ∼ r h . While this is a perturbative result in the Matrix SYM, it is know to lead to an exact match with the dual M-theory scenario [49] implying that it is valid at strong coupling as well 14 . Remembering that the black hole entropy is given by in Planck units, and saturating the Heisenberg uncertainty bound for each parton [3,4,5] v 13 If we want to include the kinetic energy of the evaporated bit, we would get with ω being the kinetic energy, giving the standard black body spectrum. 14 There have been suggestions that a non-remormalization theorem perhaps underlies this finding [2]. we get the scale of the potential energy at the size of the horizon Rescaling to SYM units using (6) gives the same relation (r h → r h √ R, E → E/R). We then naturally identify this energy scale with the depth of the mean field potential Finally, from F = F h , we then get The latter relation implies that which corresponds to a borderline strong damping regime (12) -needed for consistency with RMT. We can now look at the quantum and thermal vacuum expectation values of a mode x on the diagonal, given by For the given potential and parameters, we have leading to borderline thermal regime, which implies that the diagonal modes are barely excited above the ground state. We also note that odd moments vanish at equilibrium, so that We then have succeeded in developing a stochastic model for diagonal mode dynamics that matches with Hawking evaporation. As a result, a consistency check shows that this stochastic evolution has characteristic timescale given by (19) as required. Off-diagonal nearest neighbor modes At timescales t ∼ t o , where Matrix theory enters the strongly coupled realm, we have the possibility to describe clusters of d nearest neighbor branes through stochastic means. The clusters are marginally held together and we expect this dynamics to be a delicate one, given their natural overlap with the physics D0 brane marginal bound state formation. Nevertheless, we will use the methods of stochastic dynamics to try to describe the problem, bearing in mind that we aim only to identify scaling relations of what is most likely a very subtle cluster formation process. 
We model the potential for the nearest neighbor offdiagonal modes V δx with a simple quadratic confining form, and the only relevant scale is the curvature V (0). For nearest neighbor diagonals, we expect an inter-brane separation of ∆r ∼ s , leading to a perturbative potential for the corresponding off-diagonal modes given by This is a perturbative result but we extend it to t t o as a scaling relation. The thermal and quantum vacuum expectation values are where in the thermal expression, we want to think of T as a scale for kinetic energy within the bound system. We would expect ground state physics, implying as the expected scale for kinetic energy in the cluster. The mass parameter would still be given by Finally, we propose that the strong damping bound needed by RMT should be valid, and at worst saturated identifying the damping parameter γ for cluster dynamics. As a sanity check, we can verify that the associated characteristic timescale for the stochastic dynamics is which again matches well with our expectations that the relevant dynamics is at the onset of strong coupling in the SYM theory. Finally, the expected size of the cluster becomes which also syncs well with our expectation that one thermal parton is to occupy one Planck area at the black hole horizon 15 . Quantum information In this section, we want to describe how information evolves in the stochastic model we developed above. For this purpose, we need to look more closely at the fermionic degrees of freedom of the Ψ matrix in (1). It is known that these correspond to the polarizations of the light-cone M-theory supergravity multiplet -the graviton, the gravitino, and the 3-form gauge field [1]. That is, in the low energy regime, we can think of an entry on the diagonal in the X i 's as the coordinate of a supergravity particle whose flavor and polarization state is determined by the corresponding entry in the Ψ matrix. We can expect that information in an M-theory black hole can be encoded in the polarization states of a thermal soup of supergravity excitations. We would then want to study the time evolution of the Ψ matrix within the effective model we have developed. Note that the quantum contribution from the fermionic modes in their ground state has already been taken into account in the shape of the mean field potential for the diagonal bosonic modes. In the spirit of RMT, the equilibrium dynamics of the fermionic and bosonic matrix entries are treated as statistically uncorrelated. This justifies working with the bosonic sector by itself as we have done so: it is assumed that a corresponding thermal state is also set up in the fermionic sector as the two sectors are in thermal equilibrium. Our goal now is to track how information encoded in the polarization states evolves when this equilibrium configuration is slightly perturbed. We could for example consider one particularly interesting scenario, the emission of a supergravity particle from the stochastic soup, as a matrix entry of X i ventures off to large distances. We would choose a particular matrix configuration that can describe this situation, and analyze the evolution of the corresponding bit of quantum information in Ψ. 
Qubit dynamics and M-theory polarizations We start by considering a d = 3 matrix configuration that looks like 16 where X bh and Ψ bh are a (N − 2) × (N − 2) sub-blocks representing part of the black hole, and the remaining x bh /ψ bh and x/ψ represent 1 × 1 entries that are bits of the black hole that will participate in an emission process. The particle with coordinate x and polarization state ψ has perhaps ventured outside the black hole via ergodic motion. The δx mode is a nearest neighbor off-diagonal, implying that x bh and x are part of a cluster. The rest of the matrix entries start off in an equilibrium state at temperature T h . Note that δx bh and δψ bh are N − 2 component vectors. The fermionic part of the Matrix theory action is given by (1) Quantizing the fermionic matrix entries, we have where α and β are 10 dimensional spinor indices, α, β = 1, . . . , 16, remembering that the matrix entries Ψ ab are Majorana-Weyl in 10 spacetime dimensions. Applying this quantization to the matrix configuration (82), we get for the off-diagonal modes while the diagonal entries lead to a Clifford algebra The latter means that we can introduce new raising/lowering spinors on the diagonal by where we now restrict α = 1, . . . , 8. We then have as needed. In general, the fermionic sector then consists of 8 N (N − 1) qubits from offdiagonal modes and 8 N qubits from the diagonal modes for a total of 8 N 2 qubits corresponding to 2 8 = 256 polarization states of the M-theory supergravity multiplet -one for each of the N 2 matrix degrees of freedom. Using (82), we can then expand the action (83) treating all matrix entries as stochastic variables. Furthermore, given spherical symmetry, we expect all spatial directions to be statistically equivalent so that we can write x i → x for all i. We get the action Throughout, we use a symmetric representation for the Γ i s. Note that Γ 2 = 1 and Tr Γ = 0 so that the eigenvalues of Γ are ±1. We will then choose the convenient representation where Taking the thermal vacuum expectation value of (89), we see that the thermal average of the action S ferm vanishes at equilibrium given that we know This is simply the statement that, once equilibrium is achieved, we have two separate systems -a bosonic and a fermionic one -that can be treated as two thermal components in equilibrium at the same temperature. The interesting physics arises when we consider a perturbed configuration, for example one corresponding to x − x bh being momentarily large -describing the process of evaporation of a bit of the Matrix black hole. The subsequent relaxation process would be driven by the couplings in (89) between bosonic modes and qubits. We can analyze this physical setup by looking at the stochastic effective action of the qubits provided we arrange proper boundary conditions where x and δx are initially perturbed away from equilibrium. In the next section, we develop this method of tracking qubit information evolution. Qubit action We expect that a small perturbation should not affect the whole system appreciably on short enough timescales. This means that if we were to perturb x and δx in (82) off-equilibrium, X bh and δX bh (as well as Ψ bh and δΨ bh ) would remain in equilibrium as long as N 1. 
Using techniques from [63], given a stochastic variable χ coupling to other degrees of freedom F (t) via S = dt χ F , we can write an action where χ would be x or δx from earlier, and where the Stochastic action is T is the temperature to which the perturbed χ relaxes to, and the path integration involves boundary conditions corresponding to the quenching process of interest. The potential V , the damping parameter γ, and the mass m are all determined from our previous discussion in Section 2. F (t) can be obtained from (89) and is bilinear in the qubit variables. It can easily be shown that the Smoluchowski equation for χ given by (14) follows from S stoch [63]. To evaluate the path integral, we start with the classical equations of motion where If χ represents a radial coordinate x 2 i in a spherically symmetric setup as given by (42) we get instead Since V 0 ∼ T for any of the bosonic perturbations of interest, Ω has then the same scale irrespective of symmetry. We solve the sourceless classical equation and we easily find with F = 0, where χ i is an initial off-equilibrium configuration, and χ f is an equilibrium configuration χ relaxes towards. The classical contribution to the action is then where we take the initial time t i = 0. The quantum contribution is given by with the associated Green's function In summary, we arrive at an action for the qubit variables -hidden in the F (t) -of the form describing the evolution of the relevant qubits as the bosonic stochastic variable χ relaxes -after a quench described by the boundary conditions χ i and χ f . Note that the second part of (103) is imaginary and this implies that the qubit evolution would be in general non-unitary. This piece involves quartic qubit interactions and would be responsible for scrambling information away as the background evolves stochastically. This is not surprising yet an important observation: we are then able to associate information loss in Hawking radiation to the scheme of coarse-graining over short timescales that results in an effective model of what otherwise is microscopic unitary evolution of information. That is, we see how averaging over chaotic dynamics in Matrix theory is responsible for information loss in the dual low energy M-theory or supergravity. Below, we will see that when this non-unitary piece of the effective dynamics becomes important, we expect the emergence of geometry on the dual M-theory side. Our goal next is to consider scenarios where χ, or x and δx, are perturbed away from equilibrium, and then we want to track the evolution of the qubits described by ψ and δψ. Long timescales Consider the qubit couplings given by (89) where x and δx are arranged to start off in an off-equilibrium configuration. Neglecting the back reaction of this perturbation onto the black hole, we can take X bh = x bh = δX bh = 0 (104) so that we have We want to develop the action of the qubits using (93), which then gives (103) where χ represents x or δx, and F (t) can be read off from (105). Before looking at the details, notice that the second term in (103), which is quartic in the qubits, is imaginary and renders the evolution non-unitary. The term is the result of coarse-graining over the stochastic variables x and δx and naturally leads to information loss. The scale for this non-unitary piece is given that the propagator G(t, t ) scales as δ(t − t )/(∂ 2 t − Ω 2 ) and the fermions are dimensionless. 
Irrespective of whether χ represents x or δx, we have From (67) and (96), we have when χ is identified with x; while from (74) and (96) we instead have when χ is identified with δx. Hence, for t t o , the non-unitary coupling scales as whether χ represents x or δx. We then see that this coupling, and hence information loss, sets in for timescales of order t ∼ t o , where the effective dimensionless Yang-Mills coupling becomes order unity and Matrix theory starts describing emergent spacetime geometry in the dual formulation. For shorter timescales, t t o , the evolution is effectively unitary, given by the first semi-classical term in (103). Note however, that for t stoch < t t o , the dynamics is non-local, given by the Planck scale cluster physics and the light nearest neighbor off-diagonal modes δx of the matrices. Short timescales Let us first start by writing the full qubit action (103) that follows from using (105). When χ is identified with the diagonal coordinate x of (82), we have and obtained from (91) and (105), and where δψ α ≡ δψ α+8 with α = 1, . . . , 8. The 'dot' represents a sum over 8 qubits, i.e. δψ · δψ ≡ α δψ α δψ α . As mentioned above, the second non-unitary piece is negligible for t t o . Looking at the first term of (111), we can see that it provides mass to the δψ and δψ qubits, and it scales as For early times where t < t o , this term is important only if x cl is large. This, for example, would be the case if the matrix entry labeled by x would evaporate away, x cl r h . If the initial perturbation for the stochastic variable x is such that x i ∼ r h , the subsequent stochastic evolution is in a flat potential given the form of (42). This evolution, described by (17) and (18) -or equivalently (99), results in x cl (t) growing to infinity 17 . We then conclude that the effective qubit dynamics that arises from a perturbation on the diagonal -that corresponds to x evaporating away -is described by with x cl ∼ r h initially and growing larger thereafter. This is the statement that the offdiagonal qubits δψ and δψ become heavier and heavier and condense as the bit evaporates away. For the off-diagonal coordinate δx in (105), the resulting action takes the form 17 To account for the flat direction in (42), we can for example take Ω x → 0, which gives from (99) In arriving at this expression, we have used a complexified version of the action (93) where χ is complex as is δx -since the integrated modes are most naturally represented by complex variables. We also have used the diagonal qubit operators ψ ± and ψ ± bh defined in (87). Once again, as described above, the second non-unitary piece is negligible for t t o . The first term in (115) term provides a coupling between qubits ψ, ψ bh , and δψ and it scales as As the bit x evaporates away, equations (17) and (18) -or equivalently (99) -tell us that the initial value of δx cl decays exponentially to zero on timescale given by t o , as the mode becomes heavy 18 . At short times t t o , we write In summary, the qubit action is given by S where we have switch from time t, x, and δx to scaled variables τ , ξ, and δξ (see equation (6)), and the effective coupling g is defined as which has units of length such that g τ is the effective dimensionless coupling. In total, the system describes 8 × 4 qubits: 8 × 2 off-diagonals ones denoted by δψ α and δψ α , and 8 × 2 on the diagonal denoted by ψ α and ψ bh α . 
The stochastic relaxation from a quench is given by the classical profiles ξ cl (τ ) = x cl (τ )/ s and δξ cl (τ ) = δx cl (τ )/ s that follow from (99). We now elaborate on the implications of the qubit evolution action (119), restricting our attention to early times t stoch < t t o -before the onset of dissipation and emergence of geometry. For the remaining discussion, we will use the coherent state representation of the qubits, which we first briefly review. For a qubit with states |0 and |1 , a representation over a coherent state |η looks like [64] where η is a Grassmanian. A general state |Φ is then a function over the Grassmanians η|Φ ≡ Φ(η). A Bell state is then represented as The expectation value of an operator gets a form of a function over Grassmanians The path integral measure is such that For a Hamiltonian of qubits referenced by the operators ψ ± , we would write ψ + → η and ψ − → η. For a simple bilinear and time-dependent structure with sources, we have The unitary evolution operator as a function over Grassmanians takes the form where the propagator is given by We can then use this approach to write the unitary evolution operator for the qubits given by (119). The Grassmanian variables will be labeled as δψ, δψ , ψ − , ψ − bh , and their complex conjugates -in correspondence with the respective operators. We then seek the evolution operator written as that acts on the qubit wavefunction Φ(ψ + (0), ψ + bh (0), δψ(0), δψ (0)). We have the evolution of a 8 × 4 qubit system, half on the matrix diagonal and the other half off-diagonal; all 32 qubit are part of a cluster. The time evolution is obviously sensitive to the details of the quench, given by x cl (t) and δx cl (t). The initial wavefunction Φ is another input to the problem. Cluster formation dynamics might naturally involve the delicate physics of D0 bound state formation -akin to Cooper pair formation in superconductivity. The dynamics of the marginal bound states in Matrix theory is a complicated strong coupling problem that remains an open issue, and we will not be able to tackle the full problem here. Instead, given the spirit of an effective approximate scaling analysis, we will next engage in a speculative analysis that is inspired by a recent toy model of black hole qubit evaporation due to Osuga and Page [65]. We will argue that the Matrix theory qubit evolution operator has the hallmarks of the toy model presented in [65], under a series of assumptions. In [65], a toy model was proposed whereby the black hole Hilbert space is augmented to a tensor product that involves the black hole qubit sector and two other sectors, one for in-falling and another for outgoing radiation modes just inside and just outside the event horizon. Each black hole qubit is paired with two qubits that are in the singlet Bell state. The latter is proposed to represent the vacuum for the radiation pair of modes that assures smooth spacetime near the horizon. As a black hole qubit evaporates away, [65] proposes a unitary evolution operator that essentially exchanges the black hole qubit with a qubit of outgoing radiation, leaving the black hole sector qubit entangled in a Bell state with the qubit of incoming radiation. 
The result of this is that one qubit of information leaves the black hole (into the outgoing radiation sector), and a vacuum Bell state of two qubits (black hole and incoming radiation sectors) is left behind that is now to be interpreted as part of a bit of new empty spacetime created just outside a black hole as the latter shrinks in size. The key assumptions in this model are: interactions in the black hole qubit sector are non-local at the Planck scale, and a Bell vacuum state for black hole and incoming radiation qubits is tantamount to shrinking the black hole or equivalently expanding the vacuum space outside of it. The motivation for this toy example is to present a proof of concept model of black hole evaporation consistent with black hole complementarity. In our setup, we have an explicit quantum theory of gravity that dictates the qubit evolution operator. The partons of the matrix black holes are clusters of diagonal and offdiagonal matrix qubits, about 8 × (d − 1) 2 qubits in d space dimensions. For d = 3, that's 32 qubits. We propose that each cluster of qubits, a 32-qubit system, carries 8 qubits worth of information only -corresponding to the 256 supergravity states that can encode information; the remaining 24 qubits are scaffolding that are in a highly entangled Bell-like vacuum state that is the result of cluster dynamics. These represent the halo at around the event horizon. Naturally, the information is on the diagonal qubits, say in ψ bh in the specific setup we have been considering. That means that δψ, δψ , and ψ start off in a maximally entangled vacuum Bell state of 24 qubits representing radiation or 'membrane goo' near the horizon. We then propose that the unitary evolution operator from (130) and (119) -given a perturbation of the stochastic variables x cl (t) and δx cl (t) that describes the evaporation of the x matrix entry -results in having the qubit of information ψ bh transfered to ψ which exits the Matrix black hole. The end result leaves behind a vacuum Bell state of qubits for δψ, δψ , and ψ bh that is to be interpreted as the production of a bit of new spacetime outside the black hole. As a result, the matrix black hole shrinks in size from N to N − 2. Looking at the form of (119), we see a structure that has the right general form to potentially generate such an evolution of qubits. The analogue of the exchange operator from [65] in our language takes the form exp [i α t (ψ + − ψ + bh )(ψ − − ψ − bh )]. Our effective Hamiltonian involves in addition the mediation of the light δψ modes in combinations of the form ∼ (ψ + −ψ + bh )δψ − and its complex conjugate. Bell states with 24 qubits are very difficult to study and even determine in their own right. Added to this complication is the fact that (119) is in general non-local due to the light off-diagonal modes. As a result, it is a very challenging task to determine the evolution of the qubits using the action (119). To see this, note that the non-local couplings in (128) have scale given by where we used (99). For χ → ξ cl 1, given that r bh s . For χ → δξ cl ∼ 1, given that the cluster length scale is s . In any scenario, the relevant dynamics is highly non-local. Noting some of the general similarities between the model of [65] and ours, we leave the analysis of the significantly more complex dynamics of our system for future work. Discussion and Outlook The analysis in this work is a first attempt to develop a quantum gravity-centric, bottom up picture of black hole event horizon physics. 
The results can be summarized through two main conclusions: 1. We have determined that near horizon dynamics is non-local in space and time at the Planck scale. The thermal degrees of freedom of the black hole are 'cells' of around d particles, for a black hole in d space dimensions; each cell spans a size of order the Planck scale. One can think of each cell carrying bits of information, encoded in the polarization states of the fermionic variables of Matrix theory -or equivalently the polarization states of the supergravity multiplet on the dual side. The dynamics of black hole degrees of freedom is non-local and chaotic for short Planckian timescales, in a regime where the Yang-Mills theory is hovering just below strong coupling. At longer timescales and larger distances, the dynamics is effectively local both in time and space, while being strongly coupled. This is when and where an effective geometrical picture is possible. 2. When describing evaporation, one is dealing with a chaotic system near the would-be event horizon with a characteristic timescale given by the Planck scale. To describe the evaporation via a top down approach, i.e. via Hawking's approach, one needs to average chaotic dynamics over super-Planckian timescales. Where a spacetime description is valid, one is necessarily left with a non-unitary effective picture for the evaporation arising from coarse graining over Planckian chaotic motion. The suggestion is that the resolution of the black hole information loss paradox cannot lie in any framework that relies on a well-defined smooth spacetime geometry at the event horizon. This is a plausibility argument: We demonstrated that, through a rather simple stochastic model with a single input scale, one can understand how Hawking evoporation is inherently non-unitary -naturally due to stochastic, chaotic UV physics. This simplest of settings necessitates however the breakdown of smooth geometry at the horizon. This obervation, together with other independent evidence towards a breakdown of geometry at the horizon, constitute strong evidence that one most likely needs to look for resolutions of the information paradox in models involving a new perspective on near horizon geometry. The geometrical description of black hole evaporation is inherently non-unitary as it arises from averaging over Planckian timescales that characterize the chaotic physics of the underlying degrees of freedom. A couple of footnotes are in order. First, we identify emergent geometry at the benchmark of strong effective Yang-Mills coupling g eff (τ ) 2 , as opposed to strong effective 't Hooft coupling g eff (τ ) 2 N , which is the natural coupling for large N . The subtlety here is that the coupling that governs the microscopic event horizon dynamics is one that arises from the interaction of 'order one' matrix entries on the diagonal. At most groups of order d 2 particles participate in the dynamics, hence the relevant effective coupling is not the N dependent 't Hooft coupling. In describing the gravitational interaction of the whole black hole with entropy S ∼ N , the relevant effective coupling is indeed the 't Hooft coupling; but microscopic event horizon dynamics does not involve the participation of all N degrees of freedom. The second footnote has to do with implicit connections to the issue of black hole complementarity [58]- [61] . 
In modeling the mean field potential for the degrees of freedom of the Matrix black hole, we note that there was no need to introduce a separate Planck scale near the horizon: the entire potential can be modeled using a single scale, the radius of the event horizon 19 . This is not surprising since we were modeling the physics in a manner to match against expectations on the dual supergravity side. We also noted that the qubit action we arrived at has some of the features of the qubit evolution toy model proposed in the work of Osuga and Page [65]. The latter consisted of a proof-of-concept system that circumvents the need of a firewall by positing non-local interactions at the horizon and an exchange mechanism of qubits within a direct product of three Hilbert spaces. All these ingredients of this toy model emerge naturally from our Matrix theory discussion. However, our action is more complicated than the one in [65], and we leave a detailed analysis of the dynamics for future work. Nevertheless, these similarities between the two systems, ours and that of [65], might be hints that a firewall is not needed at the event horizon after all, and black hole complementarity prevails. This is consistent with [14,15,54] given the non-local nature of the interactions near the event horizon in Matrix theory -at the level of D0 brane clusters. There is however a significant conceptual challenge to this argument. Black hole complementarity is a statement about the perspective of an in-falling observer. This means that one needs to understand how a change of perspective between the observer at infinity and the one in-falling past the horizon is realized in the language of Matrix theory. Presumably, this involves a Matrix transformation in U (N ) since one expects that local spacetime coordinate invariance is embedded in the gauge group of the theory. This in turn requires a more precise map between emergent geometry and metric, and matrix degrees of freedom. Without this critical missing ingredient, we cannot conclusively understand how the firewall paradox is addressed by our effective model. Related to this last point, we also note that our treatment explicitly chooses a frame for describing the black hole, presumably corresponding to the perspective of an outside observer. This creates a clear separation between the roles of diagonal and off-diagonal matrix entries. The residual gauge freedom is the group of permuting diagonal entries, a subgroup of U (N ). The more interesting transformations would mix diagonal and off-diagonal entries, and we believe these correspond in part to switching the perspective of the observer. Very little is known or understood about this part of the Matrix-supergravity duality, and it seems a full treatment of the quantum black hole would necessitate progress in this direction. This work is a step towards unravelling the microscopic details of black hole horizon physics within a theory of quantum gravity that is fully embedded in string/M-theory. The effective model approach opens up new directions for a range of possible investigations and extensions that can only add to our understanding of black holes and quantum gravity. We hope to report on some of these in future works. Acknowledgments This work was supported by NSF grant number PHY-0968726.
Application of near-infrared spectroscopy for the nondestructive analysis of wheat flour: A review

The quality and safety of wheat flour are of public concern since they are related to the quality of flour products and human health. Therefore, efficient and convenient analytical techniques are needed for the quality and safety controls of wheat flour. Near-infrared (NIR) spectroscopy has become an ideal technique for assessing the quality and safety of wheat flour, as it is a rapid, efficient and nondestructive method. The application of NIR spectroscopy in the quality and safety analysis of wheat flour is addressed in this review. First, we briefly summarize the basic knowledge of NIR spectroscopy and chemometrics. Then, recent advances in the application of NIR spectroscopy for chemical composition, technological parameters, and safety analysis are presented. Finally, the potential of NIR spectroscopy is discussed. Combined with chemometric methods, NIR spectroscopy has been used to detect the chemical composition, technological parameters, deoxynivalenol, adulterants and additives of wheat flour. Furthermore, NIR spectroscopy has shown great potential for the rapid and online analysis of the quality and safety of wheat flour. It is anticipated that the current review will serve as a reference for the future analysis of wheat flour by NIR spectroscopy to ensure the quality and safety of flour products.

Introduction

Wheat flour is a powdered product made from wheat kernels and is mainly used for manufacturing various bakery and pasta products, such as breads, cakes, biscuits and noodles. As one of the important consumable raw materials in our daily lives, wheat flour provides numerous nutrients, such as carbohydrates, protein, and minerals. However, the quality and safety of wheat flour products are sometimes compromised by inferior quality parameters and adulteration, which cannot be easily detected and pose risks to human health. Consequently, there is an urgent need to develop rapid, labor-saving and efficient analytical methods for the quality and safety monitoring of wheat flour. The main quality parameters of wheat flour include its chemical composition, which is related to the moisture, protein, ash, and wet gluten contents of the flour, and technological parameters, such as the sedimentation value, falling number and rheological properties of wheat flour dough. Conventional methods available for the quality and safety assessment of wheat flour are listed in Table 1. Although these methods have good precision, most of them are laborious and time-consuming. Therefore, the quality and safety of wheat flour cannot be monitored quickly and efficiently with them. Recently, near-infrared (NIR) spectroscopy, as a reliable tool in agricultural and food industry analysis, has been widely used in the daily inspection of wheat flour (Delwiche, 1998; Porep et al., 2015). NIR spectroscopy, which has the advantages of fast and easy operation, high efficiency and nondestructive measurement, can be used for qualitative and quantitative analysis of basic components in samples and for the detection of adulterated samples. The applications of NIR spectroscopy in the quality and safety evaluation of tea products (Lin and Sun, 2020), grain (Caporaso et al., 2018), fruits and vegetables (Nicolai et al., 2007), and oilseeds and edible oils have been reviewed.
Although reviews on wheat-based products (Badaró et al., 2022) and wheat flour (Du et al., 2022) were recently reported, the safety analysis of wheat flour was not fully covered in them. Therefore, this review first summarizes the basic knowledge of NIR spectroscopy. Then, recent advances in the quality and safety evaluation of wheat flour by NIR spectroscopy, including the analysis of basic nutritional components, technological parameters and safety, are reviewed. In addition, future trends and challenges of NIR spectroscopy are presented.

Principle of NIR spectroscopy and chemometrics

As defined by the American Society for Testing and Materials (ASTM), near-infrared light is the part of the electromagnetic spectrum in the range of 780-2500 nm, which lies between the visible and mid-infrared regions (Pasquini, 2018). NIR spectroscopy is a technique that applies the NIR portion of the electromagnetic spectrum and can provide complex structural information related to the vibration behavior of chemical bonds (Kamal and Karoui, 2015). NIR spectra present the overtones and combination bands of hydrogen-containing C-H, O-H and N-H groups, which are the primary structural components of organic compounds, such as water, lipids and proteins (Futami et al., 2016; Wang et al., 2019). In other words, NIR spectroscopy is a technique in which the instrument emits wavelengths across the entire near-infrared spectrum that penetrate the sample. Some wavelengths are absorbed through the activation of specific chemical bonds within the sample, while the remaining wavelengths are transmitted or reflected back to the instrument, forming the resulting spectrum (Johnson, 2020). Combined with chemometrics, the spectral data collected by the NIR spectrometer are further utilized for qualitative and/or quantitative analysis of products. The whole measurement process of NIR spectroscopy generally includes the following steps: (i) spectral data acquisition; (ii) data preprocessing; (iii) use of a set of samples with known analytical concentrations to establish a calibration model; (iv) validation of the model; and (v) prediction or characterization of the unknown samples (Cen and He, 2007). The NIR spectrometer is the hardware for near-infrared analysis and is mainly composed of a light source, a beam-splitting system, a sample cell, an optical detector, and a data processing and analysis system (Cen and He, 2007). Based on the spectroscopic system, NIR spectrometers can be divided into four types: filter type, dispersion type, interference type and acousto-optical tunable filter type (Pasquini, 2018). In terms of applications, NIR spectrometers can be divided into laboratory spectrometers, portable spectrometers and online spectrometers. In the past ten years, different types of NIR spectrometers have developed rapidly, such as visible/shortwave near-infrared spectrometers (Vis/SW-NIR) (Barragan et al., 2021), miniaturized and handheld near-infrared spectrometers (Mcgrath et al., 2020), and near-infrared hyperspectral imaging (NIR-HSI), which integrates sample spectra and images (Khamsopha et al., 2021). Compared with traditional methods for the quality and safety analysis of wheat flour, the main technical features of NIR spectroscopy include fast analysis speed, convenient operation, simultaneous determination of multiple attributes and nondestructive sampling. In analytical processes, the combination of NIR and chemometrics is essential (Qu et al., 2015).
Chemometrics is the multivariate data analysis discipline that uses mathematical and statistical methods to systematically study the information content of chemical measurements (Yin et al., 2021). The application of chemometrics in NIR spectral analysis includes three aspects: (i) spectral data pretreatment; (ii) establishment of a calibration model for quantitative and qualitative analysis; and (iii) model transfer (Cortés et al., 2019). The pretreatment of the original spectral data can be used to remove interfering information and improve the modeling effect. The main pretreatment techniques are smoothing, normalization, wavelet transform, multiplicative scatter correction, orthogonal signal correction, standard normal variate (SNV), first or second derivatives, direct orthogonal signal correction and straight-line subtraction. The chemometric algorithms used for modeling include principal component analysis (PCA), artificial neural networks (ANNs), partial least squares (PLS), partial least squares discriminant analysis, linear discriminant analysis (LDA), multiple linear regression, support vector machines (SVMs), radial basis functions (RBFs), back propagation (BP), random forests (RFs), extreme learning machines (ELMs), soft independent modeling of class analogy, and cluster analysis (CA) (Dankowska and Kowalewski, 2019; Granato et al., 2018; Shahbazi and Esfahanian, 2019; Zhang et al., 2020). To obtain a stable and robust model, evaluation of the final model is critical. The correlation coefficient (R), the coefficient of determination (R²) and the correlation coefficient for prediction (Rp) are often used to evaluate the performance of built models. The best models typically have the highest R and R² (or sometimes Rp) while having a lower root mean square error of prediction (RMSEP) and root mean square error of cross-validation (RMSECV) (Minas et al., 2021). Additionally, the residual predictive deviation (RPD) is used to evaluate the stability of the model, and a higher RPD indicates better predictive performance (Kutsanedzie et al., 2018).

Chemical composition analysis of wheat flour

The main components of wheat flour analyzed include moisture, protein, ash, and some functional substances, which are closely related to its nutritional quality and processing properties. The hydrogen-containing groups (O-H, N-H, C-H, and S-H bonds) in each component of wheat flour have characteristic absorption peaks in the near-infrared spectral region, which is the foundation for the detection of the chemical constituents of wheat flour by NIR spectroscopy (Wadood et al., 2019). NIR spectroscopy combined with chemometrics has been successfully applied to analyze the chemical composition of wheat flour (Wadood et al., 2019).

Moisture

Moisture is an important quality parameter for wheat flour storage and processing, and the moisture content is typically less than 14.5% (Khalid et al., 2017). PLS regression (PLSR) is the most commonly used regression algorithm for predicting moisture content. A moisture content PLSR model was established based on 120 wheat flour samples using NIR spectroscopy, and the R, R², RMSEP, RMSECV and RPD were 0.92, 0.85, 0.27, 0.47, and 2.43, respectively (Kahrıman and Egesel, 2011). The developed calibration models were successfully used to estimate the moisture content of wheat flour.
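To make the calibration workflow above concrete, the following minimal Python sketch (illustrative only, not code from any cited study) walks through SNV pretreatment, PLS calibration, and evaluation with R², RMSEP and RPD; the synthetic arrays, split ratio and number of latent variables are assumptions chosen purely for demonstration.

```python
# Minimal sketch of an NIR calibration workflow: SNV pretreatment,
# PLS regression, and evaluation via R2, RMSEP and RPD.
# All data here are synthetic stand-ins for real spectra/reference values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

def snv(spectra: np.ndarray) -> np.ndarray:
    """Standard normal variate: center and scale each spectrum (row-wise)."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 700))        # 120 samples x 700 wavelength points
y = rng.uniform(10.0, 14.5, size=120)  # reference values (e.g., moisture %)

X_train, X_test, y_train, y_test = train_test_split(
    snv(X), y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)    # number of latent variables is a tuning choice
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

r2 = r2_score(y_test, y_pred)
rmsep = float(np.sqrt(mean_squared_error(y_test, y_pred)))
rpd = float(y_test.std(ddof=1)) / rmsep  # higher RPD indicates a more stable model
print(f"R2 = {r2:.3f}, RMSEP = {rmsep:.3f}, RPD = {rpd:.2f}")
```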
Dong and Sun (2013) selected characteristic bands of 4000-4896 cm⁻¹ and 5504-6704 cm⁻¹ related to moisture by interval partial least squares (iPLS), and applied PLSR to establish the model, which showed an Rp and RMSEP of 0.99 and 0.088, respectively. Handheld NIR spectrometers are also suitable for the fast and quantitative determination of moisture in wheat flour. The moisture content in wheat flour samples was evaluated quickly and quantitatively using a linear variable filter-based Viavi MicroNIR 1700 spectrometer by X. Chen et al. (2021). The established PLS calibration model yielded an R² of 0.8367, and the RMSEP and RMSECV were 0.32 and 0.35, respectively. In a study by Sun et al. (2018), the NIR spectral data of wheat flour were collected and analyzed using a MicroNIR-2200 NIR spectrometer, and then a PLS model was utilized to detect the moisture content in wheat flour. The model performed well, and the R², RMSEP and RMSECV were 0.929, 0.154, and 0.125, respectively. In addition, Mutlu et al. (2011) analyzed the moisture content of wheat flour using an ANN, and the resulting R² of the moisture content was 0.920.

Protein

The quality and quantity (normal variation range 8-16%) of wheat flour protein affect wheat flour dough properties and the quality of the final products (Gabriel et al., 2017; Korkmaz et al., 2021). Kahrıman and Egesel (2011) utilized the first derivative and SNV to preprocess the NIR spectrum and then established a PLS model to determine the protein content in wheat flour with an R² of 0.81 and an RMSEP of 0.58. Jin et al. (2011) improved the predictive performance of the protein content model by applying the second derivative to preprocess the spectrum, and an R² of 0.937 and an SEP of 0.492 were obtained. A PLS model of the protein content was established by X. Chen et al. (2021) using a handheld NIR spectrometer based on the Fourier transform technique, and the results showed excellent performance, with an R² of 0.9624 and an RMSEP of 0.22. Generally, support vector regression (SVR) is superior to the PLS and ANN methods in modeling NIR data, and the synergy interval approach has a strong ability to select appropriate variables during model building (Lin et al., 2014). In the study by Chen et al. (2017a,b), the effects of spectral pretreatment and synergy intervals on the model performance were researched, and the optimal protein model was established. The results showed that the R² and RMSEP were 0.906 and 0.425, respectively. Meanwhile, the results revealed that models based on the original spectra are generally unacceptable, but they can be substantially improved by applying a proper spectral pretreatment approach.

Ash

Ash contains mineral elements such as calcium, magnesium, phosphorus, and potassium. The ash content indicates the milling degree of wheat flour and serves as an important indicator of the wheat flour's quality and usage (Czaja et al., 2020). In recent years, many studies have confirmed the feasibility of the quantitative determination of wheat flour ash content by NIR spectroscopy (Gao et al., 2021). Dong and Sun (2013) built an ash model for NIR spectroscopy and used interval PLS as the characteristic band selection method, with bands ranging from 4000 to 5500 cm⁻¹ and 6708 to 7304 cm⁻¹. The predictive ability of the ash content models was improved, with an Rp of 0.911 and an RMSEP of 0.019 using the characteristic bands.
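Derivative and SNV pretreatments recur throughout these calibrations. As a small aside, the following Python sketch shows how first- and second-derivative pretreatment is typically computed via Savitzky-Golay filtering; the window length and polynomial order are illustrative assumptions that would normally be optimized for the data at hand.

```python
# Savitzky-Golay derivative pretreatment of NIR spectra (illustrative).
import numpy as np
from scipy.signal import savgol_filter

def sg_derivative(spectra: np.ndarray, deriv: int,
                  window: int = 15, polyorder: int = 2) -> np.ndarray:
    """Apply a smoothed Savitzky-Golay derivative along the wavelength axis."""
    return savgol_filter(spectra, window_length=window,
                         polyorder=polyorder, deriv=deriv, axis=1)

spectra = np.random.default_rng(1).normal(size=(10, 700))
d1 = sg_derivative(spectra, deriv=1)  # first derivative removes constant baseline offsets
d2 = sg_derivative(spectra, deriv=2)  # second derivative also removes linear baseline drift
```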
A PLS calibration model of ash content was established after pretreatment by the first derivative and SNV within the wavelength range of 908-1676 nm; R² and RMSEP values of 0.9431 and 0.06, respectively, were achieved (X. Chen et al., 2021).

Wet gluten

Wet gluten is a viscoelastic, soft gelatinous substance that remains after the starch, water-soluble carbohydrates, fats and other ingredients in the dough formed by mixing wheat flour with water are washed out with water. It is mainly composed of gliadin and glutenin, and its content affects the quality and technological properties of wheat flour baked products (Chandi and Seetharaman, 2012). A PLS model was built by applying the spectral information in the 1200-2400 nm range, and the prediction of the gluten content of wheat flour was good, with an R² of 0.88 (Kahrıman and Egesel, 2011). Baslar and Ertugay (2011) used NIR spectroscopy to establish wet gluten content calibration models for 120 kinds of bread wheat flour from different regions and obtained good results, with R and SEP values of 0.976 and 1.36, respectively. Albanell et al. (2012) established a wet gluten content prediction model using modified PLSR correction, and the best model for wheat flour was obtained, with an R² of 0.985. In the study conducted by Chen et al. (2017a,b), three spectral intervals of 10719.02-9839.59 cm⁻¹, 5396.15-4516.72 cm⁻¹ and 4509.01-3629.58 cm⁻¹ were selected. Standard normal variate, first derivative and support vector regression were subsequently used to establish the wet gluten content siSVR model. As a result, the R², RMSEP and standard deviation ratio values of the optimal model for wet gluten content were 0.850, 1.024 and 2.482, respectively. Furthermore, the feasibility of rapid quantitative analysis of wet gluten in wheat flour samples with handheld NIR spectrometers based on a linear variable filter was investigated by X. Chen et al. (2021), and the model achieved an R²p and an RMSEP of 0.8585 and 0.66, respectively.

Other chemical components

In addition to the main chemical composition attributes, the determination of the total phenolic, mineral element, and fiber contents as well as free fatty acids of wheat flour has also been studied by NIR spectroscopy. Free fatty acid content is one of the important indices used to evaluate wheat flour quality during storage. A portable NIR spectroscopy system was developed for the quantitative detection of the free fatty acid content in wheat flour during storage by Jiang et al. (2020). Standard normal variate (SNV) and variable combination population analysis (VCPA) methods were used to pretreat the spectral data, and an extreme learning machine (ELM) was employed to construct quantitative detection models based on different characteristic wavelength variables to achieve quantitative detection of free fatty acids. The ELM models showed good prediction accuracy and stability when predicting independent samples in the validation set, and the mean R²p of the ELM models was above 0.96. Moreover, a PLS model of the oleic acid content in wheat flour during long-term storage was established by Lancelot et al. (2021). Phenolic compounds contribute greatly to the health benefits of whole wheat products. Tian et al. (2021) presented a novel application of NIR spectroscopy for total phenolic content prediction in whole wheat flour. The optimal regression model demonstrated R² values of 0.92 and 0.90 for the calibration and validation sets, respectively, and an RPD value of 3.4.
Fiber in wheat flour can increase its nutritional value but can also affect its functional properties. Therefore, it is important to detect the fiber content and distribution in wheat flour. The amount of fiber added to semolina and its distribution were investigated via NIR spectroscopy and hyperspectral imaging (NIR-HSI) by Badaro et al. (2019), and the results of the PLSR models showed that the R²p was between 0.85 and 0.98, and the RMSEP was between 0.5 and 1%. In addition, NIR spectroscopy coupled with PLS was successfully applied to establish a model for the rapid prediction of the mineral element (calcium, phosphorus and potassium) contents in wheat flour samples. The R² values of the best models for calcium, phosphorus and potassium were 0.7907, 0.9777, and 0.9777, respectively; the RMSEP values were 5.35, 15.3, and 18.9, respectively; and the RPD values were 2.19, 6.71, and 6.84, respectively (Gao et al., 2021). Although the R² of the calcium model is lower, NIR can still predict the calcium content of wheat flour, and NIR also has excellent predictive performance for the phosphorus and potassium contents in wheat flour.

Technological parameters analysis of wheat flour

It is known that NIR spectroscopy can be used to predict wheat flour technological parameters, since changes in the technological parameters of wheat flour are related to the chemical variation of its components (Kaddour and Cuq, 2011; Lancelot et al., 2021). The Zeleny sedimentation value can reflect the quality and quantity of gluten protein in wheat flour and predict the rheological properties of dough. NIR spectroscopy was used to develop calibration models for the Zeleny sedimentation test of bread wheat flours collected from different regions of Turkey by Baslar and Ertugay (2011). Reasonable model results were obtained for the Zeleny sedimentation test, with an R of 0.924 and an SEP of 3.74. Mutlu et al. (2011) predicted the Zeleny sedimentation value and the water absorption of wheat flour by using NIR spectroscopy combined with an ANN. The prediction accuracy was good, with R² values of 0.917 and 0.832, respectively. In the study conducted by Chen et al. (2017a,b), near-infrared PLS models based on different spectral pretreatment methods were adopted to predict water absorption, dough development time, and dough stability, but the performance was unsatisfactory, with R and RMSEP values of 0.70 and 1.560 for water absorption, 0.73 and 1.065 for dough development time, and 0.79 and 1.090 for dough stability, respectively. X. Chen et al. (2021) used a handheld NIR spectrometer to investigate the sedimentation value of wheat flour, and the R and RMSEP of the calibration model were 0.8185 and 2.12, respectively. In the study conducted by Lancelot et al. (2021), good predictions were observed for the Hagberg falling number, swelling index and solvent retention capacity (SRC), which reflect α-amylase activity, the amylose-amylopectin ratio, and gluten viscoelasticity, respectively, but unsatisfactory results were obtained for the farinograph parameters. Additional applications of NIR in detecting the technological parameters of wheat flour are summarized in Table 2.

Safety analysis of wheat flour

Wheat flour safety issues typically involve chemical additives and undesired contaminants, as well as the adulteration of wheat flour. Generally, the addition of inexpensive substitutes, such as rice, corn or potatoes, can reduce the processing performance and commercial value of wheat flour (Che et al., 2017).
Moreover, unapproved chemical additives or undesirable compounds in wheat flour may mask its quality and even affect consumer health (Annalisa et al., 2020). Wheat flour may also be contaminated by other flours and mycotoxins during storage and processing. Currently, the analysis of the safety of wheat flour is a primary concern in the food and agricultural product markets. General chemical methods cannot effectively identify the adulteration of wheat flour due to the similar taste, appearance, and physicochemical properties of other flours. As an excellent measurement tool, NIR spectroscopy has been widely used for the detection of safety issues in a variety of complex foods (Pereira et al., 2020; Yuan et al., 2020). In terms of wheat flour, NIR spectroscopy has been widely applied to the detection of chemical additives, biological contaminants and the adulterant use of other flours in wheat flour. In addition, the contamination of wheat flour with allergens is an important safety issue. For example, Zhao et al. (2018a,b) used the NIR-HSI technique to predict the contamination concentrations of peanut and walnut flour in whole wheat flour. The optimal general multispectral model had promising results, with R²p and RMSEP values of 0.987 and 0.373%, respectively. However, there are few studies in this area at present, and this research direction is worthy of attention.

Chemical additives

Chemical additives are used to standardize the quality and processing performance of wheat flour; those used in wheat flour include benzoyl peroxide (BPO), talcum powder, azodicarbonamide, ascorbic acid, emulsifiers, enzymes, etc. (Luis et al., 2017; Hu et al., 2018; C. Zhao et al., 2020). Some of these additives should be used within the established limits, while some are prohibited in wheat flour because of the potential for serious adverse effects (Fu et al., 2020; Matina et al., 2011). Meanwhile, some additives or undesirable chemicals may be excessively added to the flour for profit. Therefore, the safety and quality of wheat flour are challenged by chemical additives. The BPO content in pure wheat flour was determined based on NIR diffuse reflectance spectroscopy by Zhang et al. (2011), and the R²cal, RMSEC, R²pred, and RMSEP of the PLS model were 0.8901, 40.85 mg/kg, 0.8865, and 44.69 mg/kg, respectively. Another study on detecting the concentration of benzoyl peroxide in wheat flour was conducted by Sun et al. (2016), who designed prediction models based on NIR reflectance spectroscopy integrated with PLS, BP neural networks, and RBF neural networks separately. The results showed that the RBF neural network model had the best predictive accuracy and feasibility, with R, RMSEP, and RPD values of 0.9937, 15.5095, and 8.8216, respectively. At the same time, Deng et al. (2019) extracted the optimal wavelengths and then established a competitive adaptive reweighted sampling (CARS) model for the talc content in wheat flour by NIR. The validation set R²p was 0.998, the RMSEP was 0.282%, and the detection limit of the model reached 0.5%. Furthermore, the results of the study by Fu et al. (2020) demonstrated that talcum powder and BPO could be effectively discriminated in wheat flour. In addition, a feasibility study was conducted by Wang et al. (2013) to rapidly test lime and calcium carbonate concentrations in wheat flour samples using NIR with the PLS algorithm.
The results indicated that the R² values for lime and calcium carbonate using the PLS algorithm were 99.80% and 96.98%, the RMSEC values were 0.19 and 0.34, the RMSECV values were 0.26 and 0.75, the RMSEP values were 0.63 and 0.44, and the RPD values were 8.57 and 5.24, respectively. In terms of azodicarbonamide (ADA) detection, Gao et al. (2016) utilized NIR spectroscopy in combination with RBF networks to quantitatively detect the content of ADA in wheat flour. The established model presented good prediction indicators, with R, RMSEP, and RPD values of 0.97828, 18.2887 mg/kg, and 4.7621, respectively. The limits of quantitation and detection of the model were 72 and 15 mg/kg, respectively. Recently, the ADA content in wheat flour was determined using NIR hyperspectral imaging technology by Wang et al. (2018). From this study, it was found that the two wavelength bands with the largest differences between wheat flour and ADA were 1892 nm and 2039 nm, and the minimum detectable concentration of the optimal model was 0.2 g/kg. Additional applications of NIR spectroscopy in the detection of chemical additives in wheat flour are listed in Table 3.

Biological contamination

Biological contamination of wheat flour mainly includes deoxynivalenol (DON) and insects. DON, also known as vomitoxin, is a major mycotoxin detected in wheat (Shen et al., 2022). DON contamination not only reduces wheat yield but also causes vomiting, anorexia, teratogenicity, mutagenicity and carcinogenicity (Lippolis et al., 2014; Pestka, 2010). DON does not degrade easily, which threatens wheat flour and the entire product chain. Although the contamination level of DON in wheat flour is relatively low, DON has distinct absorption in the near-infrared region (Peiris et al., 2009), and a number of studies have shown that it is feasible to measure DON in wheat samples with NIR spectroscopy (De Girolamo et al., 2009). In the study by Liang et al. (2020), the DON content of wheat flour samples was determined by SW-NIR reflectance spectroscopy, and the sparse autoencoder model yielded the highest prediction accuracy. The classification of DON-contaminated samples was also investigated by Tyska et al. (2021); the classification models of both partial least squares discriminant analysis and principal component analysis-linear discriminant analysis achieved accuracy rates of over 80%. Generally, whole wheat flour is made from intact wheat kernels, including the epidermis, and may therefore be more susceptible to DON contamination. A scheme for the detection of DON contamination in whole wheat flour by Vis-NIR hyperspectral imaging has also been developed, which can quickly analyze and identify whole wheat flour samples contaminated by DON. Wheat flour may also be contaminated by insects. Although NIR spectroscopy can quantitatively predict insect fragments in wheat flour to a certain extent, it cannot achieve high accuracy, and the sensitivity of NIR analysis needs to be further improved (Perez-Mendoza et al., 2003; Toews et al., 2007).

Flour adulteration in wheat flours

Wheat flours are divided into different varieties, and they have different uses and qualities. NIR spectroscopy combined with chemometric methods has been utilized to distinguish between common wheat flour and durum wheat flour (Unuvar et al., 2021). For example, einkorn is an old variety of wheat and is sold at higher prices than common wheat. Either to compensate for its weaker gluten structure or for unfair economic profit, einkorn flour tends to be adulterated with bread wheat flour (Hidalgo et al., 2016). Ayvaz et al.
(2021) assessed NIR spectroscopy for monitoring bread wheat flour adulteration in einkorn flour and developed PLSR calibration models for the flour mixtures. Highly accurate models yielded high Rp and RPD values of 0.99 and 19.3, respectively, and low SECV and SEP values of 1.12 and 1.39%, respectively. Furthermore, NIR spectroscopy was adopted to detect the adulteration of spelt flour with inexpensive bread wheat flour, and the resulting PLSR model achieved an R² of 0.966 and an RMSEC of 5.2% (Ziegler et al., 2016). Wheat flour is also susceptible to being adulterated or contaminated with inferior grains. For example, wheat flour may be mixed with some inexpensive grain flours, such as sorghum, corn and rice, which is very challenging for consumers to authenticate, especially when the flours have a similar color. Verdú et al. (2016) developed a method for the detection of adulteration in wheat flour based on SW-NIR assisted by hyperspectral imaging technology. Taro flour in wheat flour was identified by the combination of near-infrared spectroscopy and multivariate analysis; PCA was performed on the data, and the correct classification rate of the cross-validation model was 90.48% (Rachmawati et al., 2017). In another study by Su and Sun (2017), a predictive model using a spectral range of 900-1700 nm was established. The optimal model had the potential to authenticate the admixtures (common wheat flour, cassava flour and corn flour) in organic avatar wheat flour in the range of 3-75% (w/w). Additional applications of NIR spectroscopy for the detection of adulterated wheat flour are summarized in Table 4.

Conclusions and future perspectives

Wheat flour is an important ingredient in food products, and ensuring its quality and safety is of great significance. Due to the advantages of being rapid, efficient and nondestructive, NIR spectroscopy has been shown to be an excellent technique for the quality and safety analysis of wheat flour. This review mainly reported recent advances in the NIR-based nondestructive quality and safety analysis of wheat flour, including chemical composition, technological parameters, chemical additives, undesired contaminants and adulteration detection. In general, NIR spectroscopy is a powerful tool for process analytical technology to assure the quality and safety of raw materials and final products. Therefore, the online analysis of wheat flour is a future direction worth investigating. However, the application of NIR spectroscopy still faces some challenges due to the diversity of wheat flour samples and the complexity of NIR spectral data. Firstly, near-infrared models need to be updated according to the variability and differences of samples, and more suitable chemometric methods should be developed to maintain their predictive performance and improve the generalizability of the models. Secondly, low-cost and convenient NIR spectrometers are needed to promote the popularity of the NIR spectroscopy technique in the nondestructive analysis of wheat flour. Thirdly, the combination of NIR spectroscopy with other spectroscopic techniques (e.g., Raman and UV spectroscopy) will broaden its application to wheat flour.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
ACD15, ACD21, and SLN regulate the accumulation and mobility of MBD6 to silence genes and transposable elements

DNA methylation mediates silencing of transposable elements and genes in part via recruitment of the Arabidopsis MBD5/6 complex, which contains the methyl-CpG binding domain (MBD) proteins MBD5 and MBD6, and the J-domain containing protein SILENZIO (SLN). Here, we characterize two additional complex members: the α-crystalline domain (ACD) containing proteins ACD15 and ACD21. We show that they are necessary for gene silencing, bridge SLN to the complex, and promote higher-order multimerization of MBD5/6 complexes within heterochromatin. These complexes are also highly dynamic, with the mobility of MBD5/6 complexes regulated by the activity of SLN. Using a dCas9 system, we demonstrate that tethering the ACDs to an ectopic site outside of heterochromatin can drive a massive accumulation of MBD5/6 complexes into large nuclear bodies. These results demonstrate that ACD15 and ACD21 are critical components of the gene-silencing MBD5/6 complex and act to drive the formation of higher-order, dynamic assemblies at CG methylation (meCG) sites.

The PDF file includes: Figs. S1 to S6; legend for Table S1; Tables S2 and S3. Other supplementary material for this manuscript includes the following: Table S1.

Fig. S1. ChIP-seq and RNA-seq analysis of ACD15 and ACD21. (A) Venn diagram of ChIP-seq peaks showing large overlap between samples. The peak sets indicated with circles (putative MBD5/6 unique, ACD15 unique, or MBD5/6/ACD15 unique peaks) were selected and visualized with heatmaps (right). We noted that in each peak set group, enrichment of most proteins was observed, thus suggesting that these regions are bound by all components of the MBD5/6 complex, despite not reaching our stringent significance threshold to be called as peaks. The heatmap shows log2(fold-change) over the no-FLAG control. (B) Scheme of the ACD15 and ACD21 genes showing the location of the guide RNAs used for CRISPR/Cas9-mediated mutant generation. The table below shows the mutations obtained in each line. (C) Bar plots showing the number of differentially expressed TEs (DE-TEs) or differentially expressed genes (DEGs) in the indicated genotypes. (D) Upset plots showing the intersection of the upregulated genes or TEs found for each genotype. The largest intersection group constitutes loci upregulated in all six mutant lines.

Fig. S2. Organization of the MBD5/6 complex structure. (A-I) Correlation between MBD6-RFP signal and either ACD15-YFP, ACD21-CFP, or SLN-CFP signal in the indicated mutant backgrounds (underlined). Images represent individual z-stack slices of roots from plants co-expressing MBD6 with either ACD15, ACD21, or SLN. Scatter plots indicate signal intensity for each fluorescent protein at each pixel of the image shown. Correlation coefficient: Pearson. Scale bars = 20 µm. (J-K) AlphaFold-Multimer predicted structure of the MBD5/6 complex with two copies each of MBD6, ACD15, ACD21, and SLN, along with a confidence score map of the predicted complex. (L) Cartoon representation of the core dimeric MBD5/6 complex based on the AlphaFold-Multimer prediction. The figure was created with Biorender.com.

Fig. S3. ACD15, ACD21, and SLN protein levels across plant lines. (A) ACD15-YFP nuclear intensity in wild-type vs sln mutant plants. Comparisons were made using two-tailed t tests (NS: P >= 0.05; N = 52 and 50, respectively). (B) Western blot analysis of ACD15-YFP protein levels along with a ponceau stain as a loading control. (C) ACD21-CFP nuclear intensity in wild-type vs sln mutant plants. Comparisons were made using two-tailed t tests (NS: P >= 0.05; N = 50 per genotype). (D-E) Western blot analysis of ACD21-CFP protein levels along with ponceau stains as loading controls. (F) Western blot analysis of SLN-CFP protein levels along with ponceau staining as a loading control.

Table S1. (Separate file.) Table showing merged IP-MS experiments. The numbers reported correspond to MS/MS counts. Col0 samples are no-FLAG negative controls. Each independent experiment is annotated with a shaded color.

Table S2. Gene identifiers for the MBD5/6 complex. All members of the MBD5/6 complex are listed along with the gene identifiers associated with each gene in the online A. thaliana database (https://www.arabidopsis.org/).

Table S3. qPCR primer sequences. List of qPCR primers, forward (Fw) and reverse (Rev), along with DNA sequences (5'-3').
Kernel Bounds for Structural Parameterizations of Pathwidth

Assuming the AND-distillation conjecture, the Pathwidth problem of determining whether a given graph G has pathwidth at most k admits no polynomial kernelization with respect to k. The present work studies the existence of polynomial kernels for Pathwidth with respect to other, structural, parameters. Our main result is that, unless NP is in coNP/poly, Pathwidth admits no polynomial kernelization even when parameterized by the vertex deletion distance to a clique, by giving a cross-composition from Cutwidth. The cross-composition works also for Treewidth, improving over previous lower bounds by the present authors. For Pathwidth, our result rules out polynomial kernels with respect to the distance to various classes of polynomial-time solvable inputs, like interval or cluster graphs. This leads to the question whether there are nontrivial structural parameters for which Pathwidth does admit a polynomial kernelization. To answer this, we give a collection of graph reduction rules that are safe for Pathwidth. We analyze the success of these rules and obtain polynomial kernelizations with respect to the following parameters: the size of a vertex cover of the graph, the vertex deletion distance to a graph where each connected component is a star, and the vertex deletion distance to a graph where each connected component has at most c vertices.

Introduction

The notion of kernelization provides a systematic way to mathematically analyze what can be achieved by (polynomial-time) preprocessing of combinatorial problems [12]. This paper discusses kernelization for the problem to determine the pathwidth of a graph. The notion of pathwidth was introduced by Robertson and Seymour in their fundamental work on graph minors [16], and is strongly related to the notion of treewidth. There are several notions that are equivalent to pathwidth, including interval thickness, vertex separation number, and node search number (see [3] for an overview). The problem to determine the pathwidth of a graph is well studied, also under the different names of the problem. It is well known that the decision problem corresponding to pathwidth is NP-complete, even on restricted graph classes such as bipartite graphs and chordal graphs [1,13]. A commonly employed practical technique is therefore to preprocess the input before trying to compute the pathwidth, by employing a set of (reversible) data reduction rules. Similar preprocessing techniques for the Treewidth problem have been studied in detail [7,17], and their practical use has been verified in experiments [8]. Using the concept of kernelization we may analyze the quality of such preprocessing procedures within the framework of parameterized complexity. A parameterized problem is a language Q ⊆ Σ* × N, and such a problem is (strongly uniform) fixed-parameter tractable (FPT) if there is an algorithm that decides membership of an instance (x, k) in time f(k)·|x|^O(1) for some computable function f. A kernelization (or kernel) for Q is a polynomial-time algorithm that transforms each input (x, k) into an equivalent instance (x', k') such that |x'|, k' ≤ g(k) for some computable function g, which is the size of the kernel. Kernels of polynomial size are of particular interest due to their practical applications. To analyze the quality of preprocessing rules for Pathwidth we therefore study whether they yield polynomial kernels for suitable parameterizations of the Pathwidth problem.
As the pathwidth of a graph equals the maximum of the pathwidth of its connected components, the Pathwidth problem with standard parameterization is AND-compositional and thus has no polynomial kernel unless the AND-distillation conjecture fails [4]. We thus do not expect to have kernels for Pathwidth of size polynomial in the target value for pathwidth k, and we consider whether polynomial kernels can be obtained with respect to other parameterizations. As Pathwidth is known to be polynomial-time solvable on restricted graph classes such as interval graphs [3], trees [11] and cographs [9], it seems reasonable to think that determining the pathwidth of a graph G which is "almost" an interval graph should also be polynomial-time solvable. Formalizing the notion of "almost" as the number of vertices that have to be deleted to obtain a graph in the restricted class F, we can study the extent to which data reduction is possible for graphs which are close to polynomial-time solvable instances through the following problem:

Pathwidth parameterized by a modulator to F
Instance: A graph G = (V, E), a positive integer k, and a set S ⊆ V such that G − S ∈ F.
Parameter: ℓ := |S|.
Question: pw(G) ≤ k?

The set S is a modulator to the class F. Observe that pathwidth should be polynomial-time solvable on F in order for this parameterized problem to be FPT. Our main result is a kernel lower bound for such a parameterization of Pathwidth. We prove that despite the fact that the pathwidth of an interval graph is simply the size of its largest clique minus one, which is very easy to find on interval graphs, the Pathwidth problem parameterized by a modulator to an interval graph does not admit a polynomial kernel unless NP ⊆ coNP/poly. In fact, we prove the stronger statement that, under the same condition, Pathwidth parameterized by a modulator to a single clique (i.e., by distance to F consisting of all complete graphs) does not admit a polynomial kernel (Section 5). As the graph resulting from the lower-bound construction is co-bipartite, its pathwidth and treewidth coincide [14]: a corollary to our theorem therefore shows that Treewidth parameterized by vertex-deletion distance to a clique does not admit a polynomial kernel unless NP ⊆ coNP/poly, thereby significantly strengthening a result of our earlier work [7], where we only managed to prove kernel lower bounds for modulators to cluster graphs and co-cluster graphs. Our kernel bound effectively shows that even in graphs which are cliques after the deletion of k vertices, the information contained in the (non)edges between these k vertices and the clique is such that we cannot decrease the size of the clique to polynomial in k in polynomial time, without changing the answer in some cases. Faced with these negative results, we try to formulate safe reduction rules for Pathwidth (Section 3). It turns out that many of the rules for Treewidth (e.g., the rules involving (almost) simplicial vertices) are invalid when applied to Pathwidth, and more careful reduction procedures are needed to reduce the number of such vertices. We obtain several reduction rules for pathwidth, and show that they lead to provable data reduction guarantees when analyzed using a suitable parameterization (Section 4).
In particular we prove that Pathwidth parameterized by a vertex cover S (i.e., using F as the class of edgeless graphs in the template above) admits a kernel with O(|S|^3) vertices, that the parameterization by a modulator S to a disjoint union of stars has a kernel with O(|S|^4) vertices, and finally that parameterizing by a set S whose deletion leaves a graph in which every connected component has at most c vertices admits a kernel with O(c·|S|^3 + c^2·|S|^2) vertices.

Preliminaries

In this work all graphs are finite, simple, and undirected. The open neighborhood of a vertex v ∈ V in a graph G is denoted by N_G(v), and its closed neighborhood by N_G[v] = N_G(v) ∪ {v}. If S ⊆ V is a vertex set then G − S denotes the graph obtained from G by deleting all vertices of S and their incident edges. For a single vertex v we write G − v instead of G − {v}. A vertex v is simplicial if N_G(v) is a clique, and almost simplicial if N_G(v) \ {w} is a clique for some neighbor w of v; in such a case, we call w the special neighbor of v. For a set of vertices W ⊆ V, the subgraph of G induced by W is denoted as G[W]. A path decomposition of a graph G = (V, E) is a non-empty sequence (X_1, ..., X_r) of subsets of V called bags, such that: for all edges {v, w} ∈ E there is a bag X_i containing v and w, and for all vertices v ∈ V, the bags containing v are consecutive in the sequence. The width of a path decomposition is max_{1 ≤ i ≤ r} |X_i| − 1. The pathwidth pw(G) of G is the minimum width of a path decomposition of G. Throughout the paper we will often make use of the fact that the pathwidth of a graph does not increase when taking a minor. We also use the following results.

Lemma 1 (cf. [9]). If a graph G contains a clique W then any path or tree decomposition for G has a bag containing all vertices of W.

Lemma 2. All graphs G admit a minimum-width path decomposition in which each simplicial vertex is contained in exactly one bag of the decomposition.

Proof. Lemma 1 shows that for each simplicial vertex v, any path decomposition of G has a bag containing the clique N[v]. As removal of v from all other bags preserves the validity of the decomposition, we may do so successively for all simplicial vertices to obtain a decomposition of the desired form.

Reduction Rules

In this section we give a collection of reduction rules. Formally, each rule takes as input an instance (G, S, k) of Pathwidth parameterized by a modulator to F, and outputs an instance (G', S', k'). With the exception of occasionally outright deciding yes or no, none of our reduction rules change the modulator S or the value of k. In the interest of readability we shall therefore be less formal in our exposition, and make no mention of the values of S' and k' in the remainder; they will be understood to be equal to S and k. We say that a rule is safe for pathwidth (or in short: safe) if for each input (G, S, k) and output (G', S', k'), the pathwidth of G is at most k if and only if the pathwidth of G' is at most k'. Any subset of the rules gives a 'safe' preprocessing algorithm for pathwidth: apply the rules until no longer possible. We will argue later that this takes polynomial time for our rules, and give kernel bounds for some parameters of the graphs.

Vertices of small degree

We start off with a few simple rules for vertices of small degree. Note that, necessarily, these rules are slightly more restrictive than for the treewidth case; e.g., we cannot simply delete vertices of degree one since trees have treewidth one but unbounded pathwidth. The first rule is trivial.

Rule 1. Delete any vertex of degree zero.

Rule 2. If two degree-one vertices share their neighbor then delete one of them.
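To illustrate how such rules are applied in practice, here is a small Python sketch using the networkx library that exhaustively applies Rules 1 and 2; it is an illustrative implementation written for this text, not code accompanying the paper.

```python
# Illustrative implementation of Rule 1 (delete isolated vertices) and
# Rule 2 (if two degree-one vertices share a neighbor, delete one of them).
import networkx as nx

def apply_low_degree_rules(G: nx.Graph) -> nx.Graph:
    """Exhaustively apply Rules 1 and 2; both preserve pathwidth <= k."""
    G = G.copy()
    changed = True
    while changed:
        changed = False
        isolated = [v for v in G if G.degree(v) == 0]  # Rule 1
        if isolated:
            G.remove_nodes_from(isolated)
            changed = True
        for v in list(G):
            if v not in G:          # may already have been deleted as a leaf
                continue
            leaves = [u for u in G.neighbors(v) if G.degree(u) == 1]
            if len(leaves) >= 2:    # Rule 2, applied repeatedly in one pass
                G.remove_nodes_from(leaves[1:])
                changed = True
    return G

# Example: the star K_{1,4} reduces to a single edge; both have pathwidth 1.
H = apply_low_degree_rules(nx.star_graph(4))
print(sorted(H.edges()))
```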
Correctness of Rule 2 follows from insights on the pathwidth of trees, pioneered by Ellis et al. [11]. A self-contained proof is provided in the appendix. The following rule handles certain vertices of degree two; a correctness proof is given in the appendix.

Rule 3. Let v, w be two vertices of degree two, and suppose x and y are common neighbors of v and w with x ∈ S. Then remove w and add the edge {x, y}.

The differences with safe rules for treewidth are interesting to note: for treewidth, we can remove vertices of degree one, and remove vertices of degree two when adding an edge between their neighbors.

Common neighbors and disjoint paths

Rule 4 in this section also appears in our work on kernelization for treewidth [7] and traces back to well-known facts about treewidth (e.g., [2,10]). It is also safe in the context of pathwidth; the safeness proof is identical to the treewidth case and is hence deferred to the appendix. A special case of Lemma 3, and the implied Rule 4, is when v and w have at least k + 1 common neighbors. As we do not want to increase the size of a modulator, we only add edges between pairs of vertices with at least one endpoint in the modulator; thus G − S remains unchanged.

Rule 4 (Disjoint paths (with a modulator)). Let v ∈ S be nonadjacent to w ∈ V, and suppose there are at least k + 1 paths from v to w that only intersect at v and w, where k denotes the target pathwidth. Then add the edge {v, w}.

Simplicial vertices

In this section, we give a safe rule that helps to bound the number of simplicial vertices of degree at least two in a graph. Recall that we already have rules for vertices of degree one and zero, which are trivially simplicial.

Lemma 4. Let G = (V, E) be a graph, and let v ∈ V be a simplicial vertex of degree at least two. If for all x, y ∈ N_G(v) with x ≠ y there is a simplicial vertex w ∈ N_G[v] such that x, y ∈ N_G(w), then pw(G) = pw(G − v).

Proof. As G − v is a minor of G, we have pw(G − v) ≤ pw(G). For the converse, let (X_1, ..., X_r) be an optimal path decomposition of G − v. Using Lemma 2, we assume that for each simplicial vertex x, there is a unique bag containing x. Let C = N_G(v). A bag that contains C is called a C-bag. As C is a clique, Lemma 1 shows there is at least one C-bag. The C-bags must be consecutive in the path decomposition; let them be X_{i1}, ..., X_{i2}. We will first show there is a vertex w ∈ N_G[v] which is simplicial in G − v and is contained in a C-bag. Let x, y ∈ C (possibly with x = y) be vertices such that x does not occur in bags with index smaller than i1, and y does not occur in bags of index larger than i2. If x ≠ y then let w ∈ N_G[v] be simplicial in G such that x, y ∈ N_G(w), whose existence is guaranteed by the preconditions. As w is also simplicial in G − v it occurs in a unique bag, which must be a C-bag since it must meet its neighbors x and y there. If x = y then, as v has degree at least two, there is a vertex w ∈ N_G[v] which is simplicial in G and adjacent to x; hence its unique occurrence is also in a C-bag. Thus we have established that there is a vertex w ∈ N_G[v] which is simplicial in G − v and is contained in exactly one bag, which is a C-bag X_i. Now insert a new bag just after X_i, with vertex set (X_i \ {w}) ∪ {v}. As X_i \ {w} contains all of v's neighbors, this gives a path decomposition of G without increasing the width, and concludes the proof.

Lemma 4 directly shows that Rule 5 is safe for Pathwidth.

Rule 5. Let v be a simplicial vertex of degree at least two. If for all x, y ∈ N_G(v) with x ≠ y there is a simplicial vertex w ∈ N_G[v] such that x, y ∈ N_G(w), then remove v.

Simplicial components

Let S be the set of vertices used as the modulator.
We say that a set of vertices W is a simplicial component if W is a connected component of G − S and N_G(W) ∩ S is a clique. Our next rule deals with simplicial components. Note that we have to include the case v = w to ensure correctness for simplicial components which are adjacent to exactly one vertex in the modulator. Lemma 9 in the appendix shows that Rule 6 is safe. Let us briefly discuss the running time of this reduction rule. As the modulator ensures that G − S is contained in the graph class F, the rule can be applied in polynomial time if the pathwidth of graphs in F can be determined efficiently. In the setting in which we apply the rule, the graphs in F are either disjoint unions of stars (which are restricted types of forests, allowing the use of the linear-time algorithm of Ellis et al. [11]), or F has constant pathwidth, which means that the FPT algorithm for k-Pathwidth [2] runs in linear time.

Almost simplicial vertices

For almost simplicial vertices, we have a rule that replaces an almost simplicial vertex by a number of vertices of degree two. In several practical settings the increase in the number of vertices may be undesirable; the rule is useful to derive some theoretical bounds.

Lemma 5. Let G = (V, E) be a graph and let v ∈ V be an almost simplicial vertex of degree at least three, with special neighbor w. Let G' be obtained by deleting v and by adding a vertex v_{p,q} with neighbors p and q for every pair p, q ∈ N_G(v) with p ≠ q. Then pw(G) = pw(G').

The proof of the lemma is moved to the appendix. The lemma justifies the following reduction rule, by observing that an almost simplicial vertex v with deg_G(v) > k + 1 means that pw(G) > k, as N_G[v] − w then forms a clique of size at least k + 2.

Rule 7. Let v ∈ V \ S be an almost simplicial vertex of degree at least three with special neighbor w. Let k be the target pathwidth. If deg_G(v) > k + 1 then output no. Otherwise, delete v and add a vertex v_{p,q} with neighbors p and q for every pair p, q ∈ N(v) with p ≠ q.

As a simplicial vertex is trivially almost simplicial, note that, in comparison to Rule 5, the previous rule gives an alternative way of dealing with simplicial vertices.

Polynomial kernelizations

For each of the safe rules given in the previous section, there is a polynomial-time algorithm that tests if the rule can be applied, and if so, modifies the graph accordingly. (We assume that for Rule 6 the bound on the pathwidth of the components is a constant.) The following lemma shows that any algorithm that exhaustively applies (possibly just a subset of) these reduction rules can be implemented to run in polynomial time.

Lemma 6. On an input graph with n vertices, there can be at most O(n^2 + nk^2) applications of the reduction rules.

Proof. First we note that for non-trivial instances, Rule 4 does not add edges to a vertex of degree at most two. In particular, no rule increases the number of vertices of degree at least three. So, we have at most n applications of a rule that removes a vertex of degree at least three, and O(n^2) applications of Rule 4. Rule 7 is therefore executed at most n times in total, and thus the number of vertices of degree two that are added in these steps is bounded by O(nk^2). As each other rule removes at least one vertex, the total number of rule applications is bounded by O(n^2 + nk^2).

By analyzing our reduction rules with respect to different structural parameters, we get the following results.

Theorem 1. Pathwidth parameterized by a modulator to F admits polynomial kernels for the following choices of F:
1. A kernel with O(ℓ^3) vertices when F is the class of edgeless graphs, i.e., if the modulator S is a vertex cover.
2.
A kernel with O(c·ℓ^3 + c^2·ℓ^2) vertices when F is the class of all graphs with connected components of size at most c.
3. A kernel with O(ℓ^4) vertices when F is the class of all disjoint unions of stars.

Proof. We show Part 3 followed by Part 2. Part 1 follows from the latter since it is a special case corresponding to c = 1.

(Part 3.) As stars have pathwidth one, graphs with a modulator S of size ℓ to a set of stars have pathwidth at most ℓ + 1. Thus, if k ≥ ℓ + 1, we return a dummy yes-instance of constant size. Now, assume k ≤ ℓ. Our kernelization applies Rules 1-6 while possible, and applies Rule 7 to all vertices which have at most one neighbor in G − S. (Applying the rule to vertices with more neighbors in G − S might cause the resulting graph G' − S not to be a disjoint union of stars.) Recall for Rule 6 that pw(G − S) ≤ 1. Let (G, S, k) be a reduced instance. We will first bound the number of connected components of G − S, with separate arguments for simplicial and nonsimplicial components. Each component is a star, i.e., it is a single vertex or a K_{1,r} for some r (a center vertex with r leaves). Note that in this proof the term leaf refers to a leaf of a star in G − S, independent of its degree in G (and all degrees mentioned are with respect to G). Associate each nonsimplicial component C of G − S to an arbitrary pair of nonadjacent neighbors of C in S. It is easy to see that each such component provides a path between the two chosen neighbors, and that for different components these paths are internally vertex disjoint. Thus, since Rule 4 does not apply, no pair of vertices of S has more than k components associated to it. For each simplicial component W, choose suitable vertices v, w ∈ N_G(W) ∩ S (possibly with v = w) and associate W to the pair v, w. It follows that no pair of vertices of S has more than 2k + 3 components associated to it, which gives a bound of (2k + 3)·|S|^2 = O(ℓ^3) on the number of simplicial components. Thus we find that G − S has a total of O(ℓ^3) connected components (each of which is a star). This bounds the number of centers of stars by O(ℓ^3). It remains to bound the total number of leaves that are adjacent to those centers. Clearly, each star center has at most one leaf which has degree one (in G) by Rule 2. Each leaf of degree two has exactly one neighbor in S in addition to its adjacent star center. Since Rule 3 does not apply, no two leaves of degree two can have the same star center and neighbor in S; thus there are at most O(ℓ^4) leaves of degree two. Now we count the number of leaves (of stars) that are of degree more than two. For each such leaf, one neighbor is the center of its star and all other neighbors are in S. If its neighbors in S formed a clique, then the leaf would be almost simplicial in G (with the star center as the special neighbor) and Rule 7 would apply. Hence, as G is reduced, we can associate each such leaf to a nonadjacent pair of vertices in S. As Rule 4 cannot be applied, at most O(k) vertices are associated to any pair, and thus the number of such leaves is bounded by O(k·|S|^2) = O(ℓ^3). Thus, the total number of vertices in G is bounded by O(ℓ^4). By Lemma 6 the reduction rules can exhaustively be applied in polynomial time. As the rules preserve the fact that G − S is a disjoint union of stars, the resulting instance is a correct output for a kernelization algorithm. This completes the proof of Part 3.

(Part 2.) Fix some constant c and let F be the class of all graphs of component size at most c. Let (G, S, k) be an input instance.
Note that the pathwidth of G is bounded by c + |S| − 1, since each component of G − S has pathwidth at most c − 1. We assume that k ≤ c + |S| − 2; otherwise the instance is yes and we may return a dummy yes-instance of constant size. Our algorithm uses Rules 1, 2, 4, 5, and 6. Regarding Rule 6 we note that the pathwidth of components of G−S can be computed in constant time (depending only on c). Consider a graph G where none of these rules can be applied. The bounds for the number of simplicial and nonsimplicial components of G−S work analogously to Part 3; there are O(k|S| 2 ) components of the respective types. This gives a This completes the proof of Part 2. We remark that while Rule 5 is not needed to establish the kernel bounds presented in the previous theorem, we have included it in our presentation for two reasons. First of all, applying the rule may prove to be useful in practical situations. It also leads to improved theoretical bounds on some quantities. For example, when F is the class of all independent sets (and the modulator S is a vertex cover), the use of Rule 5 will decrease the number of simplicial vertices in G − S to O(|S| 2 ), whereas the bound would be O(|S| 3 ) without this rule. Alas, as the kernel size for this choice of F is dominated by the Θ(|S| 3 ) term for the number of non-simplicial vertices, we still end up with a cubic-vertex kernel. Lower bounds: Modulator to a Single Clique We complement the positive results of the previous section by some negative results. In particular, we show that the problems Treewidth parameterized by a modulator to a single clique (TWMSC) and Pathwidth parameterized by a modulator to a single clique (PWMSC) do not admit a polynomial kernel unless NP ⊆ coNP/poly. In fact, we show that the results hold when restricted to co-bipartite graphs; as for these graphs the pathwidth equals the treewidth [14], the same proof works for both problems. The problems are covered by the general template given in the introduction, when using F as the class of all cliques. Observe that F only contains connected graphs, and in particular F is not closed under disjoint union. To prove the lower bound we employ the technique of cross-composition [6], starting from the following NP-complete version [15, Corollary 2.10] of the Cutwidth problem: Cutwidth on cubic graphs (Cutwidth3) Instance: A graph G on n vertices in which each vertex has degree at least one and at most three, and an integer k ≤ |E(G)|. Question: Is there a linear layout of G of cutwidth at most k, i.e., a As space restrictions prohibit us from presenting the full proof in this extended abstract, we will sketch the main ideas. To obtain a kernel lower bound through cross-composition, we have to embed the logical OR of a series of t input instances of Cutwidth3 on n vertices each into a single instance of the target problem for a parameter value polynomial in n + log t. At the heart of our construction lies an idea of Arnborg et al. [1] employed in their NP-completeness proof for Treewidth. They interpreted the treewidth of a graph as the minimum cost of an elimination ordering on its vertices 2 , and showed how for a given graph G a co-bipartite graph G * can be created such that the cost of elimination orderings on G * corresponds to the cutwidth of G under a related ordering. We extend their construction significantly. By the degree bound, instances with n vertices have O(n 2 ) different degree sequences. 
The framework of crosscomposition thus allows us to work on instances with the same degree sequence (and same k). By enforcing that the structure of one side of the co-bipartite graph G * only has to depend on this sequence, all inputs can share the same "right hand side" of the co-bipartite graph; this part will remain small and act as the modulator. By a careful balancing act of weight values we then ensure that the cost of elimination orderings on the constructed graph G * are dominated by eliminating the vertices corresponding to exactly one of the input instances, ensuring that a sufficiently low treewidth is already achieved when one of the input instances is yes. On the other hand, the use of a binary-encoding representation of instance numbers ensures that low-cost elimination orderings for G * do not mix vertices corresponding to different input instances. The remaining details can be found in Appendix D. Our construction yields the following results. Theorem 2. Unless NP ⊆ coNP/poly, Pathwidth and Treewidth do not admit polynomial kernels when parameterized by a modulator to a single clique. Interestingly, the parameter at hand is nothing else than the size of a vertex cover in the complement graph. Conclusions In this paper, we investigated the existence of polynomial kernelizations for Pathwidth. Taking into account that the problem is already known to be ANDcompositional with respect to the target pathwidth -thus excluding polynomial kernels under the AND-distillation conjecture -we study alternative, structural parameterizations. Our main result is that Pathwidth admits no polynomial kernelization with respect to the number of vertex deletions necessary to obtain a clique, unless NP ⊆ coNP/poly. This rules out polynomial kernels for vertex deletion distance from various interesting graph classes on which Pathwidth is known to be polynomial-time solvable, like chordal and interval graphs. On the positive side we develop a collection of safe reduction rules for Pathwidth. Analyzing the effect of the rules we show that they give polynomial kernels with respect to the following parameters: vertex cover (i.e., distance from the class of independent sets), distance from graphs of bounded component size, and distance from disjoint union of stars. It is an interesting open problem to determine whether there is a polynomial kernel for Pathwidth parameterized by the size of a feedback vertex set. For the related Treewidth problem, a kernel with O(|S| 4 ) vertices is known [7], where S denotes a feedback vertex set. Regarding Pathwidth, long paths in G − S are the main obstacle that needs to be addressed by additional reduction rules. A Safeness of low degree rules Lemma 7. Rule 2 is safe. Proof. Let u and v be two vertices of degree one with shared neighbor w and let G be obtained from G by removing v. By Lemma 2 there is a minimum-width path decomposition (X 1 , . . . , X r ) of G in which u occurs in a unique bag X i , which must therefore also contain w. We obtain a path decomposition for G by adding a bag X i containing (X i \ {u}) ∪ {v} next to bag X i . As this does not increase the width, safeness of the rule follows. Proof. Suppose we obtain G from G by applying Rule 3. The graph G can be obtained from G by contracting the edge {w, y} and thus G is a minor of G. Hence the pathwidth of G is at most the pathwidth of G. As vertex v is simplicial in G , by Lemma 2 we can assume that we have a path decomposition of optimal width containing a unique bag X i with v, x and y. 
Take a new bag X i with X i = X i − {v} ∪ {w}, and insert it in the path decomposition directly after X i . This gives a path decomposition of G of the same width. Proof. Clearly, pw(G) ≤ k implies pw(G − W ) ≤ k. Now, suppose that the pathwidth of G − W is at most k. Consider a path decomposition (X 1 , . . . , X r ) of G − W of width at most k. We assume that X 1 = X r = ∅ for notational convenience. hence adding an optimal path decomposition of G[W ] to the decomposition of G − W does not increase its width, as there is no interaction between the two parts. In the remainder we may therefore assume that Y = ∅. B Safeness of the simplicial components rule Note that Y is a clique in G, so by Lemma 1 and the convexity property of path decompositions there are j 1 ≤ j 2 , such that Y ⊆ X j ⇔ j 1 ≤ j ≤ j 2 , i.e., the bags that contain all vertices in Y are precisely those with index in j 1 , . . . , j 2 . We say that a simplicial component By assumption, there are at least 2k + 3 simplicial components C such that pw(G[C]) ≥ pw(G[W ]) and {v, w} ⊆ N (C). If one of these is internal to Y , then we are done. Consider such a component C. As it is not internal, there must be a bag X i1 with X i1 ∩ C = ∅ and i 1 < j 1 or j 2 < i 1 . Suppose i 1 < j 1 . By adjacency of C to v, there must be a bag i 2 with v ∈ X i2 and X i2 ∩ C = ∅. As i 2 ≥ j 1 by choice of v, we have established that C ∩ X i1 = ∅, C ∩ X i2 = ∅ and i 1 < j 1 ≤ i 2 . By connectivity of C there is a path from a vertex in C ∩ X i1 to a vertex in C ∩ X i2 in G[C]. By tracing this path in the path decomposition, it is easy to see that C ∩ X j1 = ∅. A similar argument shows that when j 2 < i 1 then C ∩ X j2 = ∅. Therefore, each of the 2k + 3 simplicial components C has a vertex in X j1 or in X j2 . So |X j1 | ≥ k + 2 or |X j2 | ≥ k + 2, which contradicts the assumption that we had a path decomposition of width k. Now, let Z be an internal simplicial component of G−W such that pw(G[Z]) ≥ pw(G[W ]). Suppose the pathwidth of G[Z] is . There must be a bag X i that contains at least + 1 vertices of Z, otherwise (X 1 ∩ Z, . . . , X r ∩ Z) is a path decomposition of Z of width less than . Let (Z 1 , . . . , Z q ) be a path decomposition of G[Z] of width , and let (W 1 , . . . , W p ) be a path decomposition of G[W ] of width at most . Now, the following tuple constitutes a path decomposition for G of width at most k: It is obtained by first removing Z from all bags, making p + q copies of X i . and then adding in successive copies a path decomposition of G[Z] followed by a path decomposition of G[W ]. As |X i \ Z| ≤ |X i | − ( + 1), this transformation does not increase the width. C Safeness of the almost simplicial vertex rule Lemma 5. Let G = (V, E) be a graph and let v ∈ V be an almost simplicial vertex of degree at least three, with special neighbor w. Let G be obtained by deleting v and by adding a vertex v p,q with neighbors p and q for any p, q ∈ N G (v) with p = q. Then pw(G) = pw(G ). Proof. Let C := N (v) \ {w} and C v = C ∪ {v}. For ease of reading we denote by v p,q those vertices where p, q ∈ C and by v p,w those adjacent to w as well as to some p ∈ C. (≥): We first show that pw(G) ≥ pw(G ). Let P = (X 1 , . . . , X r ) be a path decomposition of G. We show how to get a path decomposition for G of at most the same width. Clearly C v = C ∪{v} is a clique in G, so there must be bags of P that completely contain C v ; we call those the C v -bags (they must be connected). 
If w is contained in a C v -bag, then we may delete v from all other bags while maintaining a path decomposition for G. We copy this bag, such that we get a consecutive chain of |N (v)| 2 bags, one for each vertex v p,q and v p,w . In each of those bags, we replace v by a different v p,q or v p,w vertex. It is easy to see that this gives a path decomposition for G of at most the same pathwidth. It remains to consider the more difficult case that w is not contained in any C v bag. W.l.o.g., let w occur only left of the C v -bags and let B denote the rightmost bag containing w. It follows that v occurs in all bags from B to the C v bags, in order to represent the edge {v, w} and adhere to connectivity of occurrences. Let B be the bag directly on the left of the C v bags (possibly B = B ). It contains v, so there must be somep ∈ C which is not contained in B , otherwise it would be a C v -bag. (Note that this means that w cannot be adjacent top in G.) Further, let B denote the leftmost C v -bag. We make a small modification of P to prepare the replacements that are necessary to obtain a path decomposition for G : We replace in B and all bags left of it the vertex v by the vertex w, while we simply delete v if w is already present. At this point we have a path decomposition for G except for not representing the edge {v, w}. The key observation is that v and w share the bag B before these changes are made, and that all other neighbors of v are in C (and hence contained in the C v -bag B ). Now we will make replacements around the adjacent bags B and B in such a way that we obtain a path decomposition for G . For ease of presentation let B = X ∪ {w} withp / ∈ X and let B = C ∪ X ∪ {v}, where X and X are vertex sets. We replace B and B by the following sequence of bags: It is easy to see that all edges incident with vertices v p,q or v p,w are properly represented. It can also be verified that connectivity is not violated. The key points for this are thatp / ∈ X , and that B and B were adjacent bags before the transformation. Thus after deleting all remaining occurrences of v, we obtain a path decomposition for G of at most the same width. (≤): Now we show that pw(G) ≤ pw(G ). Let P = (X 1 , . . . , X r ) be a path decomposition of G . We consider the bags of P that completely contain C, the C-bags. If there is a v p,w vertex in a C-bag, then replacing all occurrences of that vertex by v, and deleting all other introduced vertices, we obtain a path decomposition for G: The reason is that v p,w occurs together with C, but also together with w. In this case we are done. Otherwise, if the C-bags contain no v p,w vertex, then this implies that: 1. All vertices of C must occur also outside of C-bags. 2. The vertex w must occur at least once outside the C-bags. Assume for contradiction that w does not occur right of the C-bags. This would imply that all v p,w vertices can only be contained in bags left of the C-bags. Then however, all vertices p ∈ C must occur left of the C-bags, contradicting the fact that only the C-bags contain C completely. Thus, by symmetry, w occurs left and right of the C-bags. Hence, by connectivity of occurrences, w is contained in all C-bags. It follows, that if any C-bags contains a v p,q vertex, we may replace that vertex by v, and again obtain a path decomposition for G of the same width; observe that the bag in question contains N (v) = C ∪ {w}. We will show that there is always a C-bag containing some v p,q vertex. 
Assume for contradiction that no C-bag contains any v p,q vertex. We already know that each vertex of C must occur outside of the C-bags, and we have used that not all those vertices can occur left of the C-bags (or not all right of them). Hence, there are distinct verticesp,q ∈ C such thatp does not occur right of the C-bags, andq does not occur left of them. Now, considering the vertex vp ,q this leads to a contradiction, as that vertex must occur both left and right of the C-bags, but by assumption it is not contained in any C-bag, contradicting connectivity of occurrences. Remark 1. Lemma 5 has a much simpler proof for the special case of v being simplicial. It is easy to see that the replacement of v does not generalize to the case of more special neighbors: Consider the 3-claw which has pathwidth one. Making an analogous replacement for the center of the claw, would create a cycle of length six which has pathwidth two. In general, when there is more than one special neighbor, then the modification creates a larger clique minor, than what the degree of v would imply. Remark 2. It can be easily seen that even the special case of simplicial vertices is not correct for treewidth. Application on one vertex of a clique of size four (and treewidth three) gives a graph of treewidth two. D Hardness for Pathwidth and Treewidth Parameterized by a Modulator to a Single Clique In this section, we give the proof that Treewidth and Pathwidth, parameterized by a modulator to a single clique do not have a polynomial kernel unless NP ⊆ coNP/poly. More precisely, we show this when we additionally restrict our input to co-bipartite graphs. As the treewidth equals the pathwidth for these graphs, the same proof can be used for the pathwidth as well as the treewidth version. A tree decomposition of a graph G = (V, E) is a pair ({X i | i ∈ I}, T = (I, F )) with {X i | i ∈ I} a family of subsets of V , and T a tree on edge set F , such that The sets X i are called the bags of the tree decomposition. The width of a tree decomposition ({X i | i ∈ I}, T = (I, F )) is max i∈I |X i | − 1, and the treewidth of G is the minimum width of a tree decomposition of G. If we have a weight function w : V (G) → N then the weighted width of a tree decomposition ({X i | i ∈ I}, T = (I, F )) of G equals max i∈I v∈Xi w(i), and the weighted treewidth of G is the minimum weighted width of a tree decomposition of G. Observe that, contrary to the case of normal treewidth, we do not subtract one; hence the weighted treewidth of a tree in which w(v) = 1 for all vertices is two, rather than one. An alternative characterization of treewidth is with help of elimination orderings. An elimination ordering of a graph is a permutation of its vertices. To eliminate a vertex v corresponds to making its neighbors into a clique and then deleting v. Given an elimination ordering π of a graph G, we obtain a sequence of graphs by eliminating the vertices of G in the order of π. The following proposition is well known, see e.g., [3]. Proposition 1. Graph G has treewidth at most k if and only if G has an elimination ordering in which each vertex has degree at most k when it is eliminated. The kernel lower bound is proven using the framework of cross-composition, which relies on the following notions. Definition 1 (Polynomial equivalence relation [6]). An equivalence relation R on Σ * is called a polynomial equivalence relation if the following two conditions hold: 1. 
There is an algorithm that given two strings x, y ∈ Σ * decides whether x and y belong to the same equivalence class in (|x| + |y|) O(1) time. 2. For any finite set S ⊆ Σ * the equivalence relation R partitions the elements of S into at most (max x∈S |x|) O(1) classes. Definition 2 (Cross-composition [6]). Let L ⊆ Σ * be a set and let Q ⊆ Σ * × N be a parameterized problem. We say that L cross-composes into Q if there is a polynomial equivalence relation R and an algorithm which, given t strings x 1 , x 2 , . . . , x t belonging to the same equivalence class of R, computes an instance (x * , k * ) ∈ Σ * × N in time polynomial in t i=1 |x i | such that: Theorem 3 ([6] ). If some set L is NP-hard under Karp reductions and L crosscomposes into the parameterized problem Q then there is no polynomial kernel for Q unless NP ⊆ coNP/poly. We start with a number of results on (weighted) treewidth that are folklore or can easily be proved with standard arguments (see, e.g., [17]). Definition 3. Consider a graph G weighted by function w, and an elimination ordering π on the vertices of G. The cost of π is the maximum over all vertices v ∈ V (G) of the weight of N [v] at the time that v is eliminated by π. Proposition 2. Graph G has weighted treewidth at most k if and only if there is an elimination ordering of G of cost at most k. Proposition 3 (Cf. [1]). Let G be a co-bipartite graph on bipartition V (G) := A∪B weighted by function w. For every elimination ordering π on G there is an elimination ordering π which does not cost more than π, such that π first eliminates all vertices of A, and finishes by eliminating all vertices of B. Proposition 4. Let G be a graph with weight function w containing two adja- . Let π be an elimination ordering of G which eliminates w before v, and let the ordering π be obtained by updating π such that it eliminates v just before w. Then the cost of π is not higher than the cost of π. Proposition 5. Let G be a graph with positive integral vertex weights. Let G be the graph obtained from G by iterating the following procedure. As long as there is a vertex v with weight more than one, subtract one from the weight of v and add a new vertex v of weight 1 and with neighborhood N [v]. Then the treewidth of G equals the weighted treewidth of G minus one. Theorem 4. Treewidth parameterized by a modulator to a single clique does not admit a polynomial kernelization unless NP ⊆ coNP/poly. Proof. We show that the NP-complete Cutwidth3 problem cross-composes into TWMSC. We start by defining a polynomial equivalence relationship R. Fix an encoding of instances of Cutwidth3, and choose R such that all strings which do not encode a valid instance are equivalent. For the strings which do encode a valid instance, define two instances (G 1 , k 1 ) and (G 2 , k 2 ) to be equivalent if all of the following hold: k 1 = k 2 , |V (G 1 )| = |V (G 2 )|, |E(G 1 )| = |E(G 2 )|, and for each integer i ∈ {1, 2, 3} the number of degree-i vertices in G 1 and G 2 is the same. Since a set of valid instances on at most n vertices each is partitioned into at most n × n × n 3 equivalence classes, this constitutes a polynomial equivalence relationship. We now show how to cross-compose a set of instances of Cutwidth3 which belong to the same equivalence class of R. If all instances are malformed, then this can be recognized in polynomial time and we simply output a constant-size no-instance. So in the remainder we may assume that all input instances (G 1 , k 1 ), . . . 
, (G t , k t ) are well-formed and belong to the same equivalence class; in particular k 1 = . . . = k t = k and |V (G 1 )| = . . . = |V (G t )| = n. Order the vertices within each graph by increasing degree, breaking ties arbitrarily. The choice of R, together with the fact that each G i has maximum degree 3, guarantees that each graph has the same number of vertices of each degree. Since Cutwidth on a graph on n vertices can be solved in O * (2 n ) time [5, Theorem 10], we may assume that n ≥ log t. For if n < log t then applying the algorithm by Bodlaender et al. [5] consecutively on each instance can be done in time which is polynomial in the total input size (which is at least t); we could then output a constant-size instance with the appropriate answer as the output of the cross-composition. For similar reasons we may assume n ≥ 2. Finally, we may assume that the number of input instances t is a power of 2 since we can duplicate some instances without changing the value of the OR, increasing the input size by at most a factor two. To construct the instance of TWMSC that encodes the OR of the input instances, we use a two-stage process for the ease of presentation. We first show that the OR of the input instances can be encoded into an instance of Weighted Treewidth parameterized by a modulator to a single clique on a cobipartite graph with partite sets A and B, such that the total weight of the set B is polynomial in n + log t. The set B will be the modulator, which is valid since removing the partite set B from a co-bipartite graph leaves a clique. We then use Proposition 5 to obtain an equivalent instance of TWMSC, and since the total weight of B is sufficiently small this produces an instance of TWMSC that encodes the OR of the input instances, and has a modulator to a single clique whose size is polynomial in n + log t. We now construct a graph G * and weight function w such that computing the weighted treewidth of G * corresponds to computing the OR of the instances of Cutwidth3. The construction is based on the NP-completeness proof for Treewidth by Arnborg et al. [1]. The graph G * will be co-bipartite with partite sets A and B, so V (G * ) := A∪B and A and B are cliques in G * . The graph G * is defined as follows: -For each input graph G i with i ∈ [t], for each vertex j ∈ V (G i ), we add a vertex v i,j of weight n 3 to A which corresponds to vertex j. For a given value of j ∈ [n] we say that all vertices v i,j (for all relevant values of i) are A-representatives of node j. We also add a dummy vertex d i for each instance i ∈ [t] to A of weight n 6 . We turn A into a clique. -The vertex set B consists of three parts: the instance selector vertices B I , the node representatives B N and the edge representatives B E . • The instance selector vertices will be used to encode the binary representation of instance numbers. Since we assumed t to be a power of two, we need log t bits to encode an instance number and therefore 2 log t vertices are used to represent all possible bit values for log t positions. So B I := {a q , b q | q ∈ [log t]}. Each vertex in B I has weight n 5 . We connect the vertices of B I to the vertices of A as follows. We make a vertex v i,j in A (which corresponds to instance i) adjacent to the instance selector vertices of the bit values of the binary representation of i. So for q ∈ [log t], if the q-th bit of number i is 1 then we make v i,j adjacent to a q , and if the bit is 0 then we make the vertex adjacent to b q . 
The adjacency from dummy vertices d i to the vertices of B I is defined exactly the same through the binary representation of i. • The node representatives B N contain a vertex for each node number in [n]. Recall that all input graphs have the same number of vertices of each degree, and that we sorted the vertices by degree. When we write deg(j) for j ∈ [n] we will therefore take this to mean the value d such that in each input graph, each vertex j has degree d. For each j ∈ [n] we add a vertex x j to the set B N and give it weight n 3 − deg(j). The vertex x j is said to be the (unique) B-representative of node j. The adjacency between B N and A is simple: for each j ∈ [n] we make all A-representatives of j adjacent to the single B-representative of j, and we make all the nodes in B N adjacent to all the dummy vertices d i for i ∈ [t]. • The edge representatives B E contain one vertex for each possible edge in an undirected n-vertex graph. So for {v, w} ∈ [n] 2 we have a vertex e v,w of weight two. Vertex e v,w is adjacent to an A-representative v i,j if instance G i contains the edge {v, w} and j = v or j = w, i.e., the edge representative e v,w is adjacent to instance i's A-representatives of the endpoints of the edge, provided that instance i actually contains the edge. Additionally, all vertices of B E are adjacent to all dummy ver- The construction is completed by turning B := B I ∪ B N ∪ B E into a clique. We set k := t · (n 4 + n 6 ) + n 3 + n 5 log t + k. To complete the first stage, we need to prove that (G * , w) has weighted treewidth at most k if and only if at least one of the input graphs G i has cutwidth at most k. Before proving this claim, we establish some properties of the constructed instance (G * , w, k ). } for a given instance number i ∈ [t] be the subset of the vertices in A corresponding to instance i. Let π be a permutation of S. Consider the process of eliminating the vertices in S from graph G * in the order given by π, and let E-weight(S π(j) ) be the total weight of N [S π(j) ] when eliminating the vertex π(j) for j ∈ [n]. Then E-weight(S π(j) ) = t · (n 4 + n 6 ) + n 3 + n 5 log t + , where := |{{u, v} ∈ E(G i ) | π(u) ≤ j < π(v)}|. Proof. The intuition behind the proof is that the elimination process has two effects on the weight of neighbors of some vertex v ∈ A: on the one hand, eliminated vertices in A are essentially replaced by the representatives in B N in the neighborhood of v, which have slightly smaller weight than the originals; the difference is exactly equal to the degree of the corresponding vertex. On the other hand, the representative of any edge in B E will be added to those neighborhoods, once one of the endpoints is eliminated; recall that those edges contribute a weight of two. Thus, when reaching the first endpoint of an edge, the weight increases by one (by the degree contribution); when reaching the second endpoint this increase is canceled. Together this leads to the contribution of in E-weight(S π(j) ). This idea was used by Arnborg et al. [1] in their NPcompleteness proof for Treewidth. Armed with this intuition, let us proceed with the proof. By definition of G * , all vertices in S have the same set of neighbors in B I so elimination of vertices from S does not affect the adjacency of other vertices in S to B I . Consider a vertex v i,j in S. From the construction of G * it follows that initially, the only vertex of S which is adjacent to the B-representative of j, is the vertex v i,j . 
Since we only eliminate vertices from S, it follows that a vertex in S is only adjacent to the B-representative of a node number j if that vertex is itself the unique A-representative of j in S, or if the A-representative of j in S was eliminated earlier. Let us use these observations to prove the claim. For an arbitrary value of j ∈ [n] we consider the closed neighborhood of the vertex S π(j) just before it is eliminated. We will study the neighborhood of S π(j) in the sets A, B I , B N and B E consecutively. For convenience, define E-weight X (S π(j) ) for X ⊆ V (G * ) as the total weight of N [S π(j) ]∩X when S π(j) is eliminated. Neighbors in A. Since A is a clique and S ⊆ A, vertex S π(j) is initially adjacent to all vertices of A. Since the only vertices which are eliminated are those in S corresponding to instance i, vertex S π(j) will be adjacent to all vertices for other instances, i.e., to v i ,j for i = i and j ∈ [n], for a total weight of E-weight A\S (S π(j) ) = (t−1)n·n 3 . Vertex S π(j) is also adjacent to all t dummy vertices for a weight of n 6 each. The remaining vertices of A are those in S, and S π(j) is adjacent to those which are not already eliminated. Hence there are n − j + 1 vertices in S which are in the closed neighborhood of S π(j) just before it is eliminated. These have weight n 3 (n − j + 1) so E-weight A (S π(j) ) = (t − 1)n 4 + t · n 6 + n 3 (n − j + 1). Neighbors in B I . Since the neighborhood of S π(j) in B I is not changed by the eliminations, vertex S π(j) has exactly log t neighbors in B I with weight n 5 each so E-weight B I (S π(j) ) = n 5 log t. Neighbors in B N . By construction of G * , vertex S π(j) is adjacent to the unique node in B N which is the B-representative for the vertex for which S π(j) is an A-representative. Initially, S π(j) is not adjacent to other vertices of B N . For each vertex 1 ≤ j < j which was eliminated before j, vertex S π(j) has become adjacent to the vertex in B N which is B-representative for the vertex to which S π(j ) is the A-representative. So E-weight B N (S π(j) ) = j j =1 (n 3 − deg(j )). Neighbors in B E . Initially, vertex S π(j) is adjacent to the edge-representative vertices in B E for which S π(j) represents an endpoint, so to deg(j) vertices with weight two each. For each vertex S π(j ) with 1 ≤ j < j which is eliminated before π(j), S π(j) becomes adjacent to the edge-representatives in B E for edges which are incident on S π(j ) in graph G i . This shows that We can now sum up the weights of the members of the closed neighborhood N [S π(j) ] in each of the respective subsets to establish that E-weight(S π(j) ) equals: To simplify this further, we define E 1 as the set of edges of G i which have one endpoint among the vertices in the range [1 . . . j], and E 2 as the edges of G i with both endpoints among [1 . . . j]. Observe that these definitions imply that We continue the derivation: =t · (n 4 + n 6 ) + n 3 + n 5 log t + |E 1 |. Now observe that by definition, |E 1 | is the number of edges which have one endpoint at or to the left of j, and the other endpoint to the right of j, and hence this is exactly the value of as defined in the statement of the claim; this concludes the proof of Claim 1. The preceding claim relates the cost of the first n eliminations of an ordering of G * to the cutwidth of an instance i, provided that the ordering starts by eliminating the A-representatives of instance i. 
The next claim shows that these first n eliminations essentially dominate the cost of elimination orderings with this structure. Claim 2. Let S, i, π and E-weight be as defined in Claim 1. Consider an elimination ordering for G * which starts by eliminating S in the order given by π, then eliminates the dummy d i corresponding to instance i, and eliminates the remaining vertices in arbitrary order. The cost of π * is max j∈[n] E-weight(S π(j) ). Proof. By Claim 1, the maximum weight of a closed neighborhood when eliminating the vertices from S is exactly max j∈[n] E-weight(S π(j) ) ≥ t · (n 4 + n 6 ) + n 3 + n 5 log t. We show that after elimination of S, eliminating the dummy d i and all remaining vertices does not incur a cost higher than this. is not the case. Consider the first index 1 < j ≤ n such that all vertices π(j ) for 1 ≤ j < j correspond to the same instance i (i.e., they are of the form v i,j for j ∈ [n]) and π(j) corresponds to instance i with i = i . Let us consider the neighborhood of the vertex π(j) when it is eliminated. By construction of G * , vertex π(j) corresponding to instance i is adjacent to the vertices in B I which correspond to the binary representation of i . Since vertex π(1) was eliminated before π(j), and since vertices π(1) and π(j) are adjacent in G * because they are both members of the clique A, after elimination of π(1) the vertex π(j) has become adjacent to all neighbors of π(1). Since π(1) is adjacent to the vertices of B I corresponding to the binary representation of i, and since the binary representations of i and i must differ in at least one position, the number of neighbors of π(j) in B I at the time it is eliminated is at least 1 + log t, and they have weight n 5 each. Since π(j) is also adjacent to all vertices of A except the j − 1 vertices of weight n 3 which were eliminated earlier, this shows that the weight of the closed neighborhood of π(j) at the time it is eliminated is at least t(n 4 + n 6 ) − j · n 3 + (1 + log t)n 5 . Using that j ≤ n and n ≥ 2 (which we assumed in the beginning the proof of the theorem), it now follows that the weight of π(j) at the time it is eliminated is at least as much as the cost of the canonical elimination ordering. Hence the canonical elimination ordering which we defined earlier has cost no more than π, and since the canonical ordering starts by eliminating v 1,1 , v 1,2 , . . . , v 1,n this concludes the proof of Claim 3. We are now finally ready to prove that (G * , w) has weighted treewidth at most k if and only if at least one of the input graphs G i has cutwidth at most k. First assume that (G * , w) has weighted treewidth at most k . By Proposition 2 this implies that there is an elimination ordering π of G * with cost at most k . By Claim 3 we may assume that there is an instance number i * ∈ [t] such that π starts by eliminating all vertices in the set S := {v i * ,j | j ∈ [n]}. As the cost of π is at most k , the weight of the closed neighborhood of a vertex in S at the time it is eliminated does not exceed k . By Claim 1 this proves that max j∈[n] E-weight(S π(j) ) ≤ k . Plugging in the value for k and the expression for E-weight obtained in the mentioned claim, and cancelling terms on both sides, we find that max j∈[n] |{{u, v} ∈ E(G i * ) | π(u) ≤ j < π(v)}| ≤ k which proves that G i * has cutwidth at most k, when using the ordering on S induced by π. For the reverse direction, assume that G i * has cutwidth at most k, and let π * be an ordering which achieves this cutwidth. 
Build an elimination ordering for G * by first eliminating the vertices of S := {v i * ,j | j ∈ [n]} in the order induced by π * , then eliminating the dummy d i * , and then eliminating the remaining vertices in arbitrary order. By Claim 2 the cost of this ordering is dominated by the cost of eliminating the vertices of S, which is max j∈[n] E-weight(S π(j) ). If ordering π * achieves cutwidth at most k on G i * , then evaluating the expression for E-weight given by Claim 1 proves that the cost of π is at most k . Using Proposition 2 this proves that (G * , w) has weighted treewidth at most k . To complete the cross-composition of Cutwidth3 into TWMSC, we can transform the weighted graph (G * , w) to the unweighted graphĜ using the transformation of Proposition 5. Since this transformation duplicates the closed neighborhoods of vertices, it results in a co-bipartite graph since the cliques A and B of G * are just transformed into larger cliques inĜ. LetB be the clique inĜ which results from the transformation of clique B in G * . The size ofB is bounded by the maximum weight of a vertex in B (under w) times the size of B. Since both are polynomial in n + log t, this shows that the size ofB is bounded polynomially in n + log t. Now consider the instance of TWMSC which asks ifĜ with the modulatorB to a single clique (becauseĜ is co-bipartite andB is one of the partite sets) has treewidth at most k − 1; by the equivalence between the weighted treewidth of the original graph, and the normal treewidth of the result of the transformation, our constructed instance is equivalent to the OR of the input instances of Cutwidth3. The size of the modulator, which is the parameter of the TWMSC instance, is polynomial in n + log t. This concludes the cross-composition; Theorem 4 follows by applying Theorem 3. Since the pathwidth of a co-bipartite graph equals its treewidth [14] and the graph formed by the cross-composition is co-bipartite, we obtain the following corollary. Corollary 1. Pathwidth parameterized by a modulator to a single clique does not admit a polynomial kernel unless NP ⊆ coNP/poly.
2012-07-20T03:09:36.000Z
2012-07-04T00:00:00.000
{ "year": 2012, "sha1": "c486f941b87a2a81149b124c30d3eefadd442aaa", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1207.4900", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "4b47580c82606c4805fe334f02b21f99f3479b76", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
8624069
pes2o/s2orc
v3-fos-license
Bubble CPAP to support preterm infants in rural Rwanda: a retrospective cohort study Background Complications from premature birth contribute to 35 % of neonatal deaths globally; therefore, efforts to improve clinical outcomes of preterm (PT) infants are imperative. Bubble continuous positive airway pressure (bCPAP) is a low-cost, effective way to improve the respiratory status of preterm and very low birth weight (VLBW) infants. However, bCPAP remains largely inaccessible in resource-limited settings, and information on the scale-up of this technology in rural health facilities is limited. This paper describes health providers’ adherence to bCPAP protocols for PT/VLBW infants and clinical outcomes in rural Rwanda. Methods This retrospective chart review included all newborns admitted to neonatal units in three rural hospitals in Rwanda between February 1st and October 31st, 2013. Analysis was restricted to PT/VLBW infants. bCPAP eligibility, identification of bCPAP eligibility and complications were assessed. Final outcome was assessed overall and by bCPAP initiation status. Results There were 136 PT/VLBW infants. For the 135 whose bCPAP eligibility could be determined, 83 (61.5 %) were bCPAP-eligible. Of bCPAP-eligible infants, 49 (59.0 %) were correctly identified by health providers and 43 (51.8 %) were correctly initiated on bCPAP. For the 52 infants who were not bCPAP-eligible, 45 (86.5 %) were correctly identified as not bCPAP-eligible, and 46 (88.5 %) did not receive bCPAP. Overall, 90 (66.2 %) infants survived to discharge, 35 (25.7 %) died, 3 (2.2 %) were referred for tertiary care and 8 (5.9 %) had unknown outcomes. Among the bCPAP eligible infants, the survival rates were 41.8 % (18 of 43) for those in whom the procedure was initiated and 56.5 % (13 of 23) for those in whom it was not initiated. No complications of bCPAP were reported. Conclusion While the use of bCPAP in this rural setting appears feasible, correct identification of eligible newborns was a challenge. Mentorship and refresher trainings may improve guideline adherence, particularly given high rates of staff turnover. Future research should explore implementation challenges and assess the impact of bCPAP on long-term outcomes. Background Over 2.9 million neonatal deaths occur every year, representing 44 % of all under five deaths [1][2][3]. In Rwanda, despite a rapid decline in under-five mortality, the number of deaths in the neonatal period remains high (27/1000 live births) with little change over the past 10 years [4,5]. Major causes of neonatal deaths include preterm birth, birth asphyxia and infections. Recently, complications related to prematurity have surpassed pneumonia and diarrheal diseases as the number one cause death in children, and account for 35 % of all neonatal deaths [1-3, [6][7][8]. Hospital-based interventions targeting these causes are needed to reduce neonatal mortality, particularly in low and middle income countries [9][10][11]. The implementation of hospital-based interventions is challenging in resource limited settings. Specifically, intensive care unit technology for respiratory distress, such as a mechanical ventilation, is often not available due to high costs, maintenance demands and the need for highly trained staff. 
However, continuous positive airway pressure (CPAP) has been demonstrated to be a simple, low-cost and effective alternative to improve the respiratory status of preterm infants with respiratory distress syndrome [12,13], and decrease the need for conventional mechanical ventilators [12,14]. CPAP helps keep the respiratory tract and lungs open, promotes comfortable breathing, improves oxygen levels and decreases apnea in premature infants. Bubble CPAP (bCPAP) is the least expensive and least complicated CPAP option, making this the preferred technology in resource-limited settings [15,16]. To date, few studies have been conducted to show the impact and feasibility of bCPAP in areas with limited resources. These studies, most of which were conducted in teaching and/or urban hospitals, have shown that bCPAP can reduce the need for mechanical ventilation and can be applied by nurses after a short on-the-job training on the protocol and equipment [12,17]. However, little research has been done on the use of bCPAP in rural resourcelimited settings and hospitals without pediatric specialists. In January 2013, the Rwandan Ministry of Health (MOH), in collaboration with Partners In Health (PIH), introduced a bCPAP program integrated into broader neonatal care services for newborns with respiratory distress in three rural district hospitals (Butaro, Kirehe and Rwinkwavu District Hospitals). Nurses and general practitioners working in the neonatal units in these hospitals with a background in neonatal care services received intensive training on advanced neonatal care, focusing on the bCPAP protocol, safe assembly, maintenance and trouble-shooting of different issues related to bCPAP use. The training was supplemented by ongoing clinical mentorship and intermittent refresher trainings led by PIH and local MOH bCPAP champions. The objectives of this study are to describe the provider adherence to bCPAP protocol for preterm and very low birth weight (PT/VLBW) infants and to describe the outcomes of these infants at the three district hospitals. The ultimate goal is to better understand the use of bCPAP in rural resource-limited settings in order to improve the quality of bCPAP implementation and inform the scale-up of this technology in similar settings. Methods This retrospective cohort study included infants receiving care at neonatal units at Rwinkwavu, Kirehe and Butaro District Hospitals from February 1, 2013 to October 31, 2013. The catchment area included 865,000 people and care at the hospital was obtained after referral from one of the 41 health centers within the districts. These three hospitals were selected for the study as they were the only rural district hospitals providing basic neonatal care using bCPAP in Rwanda in 2013. A team of nurses and general practitioners worked permanently in these units providing care to an average of 25 infants every month in each hospital. Infants who needed intensive neonatal care, including mechanical ventilators, were referred to tertiary hospitals in Kigali city (the capital of Rwanda). Following the training on implementation of bCPAP, Rwinkwavu and Kirehe District Hospitals benefited from fairly consistent mentorship from PIH pediatric specialists during the study period while Butaro hospital had more intermittent specialist presence. Respiratory assessment to determine the need for bCPAP is based on physical examination (such as grunting, nasal flaring and chest retraction) and vital signs (including respiratory rate and/or oxygen saturation). 
In addition, the etiology of respiratory symptoms and the natural history of that diagnosis are considered. Once the overall assessment is complete, the degree of respiratory distress is categorized as mild, moderate or severe. Moderate to severe signs include moderate to severe grunting, flaring, retractions and respiratory rate >70 or <30 and/or oxygen saturation <90 % (The oxygen saturation was measured using pulse oximeter). Based on the bCPAP protocol used in the three district hospitals, any newborn with a moderate to severe respiratory distress should have been initiated on bCPAP (Fig. 1). Furthermore, preterm (gestational age (GA) <33 weeks) or very low birth weight (<1500 g) infants with any degree of respiratory distress (mild, moderate or severe) should have been initiated on bCPAP. Preterm infants with significant apnea and bradycardia of prematurity were also eligible. Our study population included all PT/VLBW infants admitted in neonatology units at the three hospitals. All term and near term infants (GA ≥33 weeks and/or birth weight ≥1500 g) were excluded as the severity of their respiratory distress was not captured in the patient charts and therefore eligibility for bCPAP could not be ascertained. For infants included in the study, we added a category of unknown to indicate missing data. The following information was extracted from the patient charts and registers in the neonatology and maternity unit: place of birth, birth weight, gestational age, respiratory rate, oxygen saturation, presence of physical signs of respiratory distress (grunting, chest retraction, nasal flaring), bCPAP recommendation and initiation, final disposition (recovered, referred or died) and presence of bCPAP complications (skin injury, pneumothorax, abdominal distention). We categorized PT/VLBW infants with at least one sign of respiratory distress as bCPAP eligible and those without any sign of respiratory distress as bCPAP ineligible. Data was extracted into a standard data collection form, and a file linking the study ID to the mother and neonate ID was kept separately during the data collection and destroyed after data validation. We analyzed data using Stata 12.1 (College Station, TX: StataCorp LP). We used descriptive statistics reporting number and percent of infant characteristics, infants identified as eligible for bCPAP, infants for whom bCPAP was initiated and clinical outcomes based on CPAP eligibility. We also used median and interquartile range for the duration of stay in the hospital. The study received technical and ethical approvals from Rwanda institutional review boards: The Inshuti Mu Buzima Research Committee (IMBRC), the National Health Research Committee (NHRC) and the Rwanda National Ethics Committee (RNEC). As the study used deidentified routinely corrected data, the consent for parents was waived. STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) guidelines were also followed for this study. Results During the study period, 862 infants were admitted in the three hospitals. Of these, 136 (16 %) were identified as PT/ VLBW and included in the analysis (Table 1). Of the 136 infants, 75.7 % (n = 103) were VLBW and 57.4 % (n = 78) were preterm. Most of the PT/VLBW infants (n = 117, 86 %) were born at a health facility, either hospital or health center. The median number of days of stay at the hospital was 19 with an interquartile range of 6-32 days. 
In assessing the presence of respiratory distress symptoms among PT/VLBW infants, 61.0 % (n = 83) showed at least one sign of respiratory distress ( Table 2). Many of the infants (50.7 %, n = 69) had low oxygen saturation (SpO 2 <90 %) and 38 infants (28.4 %) had chest retraction. In some cases, the clinicians only mentioned infants in respiratory distress without specifying the physical symptoms. One infant did not have documentation of the presence or absence of respiratory distress and thus, bCPAP eligibility could not be determined. Of the 135 PT/VLBW infants whose bCPAP eligibility could be determined, 61.5 % (n = 83) were bCPAP-eligible of which 59.0 % (n = 49) were correctly identified by health providers and for 51.8 % (n = 43) bCPAP was initiated. Twenty-three bCPAP-eligible infants (27.7 %) had no indication of being identified as bCPAP eligible or of being initiated on bCPAP. Information around identification was missing for 13.3 % (n = 11) of infants who were eligible (Table 3). For the 52 infants who were not bCPAP-eligible, 45 (86.5 %) were correctly identified as not bCPAP-eligible and 46 (88.5 %) did not receive bCPAP. Discussion In this study, we assessed the implementation of bCPAP with PT/VLBW infants at three district hospitals in rural Rwanda and found the intervention feasible in a resource-limited rural setting. Over the nine-month period, 45 infants were initiated on bCPAP, demonstrating that bCPAPan evidence-based intervention to improve survival or PT/VLBW infantsis filling a medical care need for neonates. However, only 52 % of bCPAP-eligible infants received bCPAP, suggesting ongoing gaps in correct identification and initiation of eligible infants. We suspect that this low sensitivity might be a result of turnover of nurses and doctors and could be improved with increased onsite mentorship and refresher trainings, particularly to identify early and mild signs of distress promptly for immediate CPAP initiation to gain the full benefit of the intervention. Qualitative research to assess and understand the barriers to implementation experienced by nurses and doctors is also advised. Conversely, 88.5 % of bCPAP ineligible infants were not initiated, indicating that clinicians are not exposing ineligible infants to possible bCPAP side effects and conserving the machines for the infants most in need. Only two of the bCPAP initiated infants were bCPAP ineligible according to medical file documentation, an improvement over a study in Malawi where of the 11 neonates treated with bCPAP, six did not meet initiation criteria [16]. A quarter of infants included in this study died before discharge from the hospital. This mortality rate is similar to outcomes of PT/VLBW infants in similar settings in sub-Saharan Africa [18][19][20]. The highest rate of death in this study, nearly 49 %, occurred in infants eligible for CPAP who died after initiation. Given the low sensitivity of CPAP initiation, we suspect that this group had a higher severity of respiratory distress and other comorbidities compared to infants who were not initiated on CPAP. We were unable to accurately assess the severity of respiratory distress among those who were eligible but not initiated on CPAP; however, we suspect that they were likely to be less severely ill. 
In addition, our study was conducted in rural hospitals without full-time pediatric specialists on staff; however, similarly high mortality rates among bCPAP initiated infants have been reported in studies conducted in teaching hospitals with more specialized staff [15][16][17]21]. There are several limitations to consider for this study. This study is based entirely on routinely collected data available in the patient file. While we cannot verify the accuracy of diagnosis, we believe the information provided by clinicians is reliable because of their clinical background and expertise. For some cases, however, there was limited documentation from clinicians especially on the severity of respiratory distress. Our study excluded term and near-term infants whose bCPAP eligibility depended on the severity of respiratory distress, which was difficult to capture in patients records. Furthermore, we were unable to assess the degree of distress among eligible infants whom were not provided bCPAP to assess for possible selection bias. In a few cases for the PT/VLBW infants, it was difficult to determine whether the infant was identified for bCPAP or initiated on bCPAP. To improve documentation and resulting quality improvement, we recommend the revision of the neonatology patient chart and onsite training/supervision. Despite these challenges, we believe these results are informative as they represent the first assessment of bCPAP implementation in rural Rwanda and thus provide a basis for informing better service delivery and bCPAP scale-up in similar settings. Conclusion To our knowledge, this is the first study of implementation of bCPAP in rural district hospitals in sub-Saharan Africa. We found that bCPAP is a feasible way to support infants with respiratory distress in resource-limited settings. While the introduction and use of bCPAP in this setting appears promising, there remain challenges in terms of guideline adherence. We believe that providing more intense mentorship and refresher trainings can improve guideline adherence, particularly given the high rates of staff turnover. We also recommend the adaption of clinical charts to facilitate clinical determination of degree of respiratory distress and consequent decision-making. Future qualitative and prospective research is needed to determine challenges encountered by clinicians in using bCPAP as well as delineate the reasons for high mortality among infants put on CPAP. Finally and critically, more research is needed to assess the impact of bCPAP on long-term survival and outcomes for PT/VLBW infants.
2017-06-20T20:27:54.407Z
2015-09-24T00:00:00.000
{ "year": 2015, "sha1": "e968639cb84718e367b151c952cde78a50ccd1c5", "oa_license": "CCBY", "oa_url": "https://bmcpediatr.biomedcentral.com/track/pdf/10.1186/s12887-015-0449-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c818679ca4719e994dd835c089f2f2d7e1f3b0ac", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
125499977
pes2o/s2orc
v3-fos-license
Optimal Consumption in the Stochastic Ramsey Problem without Boundedness Constraints This paper investigates optimal consumption in the stochastic Ramsey problem with the Cobb-Douglas production function. Contrary to prior studies, we allow for general consumption processes, without any a priori boundedness constraint. A non-standard stochastic differential equation, with neither Lipschitz continuity nor linear growth, specifies the dynamics of the controlled state process. A mixture of probabilistic arguments are used to construct the state process, and establish its non-explosiveness and strict positivity. This leads to the optimality of a feedback consumption process, defined in terms of the value function and the state process. Based on additional viscosity solutions techniques, we characterize the value function as the unique classical solution to a nonlinear elliptic equation, among an appropriate class of functions. This characterization involves a condition on the limiting behavior of the value function at the origin, which is the key to dealing with unbounded consumptions. Finally, relaxing the boundedness constraint is shown to increase, strictly, the expected utility at all wealth levels. Introduction In the economic growth theory, capital stock of a society amounts to the total value of assets that can be used to produce goods and services, such as factories, equipment, and monetary resources. Whereas capital can be consumed to give individuals immediate welfare, it can also be used to generate more capital and thus sustain economic growth, which enhances future welfare. As Ramsey [13] pointed out in a deterministic model, sensible financial planning, regarding consumption and saving of capital, is imperative to strike a balance between current and future welfare. In a continuous-time setting, Merton [7] enriched the problem by considering stochastic evolution of the population in a society. The stochastic Ramsey problem, coined by Merton [7], has been investigated in the stochastic control literature through viscosity solution techniques, Banach's fixed-point argument, and the combination of both; see e.g. Morimoto and Zhou [10], Morimoto [8,9], and Liu [6], among others. to accommodate both unbounded consumptions and infinite horizon; see Remark 5.1 for details. In fact, the construction in Proposition 5.1 can be made much more general. For any u ∈ C 1 ((0, ∞)) that is strictly increasing, concave, and whose behavior at 0+ satisfies (5.12) below, we can construct from u a candidate optimal consumptionĉ u , and show that the state process Xĉ u is well-defined and strictly positive; see Corollary 5.1 and (5.15). With the aid of a verification argument, this leads to the full characterization: V is the unique classical solution to a nonlinear elliptic equation among the class of functions u ∈ C 2 ((0, ∞))∩C([0, ∞)) that are strictly increasing, concave, satisfying (5.12) and the linear growth condition; see Theorem 5.1. In [10], where consumptions are uniformly bounded, the value function is only shown to be a classical solution, with no further characterization. Theorem 5.1 fills this void, in a more general setting with unbounded consumptions; see Remark 5.3. Specifically, the identification of (5.12) in Theorem 5.1 is the key to dealing with unbounded consumptions. If one is restricted to bounded consumptions (as in [10]), there is no need to impose (5.12); see Remark 5.2. 
Finally, we compare our no-constraint optimal consumption ĉ with the optimal ĉ^L in [10], bounded by L > 0. Two questions are particularly of interest. First, by switching from the bounded strategy ĉ^L to the possibly unbounded ĉ, can we truly increase our expected utility? An affirmative answer is provided in Proposition 6.1: expected utility rises at all levels of wealth (capital per capita), whenever ĉ is truly unbounded. This justifies economically the use of unbounded strategies. Second, for each L > 0, do agents following ĉ^L simply chop the no-constraint optimal strategy ĉ at the bound L > 0? Corollary 6.1 shows that the relation "ĉ^L = ĉ ∧ L" fails in general, suggesting a more structural change from ĉ^L to ĉ. For the isoelastic utility function U(x) = x^{1−γ}/(1−γ), 0 < γ < 1, we demonstrate the above two results fairly explicitly.

The paper is organized as follows. Section 2 introduces the stochastic Ramsey problem with general unbounded consumptions. Section 3 investigates the existence and uniqueness of the state process X, and derives moment estimates of it. Section 4 shows that the value function V is a classical solution to a nonlinear elliptic equation. Section 5 finds an optimal consumption ĉ, and establishes a full characterization of V. Section 6 compares our results with previous literature with bounded consumptions. Appendix A generalizes arguments in [10] to infinite horizon.

The Model

Consider the canonical space Ω := {ω ∈ C([0, ∞); R) | ω₀ = 0} of continuous paths starting with value 0. Let W be the canonical process on Ω, P be the Wiener measure, and F = {F_t}_{t≥0} be the P-augmentation of the natural filtration generated by W. Given t > 0 and ω ∈ Ω, for any ω̃ ∈ Ω, we define the concatenation of ω and ω̃ at time t as

(ω ⊗_t ω̃)_r := ω_r 1_{[0,t]}(r) + (ω̃_{r−t} + ω_t) 1_{(t,∞)}(r), r ≥ 0. (2.1)

Note that ω ⊗_t ω̃ again belongs to Ω.

Consider a society in which the labor supply is equal to total population. The capital stock K of the society accumulates from economic output, generated by the capital itself and the labor force. At the same time, K may decrease due to capital depreciation and consumption from the population. Specifically, we assume that K follows the dynamics

dK_t = (F(K_t, Y_t) − λK_t − c_t K_t) dt.

Here, F : [0, ∞) × [0, ∞) → [0, ∞) is a production function, Y is the labor supply process, λ ≥ 0 is the constant rate of depreciation, and c is the consumption rate process chosen by the population. Throughout this paper, we take F to be the Cobb-Douglas form, i.e.

F(k, y) = k^α y^{1−α}, with 0 < α < 1,

and assume that the labor supply follows the geometric dynamics

dY_t = n Y_t dt + σ Y_t dW_t,

where n ∈ R and σ > 0 are two given constants. In addition, we consider general consumption processes c without any a priori boundedness condition, as opposed to most previous studies in the literature. Specifically, the set C of admissible consumption processes is taken as

C := {c : c is F-progressively measurable, c_t ≥ 0 for all t ≥ 0, and ∫₀ᵗ c_s ds < ∞ a.s. for all t ≥ 0}. (2.3)

At each time t ≥ 0, every individual is allotted the capital K_t/Y_t, which can be consumed immediately or saved for future production. An individual is then faced with an optimal consumption problem: he/she intends to choose an appropriate consumption process ĉ ∈ C, so that the expected discounted utility from consumption can be maximized. Specifically, the corresponding value function is given by

sup_{c∈C} E[ ∫₀^∞ e^{−βt} U(c_t K_t/Y_t) dt ], (2.4)

where β ≥ 0 is the discount rate and U : [0, ∞) → R is a utility function. We will assume that

U is strictly increasing and strictly concave, (2.5)

U′(0+) = ∞ and U′(∞) = 0. (2.6)

The dimension of the problem can be reduced, by introducing the variable x := k/y and the process X_t := K_t/Y_t, i.e. the capital per capita process.
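To make this reduction concrete, the following worked computation (a sketch, using the Cobb-Douglas production and labor dynamics as reconstructed above) applies Itô's formula to X = K/Y:

```latex
% Sketch of the dimension reduction X = K/Y, under the reconstructed dynamics
% dK_t = (K_t^\alpha Y_t^{1-\alpha} - \lambda K_t - c_t K_t)\,dt  and
% dY_t = n Y_t\,dt + \sigma Y_t\,dW_t.
\begin{align*}
dX_t &= \frac{dK_t}{Y_t} - \frac{K_t}{Y_t^2}\,dY_t + \frac{K_t}{Y_t^3}\,d\langle Y\rangle_t
      && \text{(It\^o; $K$ has no diffusion part)}\\
     &= \bigl(X_t^\alpha - (\lambda + c_t)X_t\bigr)\,dt
        - X_t\,(n\,dt + \sigma\,dW_t) + \sigma^2 X_t\,dt\\
     &= \bigl(X_t^\alpha - (\mu + c_t)X_t\bigr)\,dt - \sigma X_t\,dW_t,
      && \mu := \lambda + n - \sigma^2 .
\end{align*}
```

This is consistent with the coefficient b_t = −(1 − α)(μ + c_t + ½σ²α) appearing in the proof of Proposition 3.1 below.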
Specifically, the value function in (2.4) can be re-written as

V(x) := sup_{c∈C} E[ ∫₀^∞ e^{−βt} U(c_t X_t) dt ], x > 0, (2.7)

where the process X satisfies, thanks to Itô's formula,

dX_t = (X_t^α − (μ + c_t) X_t) dt − σ X_t dW_t, X₀ = x, (2.8)

with μ := λ + n − σ². As in [10], we will assume throughout the paper that

μ > 0. (2.9)

The goal of this paper is to provide characterizations for the value function V in (2.7), as well as the associated optimal consumption process ĉ.

The Capital per Capita Process

In this section, we analyze the capital per capita process X, formulated as the stochastic differential equation (SDE) (2.8). We will investigate the existence and uniqueness of solutions to (2.8), and derive several moment estimates for X, useful in Sections 4 and 5 for characterizing V in (2.7). The SDE (2.8) is non-standard: the drift coefficient is neither Lipschitz nor of linear growth. Indeed, Lipschitz continuity fails due to the term X_t^α, and the unboundedness of c may lead to superlinear growth. Consequently, standard techniques to establish existence and uniqueness of solutions (requiring both "Lipschitz" and "linear growth") and to derive moment estimates (requiring "linear growth") cannot be applied here.

Remark 3.1. In [10], (2.8) is studied in a simpler setting, where c is assumed to be uniformly bounded (in fact, c_t ≤ 1 for all t ≥ 0). This ensures linear growth of the drift coefficient of (2.8), such that some standard techniques and estimates can still be used.

Without the aid of standard results, we investigate existence and uniqueness of solutions to (2.8), by constructing solutions directly. As shown in Proposition 3.1 and Corollary 3.1 below, existence can be established in general, yet uniqueness need not always hold.

Proposition 3.1. For any c ∈ C and x > 0, there exists a unique strong solution to (2.8), which is strictly positive a.s.

Proof. Consider Z_t := X_t^{1−α}. Since the function f(y) := y^{1−α} is well-defined on [0, ∞) and differentiable on (0, ∞), we can apply Itô's formula to Z only up to the stopping time τ := inf{t ≥ 0 : X_t = 0}. This gives the dynamics of Z up to time τ:

dZ_t = ((1 − α) + b_t Z_t) dt − σ(1 − α) Z_t dW_t, Z₀ = z := x^{1−α}. (3.1)

We claim that this SDE admits a unique strong solution. For simplicity, let a := 1 − α and b_t := −(1 − α)(μ + c_t + ½σ²α), and define

G_t := exp( −∫₀ᵗ b_s ds + ½σ²a² t + σa W_t ), t ≥ 0. (3.2)

Note that G is well-defined a.s. thanks to c ∈ C; recall (2.3). By definition, G satisfies the dynamics dG_t = (−b_t + σ²a²)G_t dt + σa G_t dW_t, for all t > 0. By applying Itô's formula to the product process GZ up to time τ, we get d(G_t Z_t) = a G_t dt. This implies that

Z_t = ( z + (1 − α)∫₀ᵗ G_s ds ) / G_t, t ≥ 0, (3.3)

is the unique strong solution to (3.1), given that Z₀ = z. Now, in view of (3.3) and G_t > 0 for all t ≥ 0, we conclude that Z_t > 0 for all t ≥ 0 a.s., and thus τ = ∞ a.s.

With τ = ∞ a.s., the construction in the proof above implies that the process X_t = Z_t^{1/(1−α)}, t ≥ 0, with Z given by (3.3), is the unique strong solution to (2.8), and it is strictly positive a.s. For the case x = 0 in (2.8), uniqueness of solutions fails.

Corollary 3.1. For x = 0,

X̄ ≡ 0 and X_t := ( (1 − α)∫₀ᵗ G_s ds / G_t )^{1/(1−α)}, t ≥ 0,

are two distinct strong solutions to (2.8). Here, G is defined as in (3.2).

Proof. Since X̄ ≡ 0 trivially solves (2.8), we focus on showing that X is a strong solution to (2.8). First, since G₀ = 1 ≠ 0, X is continuous at t = 0, i.e. lim_{t↓0} X_t = 0 = X₀. Now, consider the SDE (3.1), with Z₀ = 0. Due to the term (1 − α)dt, Z will immediately go up from 0, such that τ′ := inf{t > 0 : Z_t = 0} > 0. We can then apply Itô's formula to the process GZ over the interval (0, τ′). Similarly to the proof of Proposition 3.1, we find that Z_t = ((1 − α)/G_t) ∫₀ᵗ G_s ds is the unique strong solution to (3.1) up to time τ′, given that Z₀ = 0. But the formula of Z entails Z_t > 0 for all t > 0 a.s., and thus τ′ = ∞ a.s.
Observe that X_t = (Z_t)^{1/(1−α)} for all t ≥ 0. With τ′ = ∞ a.s., we can apply Itô's formula to X over (0, ∞), which shows that it is a strong solution to (2.8).

Classical moment estimates of SDEs rely on linear growth of coefficients, along with an application of Gronwall's lemma; see e.g. Krylov [5, Chapter 2], especially Corollary 2.5.12. As mentioned before, the drift coefficient of (2.8) does not necessarily have linear growth, unless c is a priori known to be a bounded process (as in [10]). The explicit formula of X via (3.3) turns out to be handy here. Detailed analysis of such a formula yields desirable moment estimates, without requiring any linear growth condition.

Proposition 3.2. Let η := 1/(1 − α). Given c ∈ C, the unique strong solution X of (2.8) satisfies

E[X_t^x] ≤ 2^{η−1}(x + t^η) and E[(X_t^x)²] ≤ 2^{2η−1}(x² + t^{2η}) e^{σ²t}, for all t ≥ 0. (3.4)

Moreover, for any ε > 0, there exists C_ε > 0 such that

E[|X_t^x − X_t^y|] ≤ ε(x + y + 2t^η) + C_ε|x − y|, for all t ≥ 0 and x, y > 0. (3.5)

Proof. Fix c ∈ C and x > 0. Consider Z_t := (X_t^x)^{1−α}. Then, as shown in the proof of Proposition 3.1, Z satisfies (3.1), which can be solved to get the formula (3.3). It follows that

X_t^x = Z_t^η ≤ 2^{η−1}( x G_t^{−η} + ( (1 − α)∫₀ᵗ G_s ds / G_t )^η ), (3.6)

where the inequality follows from (u + v)^k ≤ 2^{k−1}(u^k + v^k) for u, v ≥ 0 and k > 1. Observe from (3.2) that

G_t = exp( (1 − α)[ ∫₀ᵗ (μ + c_s) ds + ½σ²t ] + σ(1 − α) W_t ). (3.7)

This, together with c_t ≥ 0 and μ > 0, implies that

E[G_t^{−η}] ≤ E[ exp(−½σ²t − σW_t) ] = 1. (3.8)

Then, observe that

(1 − α)∫₀ᵗ G_s ds / G_t = (1 − α)∫₀ᵗ G_{s,t}^{−1} ds, with G_{s,t} := G_t/G_s. (3.9)

By applying Jensen's inequality to (∫₀ᵗ G_{s,t}^{−1} ds)^η, we deduce from the above equality that

E[ ( (1 − α)∫₀ᵗ G_s ds / G_t )^η ] ≤ (1 − α)^η t^{η−1} ∫₀ᵗ E[G_{s,t}^{−η}] ds ≤ t^η, (3.10)

where the last inequality follows from E[G_{s,t}^{−η}] ≤ 1, which can be proved as in (3.8). Now, by (3.8) and (3.10), we conclude from (3.6) that E[X_t^x] ≤ 2^{η−1}(x + t^η), as desired. To prove the second part of (3.4), we replace η by 2η in the above arguments. First, (3.8) becomes

E[G_t^{−2η}] ≤ e^{σ²t}. (3.11)

Then, (3.10) becomes

E[ ( (1 − α)∫₀ᵗ G_s ds / G_t )^{2η} ] ≤ t^{2η−1} ∫₀ᵗ E[G_{s,t}^{−2η}] ds ≤ t^{2η} e^{σ²t}, (3.12)

where the first inequality follows from applying Jensen's inequality to (∫₀ᵗ G_{s,t}^{−1} ds)^{2η} and the second inequality is due to E[G_{s,t}^{−2η}] ≤ e^{σ²(t−s)}, which can be proved as in (3.11). Finally, using the same calculation in (3.6) with η replaced by 2η, along with (3.11) and (3.12), we conclude that E[(X_t^x)²] ≤ 2^{2η−1}(x² + t^{2η}) e^{σ²t}, as desired.

To prove (3.5), consider the process Z defined above, as well as Z̃_t := (X_t^y)^{1−α}. As above, Z and Z̃ take the form (3.3), with initial values z = x^{1−α} and z̃ = y^{1−α}, respectively. Thus, by (3.8),

E[|Z_t − Z̃_t|] = |z − z̃| E[G_t^{−1}] ≤ |x^{1−α} − y^{1−α}| ≤ |x − y|^{1−α}, (3.13)

where the last inequality follows from the observation |u^r − v^r| ≤ |u − v|^r for any u, v ≥ 0 and 0 < r < 1. Indeed, we may assume without loss of generality that u ≥ v and define λ := u/v ≥ 1. Thus, the observation is equivalent to λ^r − 1 ≤ (λ − 1)^r for any λ ≥ 1 and 0 < r < 1. The latter is true because f(λ) := (λ − 1)^r − λ^r + 1 satisfies f(1) = 0 and f′(λ) = r[ (1/(λ−1))^{1−r} − (1/λ)^{1−r} ] > 0 for all λ > 1. Next, for any a, b ≥ 0 and ε > 0, observe that

|a^η − b^η| ≤ η(a ∨ b)^{η−1}|a − b| ≤ ε·2^{η−1}(a^η + b^η) + C_ε|a − b|^η, (3.14)

where the second step follows from Young's inequality with p = η and q = η/(η−1), and the third step is due to (u + v)^k ≤ 2^{k−1}(u^k + v^k) for u, v ≥ 0 and k > 1. Now, for any ε > 0,

E[|X_t^x − X_t^y|] = E[|Z_t^η − Z̃_t^η|] ≤ ε·2^{η−1} E[Z_t^η + Z̃_t^η] + C_ε E[|Z_t − Z̃_t|^η] ≤ ε·2^{2(η−1)}(x + y + 2t^η) + C_ε|x − y|,

where the first inequality follows from (3.14) and (3.13), and the second inequality is due to the first part of (3.4). Now, in the last line of the previous inequality, replacing ε·2^{2(η−1)} by a new ε′ > 0 (and adjusting C_ε accordingly) yields (3.5).

Properties of the Value Function

In this section, we introduce, for each L > 0, the auxiliary value function

V_L(x) := sup_{c∈C_L} E[ ∫₀^∞ e^{−βt} U(c_t X_t) dt ], (4.1)

where C_L := {c ∈ C : c_t ≤ L for all t ≥ 0}. (4.2)

We will first derive useful properties of V_L. As L → ∞, we will see that V_L converges desirably to V in (2.7), so that V inherits many properties of V_L. Morimoto and Zhou [10] studied a similar problem to V_L: they took L = 1 and the time horizon to be finite in (4.1). Extending their arguments to infinite horizon gives properties of V_L as below.

Proposition 4.1. For any L > 0: (i) V_L is nonnegative, nondecreasing, concave, and satisfies 0 ≤ V_L(x) ≤ x + ϕ₀ for all x > 0, where ϕ₀ > 0 is independent of L and x; (ii) V_L is a continuous viscosity solution to

βv(x) = (x^α − μx) v′(x) + ½σ²x² v″(x) + Ũ_L(x, v′(x)), x > 0, (4.3)

where Ũ_L(x, p) := sup_{0≤c≤L} {U(cx) − cx·p}.

The proof of Proposition 4.1 is relegated to Appendix A, where arguments in [10] are extended to infinite horizon.
While this extension can mostly be done in a straightforward way, there are technicalities that require detailed, nontrivial analysis. This includes, particularly, the derivation of the dynamic programming principle for V_L; see Lemma A.2 for details. Given that {V_L}_{L>0} is by definition a nondecreasing sequence of functions, we define

V_∞(x) := lim_{L→∞} V_L(x), x > 0.

Remark 4.1. V_∞ immediately inherits many properties from the V_L's. (i) Thanks to Proposition 4.1, V_∞ is concave, nondecreasing, and satisfies 0 ≤ V_∞(x) ≤ x + ϕ₀ for all x > 0. (ii) The concavity of V_∞ implies that it is continuous on (0, ∞). Hence, by Dini's theorem, V_L converges uniformly to V_∞ on any compact subset of (0, ∞).

Consider the nonlinear elliptic equation

βv(x) = (x^α − μx) v′(x) + ½σ²x² v″(x) + Ũ(v′(x)), x > 0, (4.6)

where Ũ(p) := sup_{y≥0} {U(y) − py} for p > 0.

Lemma 4.1. V_∞ is a viscosity solution to (4.6).

Proof. By (2.5) and (2.6), for any p > 0, there exists a unique maximizer y*(p) > 0 such that Ũ(p) = U(y*(p)) − p·y*(p); similarly, Ũ_L(x, p) = U(y*(p) ∧ Lx) − p·(y*(p) ∧ Lx). From these forms of Ũ and Ũ_L, we see that Ũ_L converges uniformly to Ũ on any compact subset of (0, ∞)². This, together with Remark 4.1 (ii), implies that we can invoke the stability result of viscosity solutions (see e.g. [9, Theorem 4.5.1]). We then conclude from the stability and Proposition 4.1 (ii) that V_∞ is a viscosity solution to (4.6).

In fact, the convergence of V_L to V_∞ is highly desirable. As the next result demonstrates, not only V_L but also V′_L and V″_L converge uniformly. This readily implies smoothness of the limiting function V_∞.

Proposition 4.2. V′_L and V″_L converge uniformly, up to a subsequence, on any compact subset of (0, ∞). Hence, V′_L(x) → V′_∞(x) and V″_L(x) → V″_∞(x), up to a subsequence, for each x > 0. Furthermore, V_∞ is a classical solution to (4.6).

Proof. Fix a compact subset E of (0, ∞). Let a := inf E > 0 and b := sup E. For any L > 0, since V_L is nonnegative, nondecreasing, concave, and bounded above by x + ϕ₀ (Proposition 4.1), the family {V′_L}_{L>0} is uniformly bounded on E: for x ∈ E, 0 ≤ V′_L(x) ≤ (V_L(x) − V_L(x − a/2))/(a/2) ≤ (b + ϕ₀)/(a/2), where the second and the third inequalities follow from the concavity and the bounds on V_L. By the uniform boundedness on E of V′_L (and of V″_L, obtained from (4.7)) and the Arzelà–Ascoli theorem, V′_L converges uniformly, up to some subsequence, on E. With V_L, V′_L, and Ũ_L all converging uniformly on E (recall from the proof of Lemma 4.1 that Ũ_L converges uniformly to Ũ), (4.7) implies that V″_L also converges uniformly on E.

Theorem 4.1. The value function V in (2.7) coincides with V_∞. In particular, V is nonnegative, nondecreasing, concave, satisfies the growth bound (4.5), and is a classical solution to (4.6).

Proof. Since V_∞ is nonnegative, concave, and nondecreasing (Remark 4.1), we have V_∞(x) ≤ x + ϕ₀. Then, for any T > 0 and c ∈ C,

E[e^{−βT} V_∞(X_T^x)] ≤ e^{−βT}( E[X_T^x] + ϕ₀ ) < ∞,

where the second line follows from Remark 4.1 (i) and the finiteness is due to (3.4). It follows that

lim_{T→∞} E[e^{−βT} V_∞(X_T^x)] = 0, for any c ∈ C.

Now, fix c ∈ C. By using Itô's formula, for any T > 0,

V_∞(x) ≥ E[ ∫₀ᵀ e^{−βt} U(c_t X_t^x) dt + e^{−βT} V_∞(X_T^x) ], (4.8)

where the inequality follows from V_∞ satisfying (4.6) (Proposition 4.2). As T → ∞, we deduce from Remark 4.1 (i) and (3.4) that V_∞(x) ≥ E[∫₀^∞ e^{−βt} U(c_t X_t^x) dt]. We therefore conclude that V(x) = V_∞(x). The remaining assertions follow from Remark 4.1 (i), Remark 4.2, and Proposition 4.2.

While Theorem 4.1 associates V with the nonlinear elliptic equation (4.6), this is not a full characterization of V, as there may be multiple solutions to (4.6). To further characterize V as the unique classical solution to (4.6) among a certain class of functions, the standard approach is to stipulate an optimal control of feedback form, by which one can complete the verification argument; note that the proof of Theorem 4.1 amounts to the first half of the verification argument. As detailed in Section 5 below, although the form of a candidate optimal consumption process ĉ can be readily read out from the equation (4.6), it is highly nontrivial whether ĉ is a well-defined stochastic process, due to the unboundedness of ĉ. This entails additional analysis of the value function V and the capital per capita process X, as we will now introduce.
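Before moving on, a small numerical illustration of Section 3 may be helpful. The sketch below (Python) simulates X through the closed-form construction (3.3) and spot-checks the first moment bound in (3.4) by Monte Carlo; the constant consumption c ≡ 1 and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper)
alpha, mu, sigma = 0.5, 0.05, 0.2
x0, T, n_steps, n_paths = 1.0, 5.0, 1000, 20000
dt = T / n_steps
eta = 1.0 / (1.0 - alpha)

rng = np.random.default_rng(0)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
t = dt * np.arange(1, n_steps + 1)

c = 1.0  # constant admissible consumption rate, c_t = 1

# G_t = exp((1-alpha)*[(mu + c + sigma^2/2)*t] + sigma*(1-alpha)*W_t), G_0 = 1
G = np.exp((1 - alpha) * ((mu + c + 0.5 * sigma**2) * t + sigma * W))
G = np.concatenate([np.ones((n_paths, 1)), G], axis=1)

# Z_t = (x0^{1-alpha} + (1-alpha) * int_0^t G_s ds) / G_t, cf. (3.3)
int_G = np.concatenate(
    [np.zeros((n_paths, 1)), np.cumsum(G[:, :-1] * dt, axis=1)], axis=1
)
Z = (x0 ** (1 - alpha) + (1 - alpha) * int_G) / G
X = Z ** eta  # X_t = Z_t^{1/(1-alpha)}, strictly positive pathwise

# Check the first part of the moment bound (3.4): E[X_t] <= 2^{eta-1}(x0 + t^eta)
tt = np.concatenate([[0.0], t])
assert np.all(X > 0)
assert np.all(X.mean(axis=0) <= 2 ** (eta - 1) * (x0 + tt**eta) + 1e-8)
print("E[X_T] =", X.mean(axis=0)[-1], " bound =", 2 ** (eta - 1) * (x0 + T**eta))
```

Since (3.3) is exact, the only approximation here is the rectangle-rule evaluation of the pathwise integral of G.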
Optimal Consumption

In view of (4.6), one can heuristically stipulate the form of an optimal consumption process as

ĉ_t := I(V′(X_t)) / X_t, with I := (U′)^{−1}, (5.1)

where X is the solution to the SDE (2.8) with c_t replaced by ĉ_t, i.e. the solution to

dX_t = (X_t^α − μX_t − I(V′(X_t))) dt − σX_t dW_t, X₀ = x. (5.2)

For ĉ in (5.1) to be well-defined, two questions naturally arise. First, it is unclear whether (5.2) admits a solution: Proposition 3.1 is an existence result for (2.8), specifically when c is an a priori given process, without X_t involved. Second, even if a solution X to (5.2) exists, it is in question whether X is strictly positive, so that one does not need to worry about the problematic case "X_t = 0" in (5.1). For (5.2) to admit a solution, we first observe that it is necessary to have V′(0+) = ∞. Indeed, if c̄ := V′(0+) < ∞, when X is close enough to zero, the drift coefficient of (5.2) will approach the constant −(U′)^{−1}(c̄) < 0, while the diffusion coefficient will tend to zero. This will eventually bring X down to zero. When this happens, the drift and the diffusion coefficients will be precisely −(U′)^{−1}(c̄) < 0 and 0 respectively, which will move X further to take negative values. The drift coefficient of (5.2), however, is not well-defined for negative values of X_t. A solution to (5.2) therefore cannot exist when V′(0+) < ∞.

The next result analyzes the behavior of V as x ↓ 0, and particularly establishes V′(0+) = ∞.

Lemma 5.1. The function V defined in (2.7) satisfies the following: (i) V(0+) > 0. (ii) Assume U ∈ C²((0, ∞)). As x ↓ 0, V′ explodes and is of the order of x^{−α}. Specifically, lim_{x↓0} x^α V′(x) = βV(0+) > 0.

Proof. (i) Consider c̄ ∈ C with c̄ ≡ 1. For any x > 0, in view of (3.3), the corresponding capital per capita process X_t^x is given explicitly, where G_t is given as in (3.7) with c_t replaced by the constant 1. Then, by the definition of V, we obtain a lower bound for V(x) that remains strictly positive as x ↓ 0, where G_{s,t} is given as in (3.9) with c_t replaced by the constant 1.

(ii) Since each of these functions is continuously differentiable, we have V ∈ C³((0, ∞)). By using L'Hôpital's rule, we obtain lim_{x↓0} xV″(x) = 0; the same argument in turn leads to lim_{x↓0} x²V‴(x) = 0. Now, by differentiating both sides of (5.4) and multiplying them by x^{1−α}, we get (5.6), where the last term is obtained by noting that U′ ∘ I is the identity map. As x ↓ 0 in (5.6), we get a contradiction, by noting that αc̄ > 0 and the limit above is nonnegative (as I is a positive function and V is concave). We therefore conclude that V′(0+) = ∞. Now, since V satisfies (4.5) (Theorem 4.1), we have lim sup_{x↓0} xV′(x) < ∞. Take an arbitrary sequence {x_n}_{n∈N} such that x_n ↓ 0 and x_n V′(x_n) converges as n → ∞. Let ℓ := lim_{n→∞} x_n V′(x_n) < ∞. Similarly to (5.5), we obtain a relation which yields lim_{n→∞} x_n² V″(x_n) = −ℓ. Recalling that V is a classical solution to (4.6), we may evaluate (4.6) along {x_n}. As n → ∞, since V′(0+) = ∞ implies Ũ(V′(x_n)) → 0, we obtain

βV(0+) = lim_{n→∞} x_n^α V′(x_n) − (μ + ½σ²) ℓ.

If ℓ > 0, then lim_{n→∞} x_n^α V′(x_n) = ℓ · lim_{n→∞} x_n^{α−1} = ∞, which would violate the above equality. Thus, ℓ = 0 must hold. Since {x_n}_{n∈N} above is arbitrarily chosen, we conclude that lim_{x↓0} x^α V′(x) = βV(0+) > 0, where the inequality follows from (i).

On the strength of Lemma 5.1, we are ready to present the existence result for (5.2).

Proposition 5.2. For any x > 0, there exists a unique strong solution to (5.2), which is strictly positive a.s.

Step 2: Show that pathwise uniqueness holds for (5.2). Let x* > 0 be the unique maximizer of sup_{x≥0} {x^α − μx}. Observe that x ↦ x^α − μx is strictly increasing on (0, x*) and strictly decreasing on (x*, ∞). Also, the concavity of V (Theorem 4.1) implies that V′ is nonincreasing.
Since U is strictly concave, U′ is strictly decreasing, and so is I = (U′)^{−1}. Besides the weak solution X in Step 1, let X̃ be another weak solution to (5.2), with (Ω, F, P), W, and the initial value x > 0 all the same as those of X. By the same argument in Step 1, X̃ takes values in (0, ∞) a.s. For each N ∈ N, consider τ_N := inf{t ≥ 0 : X̃_t ≤ 1/N}.

Corollary 5.1. Let u ∈ C¹((0, ∞)) be strictly increasing and concave, and suppose its behavior at 0+ satisfies (5.12). Then, for any x > 0, there exists a unique strong solution to

dX_t = (X_t^α − μX_t − I(u′(X_t))) dt − σX_t dW_t, X₀ = x, (5.13)

which is strictly positive a.s.

Proof. The result can be established by following the proof of Proposition 5.2, with V replaced by u. Specifically, Step 1 in the proof can be carried out thanks to u′(x) > 0 and (5.12), while Step 2 relies on the concavity of u.

Let U denote the class of functions u ∈ C²((0, ∞)) ∩ C([0, ∞)) that are nonnegative, strictly increasing, concave, satisfying (5.12) and the following linear growth condition: there exists C > 0 such that

u(x) ≤ C(1 + x), for all x ≥ 0. (5.14)

Now, we are ready to present the main result of this paper.

Theorem 5.1. Assume U ∈ C²((0, ∞)). The function V defined in (2.7) is the unique classical solution to (4.6) among functions in U. Moreover, ĉ ∈ C defined by (5.1), with X being the unique strong solution to (5.2), is an optimal consumption process for (2.7).

Proof. We know from Theorem 4.1 and Lemma 5.1 that V ∈ U and it solves (4.6) in the classical sense. By following the arguments in Theorem 4.1, with V_∞ and c therein replaced by V and ĉ, we note that the inequality in (4.8) now becomes equality, leading to V(x) = E[∫₀^∞ e^{−βt} U(ĉ_t X_t^x) dt] for all x > 0. This readily shows that ĉ ∈ C is an optimal consumption process for (2.7). For any u ∈ U that solves (4.6) in the classical sense, we can again follow the arguments in Theorem 4.1 to show that u ≥ V. On the other hand, consider the consumption process

ĉ_t^u := I(u′(X_t)) / X_t, for x > 0, (5.15)

where X is the unique strong solution to (5.13), whose existence is guaranteed by Corollary 5.1. Now, in (4.8), if we replace V_∞ and c therein by u and ĉ^u, the inequality becomes equality, leading to u(x) = E[∫₀^∞ e^{−βt} U(ĉ_t^u X_t^x) dt] ≤ V(x) for all x > 0. Thus, we conclude that u = V.

Remark 5.2. In the characterization of V in Theorem 5.1, condition (5.12) is the key to dealing with unbounded consumptions (recall that (5.12) is part of the definition of U). If we restrict ourselves to C_L in (4.2) for some L > 0 (as in [10]), there is no need to impose (5.12). That is, (5.12) requires the optimal consumption to be dominated by x^{1−α} as x ↓ 0; when we are restricted to C_L, this requirement holds trivially, thanks to the bound L > 0 for each c ∈ C_L. Thus, for V_L defined in (4.1), the same arguments in Proposition 5.1, Corollary 5.1, and Theorem 5.1 can be carried out, without the need to impose (5.12). This leads to the characterization: V_L is the unique classical solution to (4.3) among the class of functions u ∈ C²((0, ∞)) ∩ C([0, ∞)) that are nonnegative, strictly increasing, concave, and satisfying (5.14).

Remark 5.3. In [10], one is restricted to C_L in (4.2). The main results, [10, Theorems 4.2 and 6.2], only show that the value function V_L is a classical solution and that a feedback optimal consumption exists; there is no further characterization of V_L. At the end of [10], the authors very briefly mention, without a proof, that V_L is the unique solution. However, the class of functions among which V_L is unique, the key ingredient of any PDE characterization, is missing. Theorem 5.1, along with the resulting characterization of V_L in Remark 5.2, fills this void. We will demonstrate the use of Theorem 5.1 explicitly in Proposition 6.3 below.

Comparison with Morimoto [8,9]

To the best of our knowledge, Morimoto [8,9] are the only prior works that consider unbounded consumptions in the stochastic Ramsey problem.
Our studies complement [8,9] in two ways. First, [8,9] require the production function F(k, y) to satisfy F_k(0+, y) < ∞ for all y > 0. This provides technical conveniences: (i) The drift coefficient of the capital per capita process is Lipschitz (see e.g. (11) and (12) in [8]), such that the SDE has uniqueness of solutions even when the initial condition is 0. The value function V is thus well-defined at x = 0, with V(0) = 0. (ii) The continuity of V at x = 0 is ensured, with V(0+) = V(0) = 0, which leads to a short simple proof of V′(0+) = ∞ (see the last two lines in the proof of [8, Theorem 4]). Second, with unbounded consumptions considered, the framework in [8,9], like ours, suffers from the potential issue that the solution X to (5.2) may reach 0 in finite time. The author of [8,9] does not analyze whether or not, or how likely, X will reach 0 in finite time, but simply restricts the Ramsey problem to the random horizon [0, τ_X], where τ_X is the first time X reaches 0. However, it is hard to imagine that in practice individuals would allow X, the capital per capita, to reach 0, and enjoy no consumption at all afterwards (this is, nonetheless, what [8, (36)] prescribes). In a reasonable economic model, an optimal consumption process should by itself prevent X from reaching 0, so that there is no need to artificially introduce τ_X. In this aspect, our paper complements [8,9], by providing a framework in which τ_X = ∞ is ensured under optimal consumption behavior.

Comparison with Bounded Consumption in [10]

For each L > 0, one can solve the problem (4.1) by modifying the arguments in [10], with an optimal consumption process ĉ^L given in feedback form by (6.1), where X is the unique strong solution to (2.8) with c_t replaced by ĉ_t^L. Two questions are particularly of interest here. First, by switching from the bounded strategy ĉ^L, however large L > 0 may be, to the possibly unbounded ĉ in (5.1), can we truly raise our expected utility? An affirmative answer will be provided below, which justifies economically the use of unbounded strategies. Second, for each L > 0, do agents following ĉ^L simply chop the no-constraint optimal strategy ĉ at the bound L > 0? In other words, does "ĉ^L = ĉ ∧ L" hold? As we will see, this fails in general, suggesting a more structural change from ĉ^L to ĉ. Our first result shows that switching from ĉ^L to ĉ strictly increases expected utility at all levels of wealth (capital per capita) x > 0, whenever ĉ is truly unbounded.

Proposition 6.1. (i) If ĉ in (5.1) is bounded by some M > 0, then V_L = V for all L ≥ M. (ii) If ĉ in (5.1) is unbounded, then V_L(x) < V(x) for all x > 0 and L > 0.

Proof. (i) Since ĉ in (5.1) is optimal for V (Theorem 5.1) and bounded by M < ∞, the definitions of V and V_L in (2.7) and (4.1) directly imply V_L = V for L ≥ M. (ii) Fix L > 0. First, we claim that V(x̄) > V_L(x̄) for some x̄ > 0; suppose, on the contrary, that V = V_L on (0, ∞). By this and Theorem 4.1, V_L would then be a classical solution to (4.6), where the last assertion follows from V = V_L on (0, ∞). This, however, contradicts Proposition 4.1 (ii).

To concretely illustrate the above results, in the following we focus on the utility function

U(x) = x^{1−γ}/(1 − γ), 0 < γ < 1. (6.3)

Lemma 6.1. Assume (6.3). Then, there exist C₁, C₂ > 0 such that

C₁ x^{1−γ} ≤ V(x) ≤ C₂ (1 + x^{1−γ}), for all x > 0. (6.4)

In particular, we have (6.5).

Proof. Consider the constant consumption process c̄_t ≡ 1. For any x > 0, let X denote the unique strong solution to (2.8) with c = c̄. By the definition of V and (6.3), V(x) ≥ E[∫₀^∞ e^{−βt} X_t^{1−γ}/(1−γ) dt]. Recall from Section 3 that X_t = (Z_t)^{1/(1−α)}, with Z explicitly given in (3.3). It follows that X_t^{1−γ} ≥ (x^{1−α}/G_t)^{(1−γ)/(1−α)}, where G is defined as in (3.2), with c_t = c̄_t ≡ 1, and the second inequality follows from G_t > 0 for all t ≥ 0, 1 − α > 0, and (1 − γ)/(1 − α) > 0. Noting that the process G is independent of x, we conclude from the above inequality that the first part of (6.4) holds.
By Theorem 4.1 and (6.3), V satisfies

βV(x) = (x^α − μx) V′(x) + ½σ²x² V″(x) + (γ/(1−γ)) V′(x)^{−(1−γ)/γ}, x > 0. (6.6)

Recall from Theorem 4.1 that V′(x) > 0 and V″(x) ≤ 0 for all x > 0. Also, by the standing assumption μ > 0 in (2.9), x^α − μx < 0 for x > 0 large enough. Hence, (6.6) implies the existence of x₀ > 0 such that

βV(x) ≤ (γ/(1−γ)) V′(x)^{−(1−γ)/γ}, for all x ≥ x₀.

Note that V being nonnegative, concave, and nondecreasing entails V′(x) ≤ V(x)/x for all x > 0. The above inequality then yields βxV′(x) ≤ (γ/(1−γ)) V′(x)^{−(1−γ)/γ}, and hence V′(x) ≤ C x^{−γ} for all x ≥ x₀ and some C > 0. Integrating both sides from x₀ to x ≥ x₀ gives V(x) ≤ V(x₀) + (C/(1−γ)) x^{1−γ}. This shows that the second part of (6.4) is true.

Proposition 6.2. Assume (6.3). Then the feedback consumption function ĉ(x) := I(V′(x))/x satisfies

lim_{x↓0} ĉ(x) = 0 if γ < α, lim_{x↓0} ĉ(x) = ∞ if γ > α, lim_{x↓0} ĉ(x) ∈ (0, ∞) if γ = α. (6.7)

Proof. Under (6.3), I(p) = p^{−1/γ}, so that ĉ(x) = V′(x)^{−1/γ}/x = (x^α V′(x))^{−1/γ} x^{(α−γ)/γ}, where the second equality follows from (6.5). On the other hand, by Lemma 5.1 (ii), lim_{x↓0} x^α V′(x) = βV(0+) ∈ (0, ∞), which directly implies (6.7).

Proposition 6.2 admits interesting economic interpretation. An agent's consumption behavior is determined by two competing effects, captured by the parameters γ and α respectively. First, as in the literature of mathematical finance, γ in (6.3) measures the agent's risk aversion: the larger γ, the stronger the agent's intention to consume capital right away (to get immediate, riskless utility), as opposed to saving capital in the form of X, subject to risky, stochastic evolution. On the other hand, α in (2.8) measures how efficiently capital is used in an economy to produce new capital: the larger α, the stronger the upward potential of X, and thus the more willing the agent is to save capital (i.e. consume less). Now, as in (6.7), when capital per capita X dwindles near 0, (i) if risk aversion of the agent is not so strong relative to the efficiency of capital production (i.e. γ < α), the effect of α prevails, so that the agent (in the limit) saves all capital to fully exploit the upward potential of X; (ii) if risk aversion of the agent is very strong relative to the efficiency of capital production (i.e. γ > α), the effect of γ prevails, so that the agent consumes capital as fast as possible, to reduce risky positions in X; (iii) if risk aversion of the agent is comparable to the efficiency of capital production (i.e. γ = α), the effects of α and γ are balanced, leading to bounded, positive consumption of the agent.

The next two results focus on the specific case γ = α. The purpose is twofold. First, we demonstrate that the value function V and optimal consumption ĉ can be solved explicitly. Second, as we will see, ĉ is constant (and thus bounded), so that Corollary 6.1 is inconclusive on the failure of "ĉ^L = ĉ ∧ L". Explicit calculation shows that "ĉ^L = ĉ ∧ L" holds for some, but not all, L > 0.

Appendix A: Derivation of Proposition 4.1

In this appendix, we will establish Proposition 4.1 by generalizing arguments in [10] to infinite horizon. As mentioned in Section 4, [10] studies a similar problem to V_L in (4.1), yet under finite horizon and with the specific bound L = 1. As we will see, many arguments in [10] can be modified without much difficulty to infinite horizon. A distinctive exception is the derivation of the dynamic programming principle for V_L; see Lemma A.2 below for details.

Lemma A.1. Fix L > 0. (i) V_L is continuous on (0, ∞). (ii) V_L(x) ≤ x + ϕ₀ for all x > 0, where ϕ₀ > 0 can be chosen independently of L and x.

Proof. (ii) We will prove this result by modifying the argument in the first part of [10, Lemma 3.2]. Define ϕ(x) := x + ϕ₀, with ϕ₀ > 0 to be determined later. Fix L > 0. For any c ∈ C_L, x > 0, and T > 0, Itô's formula implies the inequality (A.1). Note that the term −E[∫₀ᵀ e^{−βs} σX_s dW_s] disappears from the above inequality because ∫₀^· e^{−βs} σX_s dW_s is a martingale, thanks to the second part of (3.4). By (2.6) and μ > 0, we have sup_{y≥0} {U(y) − y} < ∞ and A := sup_{x≥0} {x^α − μx} < ∞.
We can therefore take ϕ₀ > 0 large enough such that (A.2) holds. This, together with (A.1), yields the desired estimate. Hence, by using Fatou's lemma as T → ∞ and then taking supremum over c ∈ C_L, we get the desired result V_L(x) ≤ ϕ(x). Finally, note that our choice of ϕ₀ > 0 can be made independent of both L > 0 and x > 0. Indeed, the right-hand side of (A.2), which involves ϕ₀, does not depend on either L or x.

Next, we derive the dynamic programming principle for V_L, to show that it is a viscosity solution. As explained in detail under (A.5), arguments in [10] only lead us to a weak dynamic programming principle. Additional probabilistic arguments are invoked to upgrade this weak principle.

Lemma A.2. For any L > 0, V_L is a continuous viscosity solution to (4.3).

Proof. Fix L > 0. The continuity of V_L on (0, ∞) is a direct consequence of Lemma A.1 (i). In view of [2, Chapter V] and [12, Chapter 4], to prove the viscosity solution property, it suffices to show the following dynamic programming principle: for any x > 0,

V_L(x) = sup_{c∈C_L} E[ ∫₀^τ e^{−βs} U(c_s X_s^x) ds + e^{−βτ} V_L(X_τ^x) ], for all τ ∈ T, (A.4)

where T denotes the set of all stopping times. The "≤" relation is straightforward to derive. Indeed, given c ∈ C_L, we have, for any τ ∈ T, the corresponding three-line estimate. Here, the second line follows from [1, Proposition A.1], with c^{τ,ω} ∈ C_L defined by c_s^{τ,ω}(ω̃) := c_{τ(ω)+s}(ω ⊗_{τ(ω)} ω̃), s ≥ 0, for each fixed ω ∈ Ω; recall (2.1). The third line, on the other hand, follows from the definition of V_L. Now, taking supremum over c ∈ C_L gives the desired "≤" relation. The rest of the proof focuses on deriving the converse inequality in (A.4). The same arguments as in [10], however, only render the weaker statement (A.5) under infinite horizon. This is because with finite horizon T > 0, one can derive an estimate for E[sup_{0≤t≤T} X_t²], i.e. (2.7) in [10], which ensures that (3.14) in [10] holds simultaneously for all τ ∈ T_T. When the time horizon is infinite, one would need a corresponding estimate for E[sup_{0≤t<∞} X_t²], which is often unavailable. In our case, we only have the estimates (3.4) and (3.5), which ensure that (3.14) in [10] holds only for each deterministic time r ≥ 0. In the following, we will show that the weaker statement (A.5) in fact implies (A.4). First, we claim that for any c ∈ C_L and x > 0, the process

∫₀ᵗ e^{−βs} U(c_s X_s^x) ds + e^{−βt} V_L(X_t^x), t ≥ 0,

is a supermartingale. Given 0 ≤ r ≤ t, it holds for a.e. ω ∈ Ω that the required conditional-expectation inequality follows from (A.5) applied on the shifted path space; an optional sampling argument then upgrades (A.5) to (A.4).

Lemma A.3. Let 0 < a < b. If u₁, u₂ ∈ C²((a, b)) ∩ C([a, b]) are viscosity solutions to (4.3) on (a, b) with u₁(a) = u₂(a) and u₁(b) = u₂(b), then u₁ ≡ u₂.

Now, we are ready to prove Proposition 4.1.

Proof of Proposition 4.1. In view of Lemma A.1, it remains to show that V_L belongs to C²((0, ∞)) and solves (4.3). For any 0 < a < b, consider the boundary value problem (A.6) with v(a) = V_L(a) and v(b) = V_L(b). Thanks to the boundedness of c ∈ C_L, the same estimate for |Ũ_L(x₁, p₁) − Ũ_L(x₂, p₂)| as in [10, Theorem 4.2] still holds, which means that the condition (5.18) in [9] is true under the current setting. We then conclude from [9, Theorem 5.3.7] that there exists a classical solution v ∈ C²((a, b)) ∩ C([a, b]) to (A.6). Since v is also a viscosity solution, Lemmas A.2 and A.3 imply that V_L = v on [a, b], and thus V_L ∈ C²((a, b)). With 0 < a < b arbitrarily chosen, we have V_L ∈ C²((0, ∞)), and it solves (4.3) in the classical sense.
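To see the feedback rule (5.1)-(5.2) in action numerically, one can plug in any candidate function from the class U. The sketch below (Python; an Euler-Maruyama discretization, with the illustrative stand-in u(x) = x^{1−α}, a nonnegative, concave, strictly increasing function with u′(0+) = ∞, used purely as an assumption in place of the true value function, and with the isoelastic utility (6.3)) simulates the controlled state and the induced consumption:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper); here gamma = alpha
alpha, mu, sigma, gamma = 0.5, 0.05, 0.2, 0.5
x0, T, n_steps, n_paths = 1.0, 5.0, 5000, 2000
dt = T / n_steps

def I(p):
    # I = (U')^{-1} for the isoelastic utility U(y) = y^{1-gamma}/(1-gamma)
    return p ** (-1.0 / gamma)

def u_prime(x):
    # Stand-in marginal value: u(x) = x^{1-alpha}, so u'(x) = (1-alpha) x^{-alpha};
    # note u'(0+) = +infinity, mimicking Lemma 5.1 (ii).
    return (1.0 - alpha) * x ** (-alpha)

rng = np.random.default_rng(1)
X = np.full(n_paths, x0)
min_X = X.copy()
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    drift = X**alpha - mu * X - I(u_prime(X))            # (5.13)-type drift
    X = np.maximum(X + drift * dt - sigma * X * dW, 1e-12)  # floor: Euler overshoot only
    min_X = np.minimum(min_X, X)

c_hat = I(u_prime(X)) / X   # induced feedback consumption rate, as in (5.15)
print("min over paths/time of X:", min_X.min())
print("mean terminal consumption rate:", c_hat.mean())
```

With γ = α as chosen here, the induced rate ĉ^u is constant along each path (ĉ^u ≡ 4 for this stand-in u), echoing the bounded, positive consumption in case (iii) of the discussion above; the simulated paths also stay strictly positive, in line with Corollary 5.1.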
2018-11-11T23:28:02.000Z
2018-05-19T00:00:00.000
{ "year": 2018, "sha1": "b21294c4d0ffa9bcf69c6629891341198be04466", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.07532", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "96f47cf01e178f273c0b920b4f3fab9f79c9de0e", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Economics", "Computer Science" ] }
141443845
pes2o/s2orc
v3-fos-license
May Measurement Month 2017: Blood pressure screening results from Zambia—Sub-Saharan Africa

Abstract

Elevated blood pressure (BP) is a growing burden worldwide, leading to over 10 million deaths each year. May Measurement Month (MMM) is a global initiative aimed at raising awareness of high BP and to act as a temporary solution to the lack of screening programmes worldwide. Prevalence of hypertension is reported to reach 34% in some areas of Zambia but public awareness is reportedly low. A majority of individuals do not know that they have high BP and others do not take any medication. An opportunistic cross-sectional survey of volunteers aged ≥18 was carried out in May 2017. Blood pressure measurement, the definition of hypertension and statistical analysis followed the standard MMM protocol. Measurement sites were set up at shopping malls, markets, sports facilities, churches, higher institutions of learning, and urban clinics. A total of 9607 individuals were screened during MMM17. After multiple imputation, 2438 (25.9%) had hypertension. Of individuals not receiving anti-hypertensive medication, 1706 (19.6%) were hypertensive. Of individuals receiving anti-hypertensive medication, 438 (62.0%) had uncontrolled BP. The MMM for 2017 was the largest BP screening campaign undertaken in Zambia. The campaign identified 2438 individuals with hypertension who were given heart health advice and/or referred to the local clinic for treatment. These results suggest that a large BP screening campaign based on convenience sampling could be a useful and reasonably inexpensive tool to help raise awareness in the general population and thereby help address the burden of disease caused by hypertension.

Background

Hypertension has been recognized as a leading cause of morbidity and mortality in Zambia for a long time. However, the proportion of people aware of having raised blood pressure (BP) remains low.
Therefore, May Measurement Month (MMM),1 an initiative of the International Society of Hypertension (ISH), was very welcome. There has been a paucity of studies documenting the prevalence of hypertension in Zambia, and none of them have combined data collection with information sharing to raise awareness, modify lifestyle, and/or improve compliance with hypertension medication. The STEPS Survey 20172 is the first comprehensive study to produce country-level, household survey data on the prevalence of hypertension and related risk factors for non-communicable diseases in Zambia. In that survey of 4302 respondents, the national prevalence of hypertension was 19.1% (20.5% in men and 17.6% in women) and rose with age to 50.5% (38.6% in males and 59.4% in females) in the 60-69 years age group. Two-thirds (62.2%) of the men and one-third (34.9%) of the women had never been screened for hypertension, while 80.0% (91.0% of men and 77.3% of women) of the respondents with hypertension were not on medication. Only 6.7% (11.4% men and 2.5% women) of all the respondents had controlled BP. These findings were not surprising, as the country has reported a rise in complications of hypertension such as heart failure and strokes.3 The MMM survey, with 9607 respondents, is therefore the largest BP survey that has been conducted in Zambia and addressed important questions in advocacy for the prevention and control of hypertension and its devastating cardiovascular disease complications. Through MMM17, there was interest in increasing the level of public awareness of BP measurements as a tool for combatting the devastating effects of hypertension. Through this simple campaign, individuals would be made aware of the importance of BP measurements, risk factors for cardiovascular disease, the value of compliance with medication, and access to health information. The data obtained are important for use in advocacy with policy makers and in engagement with the public to increase self-efficacy.4

Methods

The study was designed following the protocol developed by the ISH, as detailed in Beaney et al.1 Volunteers were trained to measure BP in a cascade manner via video recordings on the MMM website. Screening sites were set up at shopping malls, institutions of learning, churches, recreational grounds, and clinics. Limited mass media campaigns were conducted through television and radio broadcasts. Every occasion was taken to urge community members to get their BP measured. Following a brief informational interaction, consenting adults (≥18 years) were requested to provide a limited amount of information. Due to internet access limitations, the data were entered on paper forms and later transferred to spreadsheets. Three BP readings from the non-dominant arm were made using OMRON MIT5 automated BP machines, separated by 1- to 3-min intervals. Health information was provided to all whose BP was raised, including hospital referral. Hypertension was defined as SBP ≥140 mmHg or DBP ≥90 mmHg, or being on medication for hypertension. The data manager at the Centre for Primary Care Research (CPCR) did preliminary cleaning of the data before transferring it to the MMM project team, where the data were analysed following the global approach.1 The major cost of this exercise was in BP machines. One hundred OMRON MIT5 automated BP machines were donated by OMRON through the ISH. The local logistics, including transportation of data collection tools and volunteers, were locally funded.
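As an illustration of this case definition, a minimal sketch of the classification step is given below (Python; the use of the mean of the second and third readings is an assumption based on the usual MMM convention described in Beaney et al.,1 and all function and field names are hypothetical):

```python
def classify_hypertension(readings_sbp, readings_dbp, on_medication):
    """MMM-style case definition (sketch).

    readings_sbp / readings_dbp: up to three readings (mmHg); assumes the
    mean of the 2nd and 3rd readings is used when they are available.
    """
    if on_medication:
        return True
    sbp = sum(readings_sbp[1:3]) / len(readings_sbp[1:3]) if len(readings_sbp) >= 2 else readings_sbp[0]
    dbp = sum(readings_dbp[1:3]) / len(readings_dbp[1:3]) if len(readings_dbp) >= 2 else readings_dbp[0]
    return sbp >= 140 or dbp >= 90

# Example: a treated participant counts as hypertensive even with controlled BP
print(classify_hypertension([150, 138, 136], [95, 88, 86], on_medication=False))  # False
print(classify_hypertension([150, 138, 136], [95, 88, 86], on_medication=True))   # True
```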
The initial public campaign for BP measurements was included in the MoH public health messages which were broadcast on the national media (television and radio). The National Coordinator appeared on the Zambia National Broadcasting Corporation and Revelation TV, and participated in several radio programmes where hypertension, stroke, and NCDs were being discussed. The screening went on for close to 24 days. The MMM17 Zambia study was coordinated from the Centre for Primary Care Research of the University of Zambia School of Medicine, with collaborative alliances with the Zambian Ministry of Health and the Zambia Heart and Stroke Foundation (ZAHESFO). An intensified screening programme was conducted on World Hypertension Day (17 May). The University of Zambia (UNZA) Biomedical Research Ethics Committee (UNZABREC) reviewed the survey protocol. Informed consent was obtained from each of the participants. All anonymized data forms were kept in secure cabinets at the Centre for Primary Care Research and were only viewed by approved study personnel.

Results

Data were collected from 33 sites in Lusaka Province (9), Eastern Province (5), Copperbelt Province (17), and Southern Province (2) by about 160 volunteers, mostly health professions students. A total of 9607 individuals, 5575 (58.0%) women and 4026 (41.9%) men, participated in this survey (Table 1). Of these, 99.4% were of black ethnicity. Mean age was 36 years. Of these, 732 (7.6%) were on anti-hypertensive medication, 195 (2.0%) were known diabetics, 138 (1.4%) reported having had a myocardial infarction, and 89 (0.9%) had had a stroke; 697 (7.3%) were current smokers and 943 (9.8%) reported taking alcohol once or more per week. The mean BMI was 25.0 (5.0) kg/m². Blood pressure measurements were taken in the left arm in 95.7% of cases, and most of the measurements were taken on Wednesdays. The mean age- and sex-standardized BP (excluding those on treatment for hypertension) was 125.6/79.7 mmHg and increased with age (see Supplementary material online, Figure S1). Of the 2438 (25.9%) participants who were hypertensive, 1706 (70.0%) were not currently receiving treatment. This represented 19.6% of all of those not receiving treatment. Of the 706 participants receiving treatment with an imputed BP reading, 438 (62.0%) still had elevated BP, meaning BP was controlled in only 38% of them. Of the 6437 participants who had all three BP readings, there was a consistent average decrease in BP between the 1st and 2nd readings of 2.9/1.9 mmHg and between the 2nd and 3rd readings of 1.8/1.0 mmHg. There was a statistically significant change in BP with BMI. While in healthy-weight individuals BP averaged 3.0/0.7 mmHg higher compared with the underweight group, the difference was greatly exaggerated in the overweight (8.2/4.3 mmHg) and obese (10.1/5.9 mmHg; see Supplementary material online, Figure S2). Higher BP was also noted in those on anti-hypertensive medication, those with diabetes or previous stroke, and amongst those reporting regular alcohol use or smoking tobacco (see Supplementary material online, Figure S3).

Discussion

Comparatively, MMM17 is the largest synchronized, standardized national screening campaign of any 5 and 34.8% for Lusaka. 6 MMM17 had a larger representation from urban centres of Lusaka and Copperbelt provinces. The large proportion of individuals who did not know that they had hypertension, or had uncontrolled hypertension on medication, signifies the purpose of such screening campaigns.
The inclusion of information sharing on healthy diets and lifestyle modification was novel and much appreciated by the participants. MMM17 demonstrated that a large BP screening campaign based on convenience sampling could be a useful and reasonably inexpensive tool to help raise awareness in the general population and thereby help address the burden of disease caused by hypertension. MMM has been a novel approach to raising awareness among policy makers, who celebrated its inclusion in the National Health Week programme and supported its data collection processes. There were also alliances built with other health professions and training institutions in the country that will be strengthened as MMM becomes an annual event.

Supplementary material

Supplementary material is available at European Heart Journal - Supplements online.
2019-05-03T13:08:57.798Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "3cf5ca9e5e653bcacdd7ef6096d5442cec60b46f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1093/eurheartj/suz077", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "3cf5ca9e5e653bcacdd7ef6096d5442cec60b46f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
120111830
pes2o/s2orc
v3-fos-license
Directive emission from a single subwavelength aperture in a periodically corrugated silver film

We present here an algorithm to evaluate the field in the near zone produced by a finite-size electromagnetic source in a periodic structure, referred to as the array scanning method (ASM)-FDTD method. Using a frequency-dependent silver permittivity model, obtained from measurements at optical and infrared frequencies, we implemented the corresponding modeling equations in the ASM-FDTD algorithm using the Z-transform technique. The developed algorithm is applied to the study of the enhanced radiation of a magnetic line source in a corrugated silver film, and the results indicate that the enhancement is due to the excitation of a leaky mode. We also show that other waves may be excited by the source depending on its location, and how this affects the radiation pattern.

INTRODUCTION

Highly directive beams at optical wavelengths arising from an illuminated single subwavelength slit or hole in a corrugated metal film such as silver have been observed recently, along with enhanced transmission properties [1]-[5]. It is known that this phenomenon is due to the excitation of a surface plasmon (SP) at the metal-air interface. Numerous investigations have analyzed the role of the geometry and frequency on the beam properties. Here we model the slit by using an equivalent magnetic line-source current. Two possible locations for the source are analyzed. We first show a new algorithm referred to as the array scanning method (ASM), whose preliminary results have been published in [6], which enables the evaluation of the field on an infinite periodic structure excited by a single source, such as a magnetic line source. The ASM algorithm requires the analysis of only a single periodic cell of the structure (even though the structure is excited by a single source). When combined with the FDTD method, the new algorithm permits savings of both memory and solution time compared with the standard FDTD method (which must include many periodic cells in the simulation domain in order to obtain an accurate solution). The FDTD method for periodic structures is here implemented with the "phase shift" boundary conditions in the time domain [7]-[11]. This is a new implementation of the FDTD method that permits the analysis of time-varying excitations in periodic structures illuminated by a plane wave with oblique incidence or by a periodic set of line sources phased to produce a beam at an oblique angle. We also use a dispersive permittivity model for silver based on measurements at optical and infrared frequencies [12], suitable for an FDTD implementation. The dispersive behavior is taken into account in the FDTD method by using the Z-transform technique [13]. In addition to the new algorithm for periodic structures made of dispersive materials, we also provide a simple and physical description of the enhanced directivity phenomena in terms of leaky modes. The slit excites two types of fields: a space wave (similar to the space wave radiated from a source in free space, but now with spatial harmonics present) and a SP mode. This is a general property of the field in periodic structures excited by a localized source, and more details are given in [14]. We show that the field at the silver-air interface is dominated by a leaky mode, which is a radiating mode guided by the structure. The leaky mode is the SP mode that becomes leaky (radiating) due to the presence of the periodic corrugations.
Therefore, this structure is similar in principle to a leaky-wave antenna that is scanned to broadside. For these structures it is the n = -1 Floquet spatial harmonic of the guided mode (the one in the visible region) that is radiating. Because of this radiation, the surface plasmon is a leaky mode that has a complex wavenumber. We also show, as a preliminary example, the effect of the space wave in our sample geometry. Although the leaky mode will dominate the radiation pattern near the peak of the beam for a highly directive beam, the space wave may have a considerable influence on the beam shape for less directive situations. To study these effects, it is necessary to perform comprehensive full-wave electromagnetic modeling. Conventional approaches for analyzing such structures via FDTD simulations often use several hundred periodic elements in the modeling. In this paper, the above-mentioned full-wave FDTD simulation combined with the array scanning method (ASM) is found to be much more efficient, since only the FDTD modeling of a single periodic cell is required.

ASM-FDTD Method for Near Field Calculation

In this paper, only the case of a transverse magnetic (TM_z) field is considered, corresponding to a structure that is invariant in the y direction, illuminated by an infinite magnetic line source. Consider, as an example, the structure in Fig. 1, where a is the period along x and r_0 is the location of the magnetic line source that produces a magnetic field H_y, which is henceforth referred to simply as H. The observation point is taken in an arbitrary periodic cell n, and the field of the single source is recovered from phased-array solutions as

Ĥ(r, t) = (a/2π) ∫_{−π/a}^{π/a} Ĥ^∞(r, t; k_x) dk_x, (1)

where the hat ^ tags time-domain (TD) quantities. The "∞" superscript denotes the field due to an infinite phased array of line sources, with phasing wavenumber k_x. In the TD the field quantities are complex [7]-[10], and a periodic boundary condition at the edges of the periodic cell is assumed, corresponding to the wavenumber k_x:

Ĥ^∞(x + a, z, t; k_x) = Ĥ^∞(x, z, t; k_x) e^{−jk_x a}. (2)

Equation (2) makes it obvious that in this rather unusual TD application the TD field is a complex function. To implement the periodic boundary condition (2) using the FDTD method, the phasing parameter k_x needs to be discretized within the fundamental Brillouin zone. In our implementation, an even number N_kx of spectral sampling points, uniformly distributed over the fundamental Brillouin zone [−π/a, π/a), is used. Spectral FDTD simulations are carried out at every spectral sampling point k_x using the boundary condition described by (2) [10]. Since complex values described in (2) are used in the FDTD implementation, both electric and magnetic field values are complex. After the computed field Ĥ^∞(x, z, t; k_x) is obtained at each sampling point, the integral in (1) is approximated by the rectangle rule over the N_kx samples. It should be noted that other quadrature schemes can be used (e.g., Gaussian quadrature) but they do not necessarily increase the numerical accuracy, since the rectangle rule of integration is usually very efficient for smooth periodic functions. It should also be pointed out that if a real-valued excitation is used, the final value of (1) is also real-valued. The detailed implementation of the spectral FDTD is as in [7]-[11].

Temporal Dispersion in the FDTD Method for the Silver Material

To implement the frequency-dependent dielectric behavior of silver at IR and optical wavelengths, we have used a model based on measurement data. This model is described by a Lorentz-Drude formula that approximates the complex permittivity of the silver film in the IR regime as [12]

ε_r(ω) = 1 − f₀ω_p² / [ω(ω − jΓ₀)] + Σ_{j=1}^{k} f_j ω_p² / [(ω_j² − ω²) + jωΓ_j]. (5)
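A compact way to tabulate this model is sketched below (Python; the oscillator strengths f_j, damping rates Γ_j, resonance frequencies ω_j, and plasma frequency ω_p are the film-specific constants tabulated in [12], so they are passed in as arguments here rather than hard-coded):

```python
import numpy as np

def lorentz_drude_eps(omega, omega_p, f0, gamma0, f, gamma, omega_res):
    """Complex relative permittivity of a metal, Lorentz-Drude form (5).

    omega: angular frequency (rad/s); omega_p: plasma frequency;
    f0, gamma0: intraband (Drude) strength and damping;
    f, gamma, omega_res: arrays of interband (Lorentz) oscillator parameters,
    all taken from the tabulation in [12] for the film of interest.
    """
    omega = np.asarray(omega, dtype=complex)
    eps = 1.0 - f0 * omega_p**2 / (omega * (omega - 1j * gamma0))  # Drude term
    for fj, gj, wj in zip(f, gamma, omega_res):                    # Lorentz terms
        eps += fj * omega_p**2 / ((wj**2 - omega**2) + 1j * omega * gj)
    return eps

# Lossless Drude limit used around 700 nm, cf. (10): drop the Lorentz sum, gamma0 -> 0
def drude_lossless_eps(omega, omega_p, f0):
    return 1.0 - f0 * omega_p**2 / np.asarray(omega, dtype=float) ** 2
```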
A detailed definition of these parameters and the values for these parameters for different films can be found in [12]. In Fig. 2 we show the frequency dependence of the real and imaginary parts of the permittivity produced by this model as well as by a lossless Drude model. The major steps to incorporate the Lorentz-Drude model in the FDTD method can be described as follows. 1. Express the frequency-domain constitutive relation between D and E using the Lorentz-Drude permittivity (5). 2. Transform (5) via the Z transform to obtain the corresponding discrete-time constitutive relation [13]. 3. Replace the frequency-domain relation by the resulting Z-domain update, from which E(z) can be obtained at each time step.

Fig. 2. Real and imaginary parts of the permittivity of silver given by the Lorentz-Drude model (5), as a function of wavelength. Also shown is the result for a lossless silver film, given by the Drude model. The model is from [12].

The model for a lossless silver film is obtained by simply modifying the parameters in (5). For the wavelength of our interest here (around 700 nm), the multi-term Lorentz model (the second term in (5)) can then be removed from (5), leading to the lossless Drude model (10). The permittivity described by (10) is shown by the purple curve in Fig. 2. It is clear that the model given in (10) is a good approximation to the real part of the lossy model in (5). Applying a similar principle as in (9), the field updating equations used in the FDTD method then follow in the lossless case.

FAR-FIELD RADIATION PATTERN FOR THE PLASMONIC CORRUGATED SILVER FILM

Here we show the radiation pattern produced by a magnetic line source that is placed at location A or B shown in Fig. 1, on the surface of the corrugated silver film. The geometry parameters are those in the caption of Fig. 1. Locations A and B are in the middle of a groove and on the top surface of the film, respectively. In the past the enhanced directivity effect has been demonstrated for a similar structure with a slit connecting the bottom and the top surfaces, with the bottom face illuminated by a beam. The magnetic line source here is intended to model the equivalent magnetic current that is placed on the slit aperture after the equivalence principle is applied. The far-field pattern due to the magnetic current line source excitation at point A or B can be obtained using the reciprocity theorem. The theorem states that the far-field pattern can be evaluated by sampling the magnetic field value at positions A or B when a plane wave is launched towards the structure from different angles of incidence. For this particular case, this theorem can be easily implemented with the use of the periodic boundary technique described above, and the simulation is performed within only one periodic cell. The far-field patterns are plotted in Fig. 3. The maximum always occurs at broadside (i.e., at θ = 0°) and an extremely narrow beam width is also observed. The "optimum" wavelength is the one that produces the maximum radiated field at broadside. Its value slightly depends on the source location, and the two optimum wavelengths related to the source location at A or B are shown in Table 1. This enhanced directivity, previously noted in various publications [1]-[5], is here explained by noticing that there is a leaky mode (a surface plasmon with a complex propagation wavenumber) traveling along the air-silver interface. The magnetic source, either at point A or B, excites both the leaky mode as well as a space-wave contribution. The strengths of these two very different wave fields depend on the source location, and therefore the total-field radiation patterns produced by an excitation at A or B are slightly different.
In Fig. 3, when the source is at point A, the null at θ = ±3° is due to the cancellation effect of these two types of contributions (the leaky mode and the space wave). We have experienced other cases where the space-wave field has a more pronounced effect. In general, the space wave has a stronger influence when the structure has more loss, so that the leaky mode is attenuated more rapidly.

INTERFACE FIELD FOR THE PLASMONIC CORRUGATED SILVER FILM

Since the far-field radiation patterns are closely related to the field at the interface between the air and the silver film, in this section we use the developed ASM-FDTD method to evaluate and analyze the interface field along the x direction (Fig. 1). The depth of the groove is still 40 nm, and a lossy silver model is used. In the simulation, the FDTD mesh size in both the x and z directions is set to be 10 nm. A sinusoidally oscillating magnetic current line source with an operating wavelength of λ_A = 698.9 nm resides at point A. The conventional FDTD method is also applied to validate the accuracy of the ASM-FDTD method. However, in the simulation performed by the conventional FDTD method, in order to emulate the infinite periodicity in the x direction, at least 600 periodic unit cells are modeled in the computation domain. The structure is therefore large enough to attenuate the leaky plasmon mode before it reaches the edges of the structure. The ASM-FDTD technique provides a more efficient way to simulate the infinite structure with a dramatically reduced computer memory requirement. The magnitude of the magnetic field H_y(x) at the interface, sampled at each unit cell, is calculated by both the FDTD and ASM-FDTD methods, and the results are plotted in Fig. 4. We see that the solutions obtained by the two methods show no noticeable difference up to 50 unit cells away from the source location. Though the silver film is lossy, a stronger exponential decay along the x direction is observed compared to that which would be expected due to loss alone, and this demonstrates that a leaky mode is excited. Since the radiated field (with a narrow beam) is produced by the radiation of the equivalent current on the air-film interface, the far-field pattern is well predicted by the leaky mode that is excited by the source.

Fig. 4. Interface field calculated along the interface between air and the silver film, when the structure is excited at point A. A lossy silver film is considered, with t = 40 nm.
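For readers implementing the method, the ASM synthesis step of (1)-(2) reduces to a phased-array sweep plus a rectangle-rule average. A schematic sketch is given below (Python; `run_periodic_fdtd` is a hypothetical stand-in for a single-cell spectral FDTD solve under the phase-shift boundary condition (2), so only the spectral loop and the synthesis sum reflect the algorithm described above):

```python
import numpy as np

def asm_synthesize(run_periodic_fdtd, a, n_kx):
    """Array scanning method: recover the single-source field from n_kx
    phased-array (periodic) solutions, via the rectangle rule applied to
    H(r,t) = (a / 2*pi) * Integral_{-pi/a}^{pi/a} H_inf(r,t;kx) dkx, cf. (1).
    """
    # Uniform samples of the phasing wavenumber over the Brillouin zone
    kx_samples = -np.pi / a + (2 * np.pi / a) * (np.arange(n_kx) + 0.5) / n_kx
    dkx = 2 * np.pi / (a * n_kx)

    H = None
    for kx in kx_samples:
        H_inf = run_periodic_fdtd(kx)   # complex TD field in one periodic cell
        H = H_inf.copy() if H is None else H + H_inf
    return (a / (2 * np.pi)) * dkx * H  # single-source field in that cell
```

Note that the returned field is simply the average of the spectral solutions; for a real-valued excitation, the imaginary parts of the spectral solutions cancel in this sum, consistent with the remark after (2).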
2019-04-18T13:06:51.234Z
2007-05-04T00:00:00.000
{ "year": 2007, "sha1": "6916da6d347a0c32da6fb22ed1b11ee76f295d20", "oa_license": "CCBY", "oa_url": "https://escholarship.org/content/qt7zp9048g/qt7zp9048g.pdf?t=ouo9rt", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "cc536c606eef3c89a0ab8ea65d2216a4bd8b02a7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Engineering" ] }
158238698
pes2o/s2orc
v3-fos-license
Access and Stratification in Nordic Higher Education. A review of cross-cutting research themes and issues

ABSTRACT

The purpose of this review is to investigate cross-cutting research themes and issues related to access and stratification in Nordic higher education (H.E.) (Denmark, Iceland, Finland, Norway and Sweden). We synthesise how recent changes in H.E. policy, practise, and appropriations have influenced educational opportunities along social class, gender and age. In this review we highlight results and conclusions shared by various recent Nordic studies. The emphasis is on the common trends and patterns related to social stratification in access.

undergirding access to H.E. On further reflection, the history and traditions of the national systems vary considerably and specific national policies continue to reshape contemporary H.E. differently in each country (Vabø, 2014). In this review the emphasis will be on the common, cross-cutting trends and patterns; the country-specific features will be described as scope and length permits. The first wave of expansion of Nordic H.E., from the late 1960s to the late 1980s, established that universities shall remain similar in substance and quality, with no formalised institutional hierarchies. However, since the 1990s the expansion has occurred by integrating varying systems of institutions as part of H.E., resulting in an array of institutions that vary in academic orientation, selectivity and prestige (Isopahkala-Bouret, 2015; Jóhannsdóttir & Jónasson, 2014; Thomsen et al., 2017). Most recently, H.E. institutions have been involved with merger processes, and the substantial growth in the number of students and degree programmes has slowed down. With such trends, which seem to follow global trends, we believe the egalitarian basis of the Nordic model has been challenged and that both access and opportunity do not mean the same things they once did. We intend to delve deeply into these issues in this review. On closer inspection, Nordic H.E. systems have transformed from cohesive and standardised models into more complex systems with a variety of institutions, types of programmes and disciplines. Despite the fact that the overall inequality in educational opportunities and access to H.E. has diminished in the Nordic countries (Börjesson, Ahola, Helland, & Thomsen, 2014) as more students receive H.E. degrees, there are widening social biases of access to established universities, especially in the most prestigious disciplinary programmes (Kivinen, Hedman, & Kaipainen, 2012; Nori, 2011; Thomsen, 2015). Students' choice (where would they like to have access; who applies and where) and recruitment patterns (where do students get access; who are selected and where) provide us with indicators of changing valorisations of H.E. programmes, fields and types of study, and institutions (Nori, 2011). In this review, by analysing social stratification in access, we are able to ascertain the central role H.E. plays within the changing social democratic welfare states, as well as the effects of policy reforms upon it. The purpose of this review is to investigate cross-cutting research themes and issues related to access and stratification in Nordic H.E.1 We synthesise how recent changes in H.E. policy, practise, and appropriations have influenced educational opportunities along class lines (often measured as academic/non-academic family background), gender and age.
Other social differences lending to opportunity gaps in access include ethnic background, race, nationality, religion, sexual orientation, prior education and educational achievement. Our intention is to incorporate their intersection with social class and gender to a central point of analysis. The result will shed light on the problematic nature of the institutional stratification for access to H.E. as well as the future opportunities of graduates. The specific research questions in this review are: (1) What are the changes regarding access to Nordic H.E. in recent years? (2) What are the similarities and differences in the changing access patterns within and among the Nordic countries? Defining access and opportunity gap in Nordic higher education For the purposes of this review, access to H.E. refers to the extent to which prospective students have a chance to enter to, participate in, and gain a degree in H.E. By educational law and policy in Nordic countries, the opportunity to enter H.E. is to be as equitable as possible so that every citizen may take full advantage of their educational potential. Increasing access requires making more student places available at universities and other H.E. institutions and creating a demand for these places. Improving access can also mean removing barriers that might prevent some students from participation in certain courses or academic programmes. In the Nordic countries, everyone who meets the requirements for admission has the opportunity to apply for and potentially gain access to H.E. In most cases, eligibility requires that one has to complete upper secondary schooling or specific courses. Admission is based on final school grades, final exam grades, or grades in matriculation examination. In addition, specific disciplines and study programmes apply entrance examination. On further observation, we can see that the growth of H.E. has resulted in an increasing variety of academic level of students, and the admission rates between different institutions and study programmes have considerable differences (Vabø, Naess, & Hovdhaugen, 2016): Some are highly selective and accept only students with highest records of exam results, and others take in all qualified applicants. Occasionally, the rapid expansion of student places has led to lower entry requirements (Vabø et al., 2016). The quality of education is to some extent related to the selectivity of academic degree programmes. For example Finnish students who want to enrol in a research university must first pass a competitive entrance examination in their chosen discipline. What makes the admission process especially tight is the limited number of seats (numerus clausus) in each disciplinary degree programme and therefore only a small portion of qualified applicants (often less than 10%) can gain admission (Isopahkala-Bouret, 2018). Also at Finnish universities of applied sciences, degree programmes have limited number of seats and the admission process can be very competitive. In Sweden, professional programmes at universities have also strictly regulated admission criteria and numerus clausus (Hedmo, 2014). Moreover, in all Nordic countries, there are an increasing number of specific Master's and doctoral programmes, including international Master's programmes, in which students are selected via programme-specific application and admission processes. 
The term "opportunity gap" refers here to the ways in which social differences contribute to or perpetuate lower educational aspirations, achievement, and attainment for certain groups of students. Factors such as socioeconomic status, gender, age, ethnicity, race, religion, nationality, sexual orientation, disability, past academic performance, special-education status, and family income or educational-attainment levelsin addition to factors such as relative community affluence, geographical location, or upper secondary school facilitiesmay contribute to limited access to educational programmes more for some students than others. Generally speaking, opportunity gap refers to unequal distribution of resources, such as educational access. It is noteworthy that the growth of student places and programmes has not been evenly distributed within Nordic H.E. systems. In Finland, Sweden and Norway, with populations of 5.5 million, 10 million, and 5.25 million respectively, the expansion in access has been directed to the new universities of applied sciences and university colleges, while the traditional university sector has not grown significantly Vabø & Hovdhaugen, 2014). On the contrary, in Denmark, with a population of 5.7 million, universities have increased their share of new entrants, while university colleges have decreasing participation rates . In Iceland, the population is a mere 325,700 people and the biggest growth of H.E. has been accomplished via integration of non-university institutions into a unified university sector (Jóhannsdóttir & Jónasson, 2014). As a perhaps unintended result, access to the most prestigious programmes is more difficult than any other time in recent history, resulting in increasing opportunity gaps. Emerging issues with social inequality The expansion of H.E. in the Nordic countries has led to a persistent inflationary trend toward the need for ever-increasing amounts of degreed education to gain a certain societal status (Aro, 2014;Börjesson et al., 2014;Isopahkala-Bouret, 2015). The decrease in the relative value of H.E. has been accompanied by an increasing importance of the elite H.E. and status dispersions of degrees of all kinds (Isopahkala-Bouret, 2015). As a consequence, although not commonly acknowledged, the relatively egalitarian Nordic H.E. systems are also characterised by conspicuous social differences in access to different H.E. institutions and fields of study Beach & Puaca, 2014;Nori, 2011;Thomsen, Munk, Eiberg, & Hansen, 2013). The system expansion has widened access mainly through a channelling of first-generation students into less prestigious programmes and institutions (Thomsen, 2015). Therefore, it is increasingly important to address fairness of access and opportunities not only in terms of the Nordic equality ideal but also in terms of who gains access to the best institutions and programmes. In the intersectionality of class, gender and other social differences in Nordic H.E. it is critical to understand how some programmes and institutions mindfully target specific groups for recruitment. The result is that programmes are constituted differently according to social distinction. The observation is apparent in the case of nontraditional students, defined here as first-generation students, international students, mature students, and/or students with a low socio-economic background. 
Despite the easing of financial constraints, in comparison with many other countries, nontraditional students experience the pros and cons of access to H.E. in singularly unique ways, sometimes including marginalisation. Class-related disparities Many of the reproductive patterns of academic, high-/middle-class families found in the international literature are also found in Nordic contexts. Students with the highest amount of cultural capital, measured in the form of the highest level of the education of parents, tend to concentrate in established research-intensive universities, the dominant H.E. institutions. In general, young applicants from affluent urban backgrounds are more likely to end up in prestigious H.E. institutions in metropolitan areas, and into the more elite programmes, which leads to a socio-economically skewed student body (Ahola, 2015;Börjesson, Broady, Le Roux, Lidegran, & Palme, 2016;Kivinen et al., 2012;Munk & Thomsen, 2018;Nori, 2011;Thomsen, 2015). Cultural capital is gained in many ways from many places, but a prospective student's home life, the availability of intellectual resources and discourse, and parents' educational and career patterns deeply influence students' final grades (matriculation exam), entry examination test scores, motivation statements and interview success, as well as availability of support and resources needed in access to H.E. In addition, highly structured social disparities can be assumed on the basis of the status of the upper secondary schools (gymnasia) from which the students come to universities and other H.E. institutions (Haltia, Jauhiainen, & Isopahkala-Bouret, in press). Educational choices reflect the content and form of social capital that students have . Students from privileged backgrounds predominantly choose programmes with high entrance qualification requirements leading to more lucrative career pathways. The establishment of new H.E. institutions and study programmes, such as the universities of applied sciences in Finland, while serving many more students, can also partially explain the increasing unequal H.E. opportunities for students with a nonacademic social background (e.g. Kivinen et al., 2012). The establishment of an inclusive non-university H.E. system goes hand in hand with enhancing the exclusivity of the traditional university sector (Isopahkala-Bouret, 2018). New types of vocationallyorientated degree programmes offer socially diminished credentials and narrower returns in the labour market, and in many cases, forecloses possibilities of graduate study and membership in many professions (Isopahkala-Bouret, 2015). As a consequence, they are not appealing alternatives for privileged students, as most students in these programmes come from non-academic family backgrounds. Gender-related disparities Historically, male students have been over-represented in H.E., but today the majority of H.E. students are female. In Nordic countries about 60% of students are female (OECD, 2016). There are persistent gender differences in terms of access to university. The gates of traditional universities are more open to men. For example, in 2013 in Finland, 24% of female applicants and 27% of male applicants were accepted into elite programmes (Vipunen, 2016). At first glance, the percentage difference is not compelling. On further reflection, admission rates differ by study field, in some cases considerably (Börjesson et al., 2014;Nori & Mäkinen-Streng, 2017). Some fields are simply dominated by men (i.e. 
engineering and natural science) and the male acceptance rates are typically high, while in other fields (such as education, psychology and art) women outnumber men by a wide margin. Age-related disparities In the Nordic countries, students graduating from H.E. are on average a little older than students in many other countries (OECD, 2016). A person has the chance to ramp up their education through study programmes, preliminary programmes, and even work situations before applying. However, educational opportunities are affected by age as well. When a student applies through the main admission route, at the traditional age, he or she is more likely to be admitted. The possibility of admittance to Finnish universities will become smaller by about 2% for each year the process is delayed (Nori, 2011). This means that a 30year-old applicant's possibility of access is approximately one-fifth lower than that of a 20year-old applicant (Nori, 2011). Older students have in some cases taken an alternative, often longer, route of access, resulting in age and educational status differences (Haltia et al., in press;Thomsen et al., 2013). In Finland, for example, older students can use the socalled Open University gateway. After completing approximately one-third of a degree programme at the Open University with high grade point averages, a student can be admitted to a regular degree programme at the university and the credits obtained can be used directly as part of a degree. However, this route has remained narrow and disputed among the universities (Haltia, 2012). In Sweden, there has also been an access system which gave non-traditional students an alternative route to H.E., but nowadays the share of students using this second chance path is relatively low (Orr & Hovdhaugen, 2014). Similarly, Norway introduced an alternative access route based on prospective students' competence gained outside the formal education system and students with a non-academic background, as well as mature students, have benefitted most from the system (Orr & Hovdhaugen, 2014). Immigration background-, ethnicity-and race-related disparities Increasingly students with an immigration background have gained access to programmes of all kinds, providing diversity, but also raising challenges. The Nordic data are scarce regarding access and opportunity gaps of students from ethnic or racial minorities. At this time, while national and EU immigration policies are generally controversial, policymakers and practitioners are conflicted regarding immigrants' matriculation into H.E. and the extent to which precious resources should be spent on inclusive actions. Internationalising of the H.E. curriculum, including international programmes and English language teaching, is part of the debate, although internationalising policies are usually aloof to issues regarding racial inequality, and besides, racism in academia does not only concern students with a foreign background. The size of the student population with an immigration background, the percentage, and the trends regarding numbers of inquiries are in flux, varying from country to country, institution and field of study. To complicate matters, a fuller picture reveals the juxtaposition of factors involved in the demographic profiles of programmes and H.E. institutions, including self-selection, changes in the professions themselves, and global trends. Also noteworthy is that when former non-university institutions were gradually integrated into the H.E. 
sector, they brought programmes that were not equally proportioned in class, gender or ethnicity/race. As previously noted, various programmes, such as nursing, teaching, library science and social services, expect a much higher percentage of women. In which H.E. institutions and programmes are students with immigrant backgrounds most welcome? Social selection in access to doctoral studies In Finland and Denmark, universities are the only institutions to supply doctoral degrees. In Norway and Sweden, the degree system is integrated in a way that both universities and university colleges can provide similar degrees up to the doctoral level. In Iceland, doctoral programmes have only been established in the last 15 years; until recently, P.h.D. degrees were undertaken abroad. In all Nordic countries, there has been growth in the overall number of P.h.D. programmes and graduation. Previous studies have assumed that the social origins of the doctoral students do not greatly differ from the social origins of the Master's level students (Triventi, 2013). Accordingly, doctoral students with lower-class backgrounds have already adapted to the academic community and the lifestyle of the fellow students during their earlier studies, and the influence of family background will be ameliorated. On the other hand, there are studies suggesting that family background plays a significant role after the Master's degree level (Mastekaasa, 2006). However, when comparing Master's and doctoral students by the disciplines in the Finnish context, the picture is foggy (Jauhiainen & Nori, 2016). The differences among the disciplines are not uniform; those disciplines that are the most elite at the Master's level (e.g. medicine and law) are not necessarily so at the doctoral level. Student access, and academic capital, was clearly enhanced when parents possessed doctoral degrees or research careers (Mastekaasa, 2006). Results indicate that the selective effect of social origins does not vanish when pursuing the highest degrees in academia. The elite sub-field of Finnish doctoral education is overly represented by "educational inheritors", i.e. students coming from high-capital homes pursuing a degree at younger age than most, especially in the most highly respected and potentially lucrative disciplines in metropolitan areas (Jauhiainen & Nori, 2017). Along the same lines, the younger, discipline-oriented male P.h.D. students are more often invited to join research groups and to enter a research career over a middle-aged female with a professional background (Angervall, Beach, & Gustafsson, 2015). As indicated, access to doctoral programmes follows the segregated lines of maleand female-dominated disciplines and fields. Overall, according to Nordic statistics, in 2014 the share of female doctoral students in five countries was 51% as a whole, although with substantial variation by country and even more by discipline or field of study (NIFU, 2016). For instance, in Iceland the proportion of female P.h.D. students was 61% while in Sweden it was just 47.5% (NIFU, 2016). The fact is, Sweden has the largest female undergraduate population of the Nordic countries, which undergirds the momentum of the gender gap in the doctoral level. There is not much research regarding specific patterns or mechanisms regarding how gender affects access to doctoral studies. We believe the answers can be found in gender skews of specific programmes and the professions they represent. 
In the Norwegian context, men were slightly more likely than women to access doctoral studies but specific factors to explain the gap, i.e. motherhood and admission bias, are not forthcoming (Mastekaasa, 2005). It is hypothesised that gender gaps result from patterns reproduced in the recruitment and application processes in varying disciplines and programmes (Haake, 2011). The delineated picture of doctoral student population reflects the policy of equal access and opportunity. Accordingly, despite the strict selection processes created during recent decades, universities have recruited P.h.D. students from a wide assortment of backgrounds. At issue are two trends worth watching. One is whether there is a risk that the aims, values and practices dominating doctoral education will standardise it in deleterious ways. Is it possible to achieve standardisation, such as completion time limits, and preserve student versatility? The other is, as we are discussing admission rates, what about graduation rates and other factors, such as variance of duration toward doctoral completion between men and women, as well as variances of age, nationality and ethnicity? The graduation rates reveal a great deal of differentiation. The role of funding and private capital in access to higher education The discussion on access to H.E. in the Nordic context has been dominated by the analysis of cultural capital. Since the major part of the Nordic H.E. industry is publicly funded with low tuition demands, it is a fair assertion that wealth is not as relevant an indicator of access to H.E. as it is in the countries with a large private H.E. contingent. In the U.S., all universities charge tuition, ranging from $20,000 to $80,000 per year, though scholarships and loans are available to high percentages of students. Nordic H.E. systems have been perceived as moderating differences in educational and social background as education is tuition-free (or low fee) for the students and direct student subsidies are provided (Haltia et al., in press;Thomsen et al., 2013). However, the agenda on broad and egalitarian access is in tension with attempts to cut public costs of H.E. All Nordic H.E. systems are experiencing pressure to diminish overall costs. In Finland there have been drastic cuts in public financing during the last two years, for example. The working conditions of the academic staff and conditions for high quality teaching have been affected, and there are more obligations to raise external grant money. Gender, along with race, social class and ethnicity, has been shown to be in determining who is most disadvantaged by these developments. Diminishing public funding increase competitiveness between institutions and programmes (Rinne, Jauhiainen, & Kankaanpää, 2014). There has been clear impact on institutional profiling, and programme selection (Beach, 2013). More to the point, depletion of overall resources has not occurred evenly across disciplines and programmes. The employment conditions and salaries of institutions and faculties in a "non-elite" sector may have worsened more than in elite programmes, resulting in more obvious changes, such as larger class sizes, more mass lecturing, group advisement, peer advisement and less personal supervision (cf., Angervall et al., 2015;Jauhiainen, Jauhiainen, Laiho, & Lehto, 2015). 
For example, in Sweden, funding to the humanities and the social sciences and educational sciences has already shrunk and together these areas now gross less than half the research funds per annum from the State compared to medicine and less than 40% of the distribution of funds to science and technology (Beach, 2013). These developments have considerable gender influence as female staff and students are often over-represented in fields related to public professions such as teaching, nursing and social services with less commercial funding; furthermore, women also seem to be discriminated against within faculties (Angervall et al., 2015). In the Nordic H.E. systems, the volume and role of private H.E., and even its definition, differ from country to country. For example, the private H.E. sector consists of 15% of the student population in Norway (Vabø & Hovdhaugen, 2014). In Iceland, four universities require substantial tuition fees, and are often classified as private institutions, but nevertheless receive full contributions for their full time equivalent (FTE) teaching contributions for their teaching and some for research from the state (Jóhannsdóttir & Jónasson, 2011). In the Finnish H.E. system, there are no private universities; however, for-profit universities, governed from abroad, have provided degree programmes in Finland since the second decade of this century (Kosunen & Haltia, in press). However, even though private universities are rare, a non-regulated, private course market has emerged alongside the public education system. In Finland, for example private enterprises have started to offer training and tutoring for preparing prospective students to the entrance examinations of public universities (Kosunen & Haltia, in press;Kosunen, Haltia, & Jokila, 2015). The economic threshold to enter some of these private preparatory courses constitutes unique obstacle in access to H.E. The preparatory courses require personal economic investment, i.e. course fees up to €6000 in some of the courses in the most exclusive disciplines. In such cases, economic capital seems to be pivotal in access to H.E. A strong case can be built for monitoring the trends in for-profit tutorial systems in the Nordic countries. Such systems effectively provide advantages to wealthy students. Recommendations for further research on access and opportunity in Nordic higher education We hope this review yields important insights for policymakers in H.E. regarding resource allocation and equitable policy formulation in light of what we know regarding access and opportunity gaps. Here is a list of recommendations for further study: -Which social groups are under-represented in access to Nordic H.E? How have the access patterns evolved in relation to contemporary policy reforms? -Will students from under-represented groups, such as students with migration background or older students, gain access to all types of institutions and programmes, or will they be relegated to specific newer systems of H.E? -Specifically, what mechanisms influence the social strata in terms of access? -How does allocation of (diminishing) resources influence access to H.E. in the Nordic countries? What are the classed and gendered consequences resulting from competitive and commercial funding? How do the emerging processes of privatisation of Nordic H.E. influence access and opportunity? -What are the variances regarding the selection processes in doctoral education? 
The realisation of equal opportunity in education can be best promoted by eliminating marginalisation practises through egalitarianism and excellence at all stages of education. One concrete method is the prevention of regional inequality of educational institutions. Another is to provide welcoming structures and programmes of inclusion for non-traditional students, for example students with an immigrant background who might otherwise remain on the outside of the H.E. system. Diminishing resources and budget cuts must not include abandoning our resolve to promote the finest ideals of widening access and educational opportunity in Nordic H.E. This review is produced by Cross-Cutting Themes and Issues in Nordic Higher Education Research with special focus on justice and equality in/through education collaboration initiative (2016)(2017)
2019-05-20T13:04:18.614Z
2018-01-02T00:00:00.000
{ "year": 2018, "sha1": "8a73e70b17907e1de821ff3727005b9adda50e2f", "oa_license": "CCBYNC", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20004508.2018.1429769?needAccess=true", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "d4ed1cef2ebf84b784d176e895124bdfa345a2bd", "s2fieldsofstudy": [ "Education", "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
253157461
pes2o/s2orc
v3-fos-license
Tangent Bundle Filters and Neural Networks: from Manifolds to Cellular Sheaves and Back In this work we introduce a convolution operation over the tangent bundle of Riemannian manifolds exploiting the Connection Laplacian operator. We use the convolution to define tangent bundle filters and tangent bundle neural networks (TNNs), novel continuous architectures operating on tangent bundle signals, i.e. vector fields over manifolds. We discretize TNNs both in space and time domains, showing that their discrete counterpart is a principled variant of the recently introduced Sheaf Neural Networks. We formally prove that this discrete architecture converges to the underlying continuous TNN. We numerically evaluate the effectiveness of the proposed architecture on a denoising task of a tangent vector field over the unit 2-sphere. INTRODUCTION The success of deep learning is mostly the success of Convolutional Neural Networks (CNNs) [1]. CNNs have achieved impressive performance in a wide range of applications showing good generalization ability. Based on shift operators in the space domain, one (but not the only one) key attribute is that the convolutional filters satisfy the property of shift equivariance. Nowadays, data defined on irregular (non-Euclidean) domains are pervasive, with applications ranging from detection and recommendation in social networks processing [2], to resource allocations over wireless networks [3], or point clouds for shape segmentation [4], just to name a few. For this reason, the notions of shifts in CNNs have been adapted to convolutional architectures on graphs (GNNs) [5,6] as well as a plethora of other structures, e.g. simplicial complexes [7][8][9][10], cell complexes [11,12], and manifolds [13]. In [14], a framework for algebraic neural networks has been proposed exploiting commutative algebras. In this work we focus on tangent bundles, a formal tool for describing and processing vector fields on manifolds, which are key elements in tasks such as robot navigation or flocking modeling. Related Works. The renowned manifold assumption states that high dimensional data examples are sampled from a low-dimensional Riemannian manifold. This assumption is the fundamental block of manifold learning, a class of methods for non-linear dimensionality reduction. Some of these methods approximate manifolds with k-NN or geometric graphs via sampling points, i.e., for a fine enough sampling resolution, the graph Laplacian of the approximating graph "converges" to the Laplace-Beltrami operator of the manifold [15]. These techniques rely on the eigenvalues and eigenvectors of the graph Laplacian [16], and they give rise to a novel perspective on manifold learning. In particular, the above approximation leads to important transferability results of graph neural networks (GNNs) [17,18], as well as to the introduction of Graphon and Manifold Neural Networks, continuous architectures shown to be limit objects of GNNs [19,20]. However, most of the previous works focus on scalar signals, e.g. one or more scalar values attached to each node of graphs or point of manifolds; recent developments [21] show that processing vector data defined on tangent bundles of manifolds or discrete vector bundles [22,23] comes with a series of benefits. 
Moreover, the work in [24] proves that it is possible to approximate both manifolds and their tangent bundles with certain cellular sheaves obtained from a point cloud via k-NN and Local PCA, such that, for a fine enough sampling resolution, the Sheaf Laplacian of the approximating sheaf "converges" to the Connection Laplacian operator. Finally, the work in [25] generalizes the result of [24] by proving the spectral convergence of a large class of Laplacian operators via the Principal Bundle set up. Contributions. In this work we define a convolution operation over the tangent bundles of Riemannian manifolds with the Connection Laplacian operator. Our definition is consistent, i.e. it reduces to manifold convolution [19] in the one-dimensional bundle case, and to the standard convolution if the manifold is the real line. We introduce tangent bundle convolutional filters to process tangent bundle signals (i.e. vector fields over manifolds), we define a frequency representation for them and, by cascading layers consisting of tangent bundle filters banks and nonlinearities, we introduce Tangent Bundle Neural Networks (TNNs). We then discretize the TNNs in the space domain by sampling points on the manifold and building a cellular sheaf [26] representing a legit approximation of both the manifold and its tangent bundle [24]. We formally prove that the discretized architecture over the cellular sheaf converges to the underlying TNN as the number of sampled points increases. Moreover, we further discretize the architecture in the time domain by sampling the filter impulse function in discrete and finite time steps, showing that space-time discretized TNNs are a principled variant of the very recently introduced Sheaf Neural Networks [23,27,28], discrete architectures operating on cellular sheaves and generalizing graph neural networks. Finally, we numerically evaluate the performance of TNNs on a denoising task of a tangent vector field of the unit 2-sphere. Paper Outline. The paper is organized as follows. We start with some preliminary concepts in Section 2. We define the tangent bundle convolution and filters in Section 3, and Tangent Bundle Neural Networks (TNNs) in Section 4. In Section 5, we discretize TNNs in space and time domains, showing that discretized TNNs are Sheaf Neural Networks and proving the convergence result. Numerical results are in Section 6 and conclusions are in Section 7. PRELIMINARY DEFINITIONS Manifolds and Tangent Bundles. We consider a compact and smooth d−dimensional manifold M isometrically embedded in R p . Each point x ∈ M is endowed with a d−dimensional tangent (vector) space TxM ∼ = R d , v ∈ TxM is said to be a tangent vector at x and can be seen as the velocity vector of a curve over M passing through the point x (formal definitions can be found in [29]). The disjoint union of the tangent spaces is called the tangent bundle T M = x∈M TxM. The embedding induces a Riemann structure on M; in particular, it equips each tangent space TxM with an inner product, called Riemann metric, given, for each v,w ∈ TxM, by v, w TxM = iv • iw, where iv ∈ TxR p is the embedding of v ∈ TxM in TxR p ⊂ R p (the d-dimensional subspace of R p which is the embedding of TxM in R p ), with i : T M → TxR p being an injective linear mapping referred to as differential [29], and • is the dot product. The Riemann metric induces also a probability measure µ over the manifold. Tangent Bundle Signals. 
A tangent bundle signal is a vector field over the manifold, thus a mapping F : M → T M that associates to each point of the manifold a vector in the corresponding tangent space. An inner product for tangent bundle signals F and G is and the induced norm is ||F|| 2 T M = F, F T M. We denote with L 2 (T M) the Hilbert Space of finite energy (w.r.t. || · ||T M) tangent bundle signals. In the following we denote ·, · T M with ·, · when there is no risk of confusion. Connection Laplacian. The Connection Laplacian is a (secondorder) operator ∆ : L 2 (T M) → L 2 (T M), given by the trace of the second covariant derivative defined (for this work) via the Levi-Cita connection [24]. The connection Laplacian ∆ has some desirable properties: it is negative semidefinite, self-adjoint and elliptic. The Connection Laplacian characterizes the heat diffusion equation where U : M × R + 0 → T M and U(·, t) ∈ L 2 (T M) ∀t ∈ R + 0 (see [21] for a simple interpretation of (3)). With initial condition set as U(x, 0) = F(x), the solution of (3) is given by which provides a way to construct tangent bundle convolution, as explained in the following section. The Connection Laplacian ∆ has a negative spectrum {−λi, φ i } ∞ i=1 with eigenvalues λi and corresponding eigenvector fields φ i satisfying with 0 < λ1 ≤ λ2 ≤ . . . . The λis and the φ i s can be interpreted as the canonical frequencies and oscillation modes of T M. TANGENT BUNDLE CONVOLUTIONAL FITLERS In this section we define the tangent bundle convolution of a filter impulse response h and a tangent bundle signal F. Definition 1. (Tangent Bundle Filter) Let h : R + → R and let F ∈ L 2 (T M) be a tangent bundle signal. The manifold filter with impulse response h, denoted with h, is given by where U(x, t) is the solution of the heat equation in (3) with U(x, 0) = F(x). Injecting (4) in (6), we obtain The convolution in Definition 1 is consistent, i.e. it generalizes the manifold convolution [19] and the standard convolution in Euclidean domains (see Appendix A.4). The frequency representationF of F can be obtained by projecting F onto the φ i s basis Given a tangent bundle signal F and a tangent bundle filter h(∆) as in Definition 1, the frequency representation of the filtered signal G = h(∆)F is given by This leads to Ĝ i =ĥ(λi) F i , meaning that the tangent bundle filter is point-wise in the frequency domain. Therefore, we can write the frequency representation of the tangent bundle filter as We note that the frequency response of the tangent bundle filter generalizes the frequency response of a standard time filter as well as a graph filter [30]. TANGENT BUNDLE NEURAL NETWORKS We define a layer of a Tangent Bundle Neural Network (TNN) as a bank of tangent bundle filters followed by a pointwise non-linearity. In this setting, pointwise informally means "pointwise in the ambient space". We introduce the notion of differential-preserving nonlinearity to formalize this concept. Definition 4. (Differential-preserving Non-Linearity) Denote with Ux ⊂ TxR p the image of the injective differential i in x. A mapping σ : , and point-wise non linearity σ(·) is written as A TNN of depth L with input signals {F q } F 0 q=1 is built as the stack of L layers defined in (12), where F q 0 = F q . To globally represent the TNN, we collect all the filter impulse responses in a function set H = ĥ u,q l l,u,q and we describe the TNN u−th output as a mapping F u L = Ψu H, ∆, {F q } F 0 q=1 to enhance that it is parameterized by filters H and Connection Laplacian ∆. 
DISCRETIZATION IN SPACE AND TIME Tangent Bundle Filters and Tangent Bundle Neural Networks operate on tangent bundle signals, thus they are continuous architectures that cannot be directly implemented in practice. Here we provide a principled way of discretizing them both in time and space domains. Discretization in the Space Domain. The manifold M, the tangent bundle T M, and the Connection Laplacian ∆ can be approximated starting from a set of sampled points (point-cloud). Knowing the coordinates of the sampled points, it is indeed possible to build a specific (orthogonal) cellular sheaf over an undirected geometric graph (see Appendix A.3) such that its Sheaf Laplacian converges to the manifold Connection Laplacian as the number of sampled points (nodes) increases [25]. We assume that a set of n points X = {x1, . . . , xn} ⊂ R p are sampled i.i.d. from measure µ over M. We build a cellular sheaf T Mn following the Vector Diffusion Maps procedure whose details are listed in [24]. In particular, we build a geometric graph Mn, with weights for nodes i and j set as where controls the chosen Gaussian Kernel. We then assign to each node i an orthogonal transformation Oi ∈ R p×d computed via a local PCA procedure, that is an approximation of a basis of the tangent space Tx i M, whered is an estimate of d obtained from the same procedure. At this point, an approximation of the transport operator [29] from Tx i M to Tx j M is also needed. In the discrete domain, this translates in associating a matrix to each edge of the above graph (the restriction maps of the sheaf). For small enough, Tx i M and Tx j M are close, meaning that the column spaces of Oi and Oj are similiar. If they were coinciding, then the matrices Oi and Oj would have been the same up to an orthogonal transformation Oi,j satisfying Oi,j = Oi T Oj. However, the subspaces are not coinciding due to curvature. For this season, the transport operator approximation Oi,j is defined as the closest orthogonal matrix [24] to Oi,j, and it is computed as Oi,j = Mi,jV T i,j ∈ Rd ×d , where Mi,j and Vi,j are the SVD of Oi,j = Mi,jΣi,jV T i,j . We now build a block matrix S ∈ R nd×nd and a diagonal block matrix D ∈ R nd×nd withd ×d blocks defined as where Di = deg(i)Id, deg(i) = j wi,j is the degree of node i, and ndeg(i) = j wi,j/(deg(i)deg(j)). Finally, we define the (normalized) Sheaf Laplacian as the following matrix which is the approximated Connection Laplacian of the discretized manifold. A sheaf T Mn with this (orthogonal) structure is also said to be a discrete O d −bundle and represents a discretized version of T M. We introduce a linear sampling operator Ω X n : L 2 (T M) → L 2 (T Mn) to discretize a tangent bundle signal F as a sheaf signal fn ∈ R nd (a 0-cochain of the sheaf) such that We are now in the condition of plugging the discretized operator and signal in the definition of tangent bundle filter in (7), obtaining Following the same considerations of Section 4, we can define a discretized space tangent bundle neural network (D-TNN) as the stack of L layers of the form where (with a slight abuse of notation) σ has the same point-wise law of σ in Definition 4. As in the continuous case, we describe the u − th output of a D-TNN as a mapping Ψu H, ∆n, {x q n } F 0 q=1 to enhance that it is parameterized by filters H and the Sheaf Laplacian ∆n. As the number of sampling points goes to infinity, the Sheaf Laplacian ∆n converges to the Connection Laplacian ∆ and the sheaf signal xn converges to the tangent bundle signal F. 
Combining these results, we prove in the next proposition that the output of a D-TNN converges to the output of the corresponding TNN as the sample size increases. Theorem 1. Let X = {x1, . . . , xn} ⊂ R p be a set of n i.i.d. sampled points from measure µ over M ⊂ R p and F a bandlimited tangent bundle signal. Let T Mn be a cellular sheaf built from X as explained above, with = n −2/(d+4) . Let Ψu H, ·, · be the u − th output of a neural network with L layers parameterized by the operator ∆ of T M or by the discrete operator ∆n of T Mn. If: • the frequency response of filters in H are non-amplifying Lipschitz continuous; • the non-linearities are differential-preserving; • σ from Definition 4 is point-wise normalized Lipschitz continuous, • Ω X n F is a bandlimited sheaf signal then it holds for each u = 1, 2, . . . , FL that: with the limit taken in probability. Proof. See Appendix A.2. Discretization in the Time Domain. The discretization in space introduced in the previous section is still not enough for implementing TNNs in practice. Indeed, from Definition 1, we should learn the continuous time functionh(t), and this is generally infeasible. To make TNNs and their training implementable, we discretize the functionh(t) in the continuous time domain with a fixed sampling interval Ts. We replace the filter response function with a series of coefficients h k =h(kTs), k = 0, 1, 2 . . . . With Ts = 1 and fixing K samples over the time horizon, the discrete-time version of the convolution in (6) can be thus written as which corresponds to the form of a finite impulse response (FIR) filter with shift operator e ∆ . We can now inject the space discretization in the finite-time architecture in (21), obtaining an implementable manifold filter on the discretized manifold (cellular sheaf) T Mn as h k e k∆n fn. NUMERICAL RESULTS We assess the consistency of the proposed framework by designing a denoising task 1 . We work on the unit 2-sphere (M = S2) and its tangent bundle. In particular, we uniformly sample the sphere on n points X = {x1, . . . , xn}, and we compute the corresponding cellular sheaf T Mn, Sheaf Laplacian ∆n and signal sampler Ω X n as explained in Section 5 (also obtainingd = 2). We consider the tangent vector field over the sphere given by depicted in Fig. 1 for a realization of X with n = 200. At this point, we add AWGN with variance τ 2 to iF obtaining a noisy field iF, then we use Ω X n to sample it, obtaining fn ∈ R 2n . We test the perfomance of the TNN architecture (implemented with a DD-TNN as in (23)) by evaluating its ability of denoising fn. We exploit a one layer architecture with 1 output feature (the denoised signal), and 5 filter taps. We train the architecture to minimize the MSE 1 n fn − fn,1 2 between the noisy signal fn and the output of the network fn,1 via the ADAM optimizer [31], with hyperparameters set to obtain the best results. We compare our architecture with a 1-layer Manifold Neural Network (MNN) architecture (implemented via a GNN as explained in [19]); to make the comparison fair, iF evaluated on X is given as input to the MNN, organizing it in a matrix Fn ∈ R n×3 . We train the MNN to minimize the MSE 1 n Fn − Fn,1 2 F , where F is the Frobenius Norm and Fn,1 is the network output. It is easy 1 https://github.com/clabat9/Tangent-Bundle-Neural-Networks Fig. 1: Visualization of the embedded tangent vector field iF to see that the "two" MSEs used for TNN and MNN are completely equivalent due to the orthogonality of the projection matrices Oi. 
In Table 1 we evaluate TNNs and MNNs for two different sample sizes (n = 200 and n = 800), for three different noise standard deviation (τ = 10 −2 ,τ = 5 · 10 −2 and τ = 10 −1 ), showing the (again equivalent) MSEs 1 n fn − fn,1 2 and 1 n Fn − Fn,1 2 F , where fn is the sampling via Ω X n of the clean field and Fn is the matrix collecting the clean field evaluated on X . The results are averaged over 5 sampling realizations and 5 noise realizations per each of them. As the reader can notice from Table 1, TNNs always perform better than MNNs, due to their "bundle-awareness". Moreover, the mean performance remains stable as the number of points decreases, but the variances increase, meaning that having more sampling points(thus a better estimation of the Connection Laplacian) results in a more stable decision of the network. CONCLUSIONS In this work we introduced Tangent Bundle Filters and Tangent Bundle Neural Networks (TNNs), novel continuous architectures operating on tangent bundle signals, i.e. manifold vector fields. We made TNNs implementable by discretization in space and time domains, showing that their discrete counterpart is a principled variant of Sheaf Neural Networks. The results of this preliminary work, in addition to the introduction of a novel tool for processing manifold vector fields, could lead to a deeper understanding of topological neural architectures in terms of transferability and stability, with the opportunity of designing proper signal processing frameworks on tangent bundles and cellular sheaves. We plan to investigate these problems as well as applying TNNs to real-world complex tasks. A. APPENDIX A.1. Proof of Proposition 1 Proposition 1. Given a tangent bundle signal F and a tangent bundle filter h(∆) as in Definition 1, the frequency representation of the filtered signal G = h(∆)F is given by: Proof. By definition of frequency representation in (8) we have: Injecting (7) in (27), we get: For the linearity of integrals and inner products, we can write: Finally, exploiting first the self-adjointness of ∆ and then the eigenvector fields definition in (5), we can write: which concludes the proof. A.2. Proof of Theorem 1 Theorem 1. Let X = {x1, . . . , xn} ⊂ R p be a set of n i.i.d. sampled points from measure µ over M ⊂ R p and F a bandlimited tangent bundle signal. Let T Mn be a cellular sheaf built from X as explained above, with = n −2/(d+4) . Let Ψu H, ·, · be the u − th output of a neural network with L layers parameterized by the operator ∆ of T M or by the discrete operator ∆n of T Mn. If: • the frequency response of filters in H are non-amplifying Lipschitz continuous; • the non-linearities are differential-preserving; • σ from Definition 4 is point-wise normalized Lipschitz continuous, • Ω X n F is a bandlimited sheaf signal then it holds for each u = 1, 2, . . . , FL that: with the limit taken in probability. Proof. We define an inner product for sheaf signals f and u on a general cellular sheaf T Mn as: and the induced norm ||f || 2 T Mn = f , f T Mn . Under the assumption that the points in X are sampled i.i.d. from the uniform probability measure µ given by the induced metric on M and that T Mn is built as in Section 5, the inner product in (32) is equivalent to the following inner product for tangent bundle signals F and U: and the induced norm ||F|| 2 T Mn = F, F T Mn , where µn = 1 n n i=1 δx i is the empirical measure corresponding to µ. 
Indeed, from (1) and due to the orthogonality of the transformations Oi in Section 5, (33) can be rewritten as: where fn = Ω X n F and un = Ω X n U, respectively. We denote with L 2 (T Mn) the Hilbert Space of finite energy tangent bundle signals w.r.t. the empirical measure µn (or, equivalently, the Hilbert Space of finite energy sheaf signals w.r.t the norm induced by (32)). In the following, we will denote the norm || · ||T Mn with || · || when there is no risk of confusion. We now define bandlimited sheaf signals, Lipshitz continous tangent bundle filters and non-amplifying tangent bundle filters. The non-amplifying assumption is reasonable, because the filter functionĥ(λ) can always be normalized. In [25], the spectral convergence of the constructed Sheaf Laplacian in (15) based on the discretized manifold to the Connection Laplacian of the underlying manifold has been proved, and we will exploit that result for proving the following proposition. Proposition 3. (Consequence of Theorem 6.3 [25]) Let X = {x1, . . . , xn} ⊂ R p be a set of n i.i.d. sampled points from measure µ over M ⊂ R p . Let T Mn be a cellular sheaf built from X as explained in Section 5, with = n −2/(d+4) . Let ∆n be the Sheaf Laplacian of T Mn and ∆ be the Connection Laplacian operator of M. Let λ n i be the i-th eigenvalue of ∆n and φ i n the corresponding eigenvector. Let λi be the i-th eigenvalue of ∆ and φ i the corresponding eigenvector field of ∆, respectively. Then it holds: where the limits are taken in probability. Proof. These proposition is a consequence of Theorem 6.3 in [16]. Indeed, we rely on the operator introduced in Definition 6.1 in [25] with α = 1 and hn = n −2/(d+4) (our ), here denoted as Γ : L 2 (T M) → L 2 (T M), and on the operator Γ = −1 Γ − id , where id is the identity mapping. It is straightforward to check that: for j = 1, . . . , n. We now show that the eigenvectors sampled on X and eigenvalues of Γ correspond to the eigenvectors and eigenvalues of ∆n. Let us denote the the i − th eigenvector and eigenvalue of Γ with φ n i and − λ n i , respectively. We have: If we apply the mapping i to the last two equalities of (38) and we exploit the orthoghonality of Oj, we obtain: where the second equality applies the definition of Ω X n in (16). Therefore, we have: with the limit taken in probability, i = 1, . . . , n. Injecting the empirical measure in (41) and exploiting the results in (34) and (40), we obtain: The results in (41) and (42) combined with the a.s. convergence of the empirical measure µn to the measure µ conclude the proof. For the sake of clarity, in the following we will drop the dependence on the NNs output index u; from the definitions of TNNs in (12) and D-TNNS in (19), we can thus write: Ψ H, ∆n, Ω X n F − Ω X n Ψ H, ∆, F = xn,L − Ω X n FL . Further explicating the layers definitions, at layer l we have: x n,l − Ω X n F l with x q n,0 = Ω X n F q for q = 1, . . . , F0. Exploiting the normalized point-wise Lipschitz continuity of the non-linearities and the linearity of the sampling operator Ω X n , we have: x n,l − Ω X n F l ≤ The difference term in the last LHS of (44) can be further decomposed for every q = 1, . . . , F l−1 as: h q l (∆n)x q n,l−1 − Ω X n h q l (∆)F q l−1 ≤ h q l (∆n)x q n,l−1 − h q l (∆n)Ω X n F q l−1 The first term of the last inequality in (45) can be bounded as x q n,l−1 −Ω X n F q l−1 with the initial condition x q n,0 −Ω X n F q 0 = 0 for q = 1, . . . , F0. 
Denoting the second term with D n l−1 , and iterating the bounds derived above through layers and features, we obtain: Therefore, we can focus on each difference term D n l and omit the feature and layer indices to simplify notation. Considering that F and Ω X n F are bandlimited, we can write the convolution operation as follows:
2022-10-28T01:15:56.194Z
2022-10-26T00:00:00.000
{ "year": 2022, "sha1": "9efb6887431d994495e5fca167cc7e163a1d75b0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c789f1b7321e35b636dafba3e82d8c02f5a5e0a0", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
209450153
pes2o/s2orc
v3-fos-license
COMPUTATIONS FOR COXETER ARRANGEMENTS AND SOLOMON’S DESCENT ALGEBRA II: GROUPS OF RANK FIVE AND SIX . In recent papers we have refined a conjecture of Lehrer and Solomon expressing the character of a finite Coxeter group W acting on the graded components of its Orlik-Solomon algebra as a sum of characters induced from linear characters of centralizers of elements of W . The refined conjecture relates the character above to a decomposition of the regular character of W related to Solomon’s descent algebra of W . The refined conjecture has been proved for symmetric and dihedral groups, as well as for finite Coxeter groups of rank three and four. In this paper, we prove the conjecture for finite Coxeter groups of rank five and six. The techniques developed and implemented in this paper provide previously unknown decompositions of the regular and Orlik-Solomon characters of the groups considered. Introduction Let (W, S) be a finite Coxeter system.In previous articles [2][3][4] we proposed a conjecture relating the character ω of the Orlik-Solomon algebra of W to the regular character ρ of W. Based on a conjecture of Lehrer and Solomon [9], in this paper we prove the conjecture for the Coxeter groups of type B 5 , B 6 , D 5 , D 6 , and E 6 .These computations, together with the remarks about reducible Coxeter groups following Theorem 2.3 of [2] and the proof of the conjecture for groups of type A in [3], prove the conjecture for all finite Coxeter groups of rank five and six.Our result is stated for these groups as the following theorem. Theorem 1. Suppose that W is a finite Coxeter group of rank five or six and that R is a set of conjugacy class representatives of W. Then for each w ∈ R there exists a linear character ϕ w of C W (w) such that ρ = w∈R Ind W C W (w) ϕ w and ω = w∈R where is the sign character of W and for z ∈ C W (w), α w (z) denotes the determinant of the restriction of z to the 1-eigenspace of w in the complex reflection representation of W. Let A (W) be the Orlik-Solomon algebra of W. The strategy for proving Theorem 1 is to decompose the CW-modules CW and A (W) into direct sums and prove a refinement of Theorem 1 for each summand.This method is somewhat stronger than directly proving Theorem 1, because it requires the solution to be compatible with the direct sum decompositions of CW and A (W).This method also has the advantage that it splits the problem into smaller problems and provides additional insight into how the representations of W on CW and on A (W) are related. The decomposition of CW comes from idempotents e λ in the descent algebra Σ (W) constructed in [1].These idempotents are indexed by subsets of S up to conjugacy in W. A class of conjugate subsets of S is called a shape of W. Denote the set of shapes of W by Λ.In [1] it is shown how to construct a quasi-idempotent e L for any L ⊆ S and then e λ is the sum of the quasi-idempotents e L where L runs over the subsets in the shape λ.It is also shown that {e λ | λ ∈ Λ} is a complete set of primitive orthogonal idempotents of Σ (W).Since Σ (W) is a subalgebra of CW and 1 = λ∈Λ e λ , we conclude that CW = λ∈Λ e λ CW as a CW-module.Denoting the character of e λ CW by ρ λ we have The corresponding decomposition of A (W) comes from Brieskorn's Lemma.Let T be the set of reflections in the complex reflection representation V of W. 
Recall that the Orlik-Solomon algebra A (W) may be defined as the quotient of the exterior algebra with generators {e t | t ∈ T } by the ideal generated by elements of the form k i=1 (−1) i e t 1 e t 2 • • • e t i • • • e t k for all sets {t 1 , t 2 , . . ., t k } ⊆ T of linearly dependent reflections.Here, we say that a set of reflections is linearly dependent if the linear forms defining their reflecting hyperplanes are linearly dependent in the dual of V.For t ∈ T we denote the image of the generator e t in A (W) by a t .Thus, an arbitrary element of A (W) can be expressed as a linear combination of monomials Each monomial a t 1 a t 2 • • • a t k determines a subspace of V, namely the intersection of the fixed point spaces of the reflections t 1 , t 2 , . . ., t k .If X is a subspace of V, then we denote by A X the span of all monomials with fixed point space equal to X. Then taking A λ to be the sum of the A X for which X is the fixed point set of some conjugate of W L for some L ∈ λ, we have a decomposition A (W) = λ∈Λ A λ .Denoting the character of A λ by ω λ we have Finally, we choose a set of conjugacy class representatives compatible with the decompositions above.A conjugacy class in W L is called cuspidal if the fixed point set in the reflection representation of W L of any of its elements is trivial.Now if we choose a fixed representative L (λ) of each shape λ and let C L(λ) be a set of representatives of the cuspidal classes in W L(λ) , then by Theorem 3.2.12 of [5] ( Suppose that L ⊆ S. The homogeneous component of A (W L ) of highest degree is called the top component of A (W L ).On the other hand, W L is also a Coxeter group and thereby admits a system of quasi-idempotents as in [1], now denoted by e L J for J ⊆ L to distinguish them from the quasi-idempotents e J in CW.Note that this notation does not agree with that used in [ Consider the following refinement of Theorem 1.In the statement of this theorem we use the fact, proved in [8,Theorem 3 Theorem 2. Suppose that (W, S) is a finite Coxeter system of rank five or six, that L ⊆ S, and that C L is a set of representatives of the cuspidal conjugacy classes of W L .Then for each w ∈ C L there exists a linear character ϕ w of C W (w) such that where for n ∈ N W (W L ), α L (n) denotes the determinant of the restriction of n to the subspace of fixed points of W L in V. To prove Theorem 1 we prove Theorem 2 for the representative L (λ) of each shape λ.Then the characters ϕ w that satisfy Theorem 2 with L = L (λ) as λ varies over all shapes prove the first equality of Theorem 1 because where the last equality follows from transitivity of induction and (1.2).A similar argument proves the second equality in Theorem 1.We prove Theorem 2 in §3 and §4. Implementation As in [2], we have implemented the calculations for this article in the computer algebra system GAP [11] in conjunction with the CHEVIE [6] and the ZigZag [10] packages.In addition to our comments about the implementation in [2] we make the following remarks about the techniques new to this paper and improvements to old techniques. 2.1.The Extension ρ L .In this subsection we develop a formula for the character ρ L of N W (W L ) for L ⊆ S. First we review the definitions of the constructions used in the process. If J ⊆ L then the parabolic transversal of W J in W L is the set X L J of elements w ∈ W L satisfying (sw) > (w) for all s ∈ J, where is the usual length function of W with respect to S. 
In order to use the formula for ρ̃_L below, we need to be able to decompose an element of N_W(W_L) into the product of an element of W_L and an element of the normalizer complement N_L of W_L. Recall that N_L consists of certain elements of the parabolic transversal X^S_L of W_L in W. Therefore, the decomposition of an element of N_W(W_L) into a product nw with n ∈ N_L and w ∈ W_L is a special case of the more general decomposition of an element of W into a product of a coset representative in X^S_L by an element of W_L. In ZigZag this decomposition is implemented as the ParabolicCoordinates function.

The quasi-idempotents e^L_J are defined in [1] by means of the matrix M = (m_{KJ}), whose rows and columns are indexed by the subsets of S; we refer to [1] for the definition of the entries m_{KJ}. The matrix M can be calculated directly from the definition or by calling the method Mu supplied by the ZigZag package. Then, writing
$$e^L_L = \sum_{y \in W_L} a_y\, y,$$
the coefficient a_y can be derived from M using the descent set D(y) = {s ∈ L | ℓ(sy) < ℓ(y)}: one obtains the formula $a_y = \sum_{D(y) \subseteq L \setminus J} n_{LJ}$ for a_y, where the numbers n_{LJ} are determined by the entries of M.

Now let wn ∈ N_W(W_L) with w ∈ W_L and n ∈ N_L, and let x ∈ CW_L. Observe that N_W(W_L) acts on CW_L on the right by x.(wn) = n^{-1}xwn. Using this action we define the map γ(wn, x) : CW_L → CW_L by γ(wn, x)(v) = (xv).(wn) = n^{-1}xvwn for v ∈ CW_L. The idempotent e^L_L ∈ CW_L determines a CN_W(W_L)-stable decomposition of the group algebra of W_L, CW_L = e^L_L CW_L ⊕ (1 − e^L_L) CW_L. Calculating the trace of the action of γ(wn, e^L_L) with respect to a basis of CW_L adapted to this decomposition, we find that ρ̃_L(wn) = Tr γ(wn, e^L_L), since γ(wn, e^L_L) sends (1 − e^L_L) CW_L to e^L_L CW_L. Using the linearity of γ in its second argument (and the fact that w^{-1} ∈ O_n(y) if and only if y ∈ O_n(w^{-1}), in which case the value of Tr(γ(wn, y)) is given by the calculation above), we obtain the following formula:
$$\tilde\rho_L(wn) = \sum_{y \in W_L} a_y \operatorname{Tr}(\gamma(wn, y)).$$

2.2. The Extension ω̃_L. In this subsection we discuss the calculation of ω̃_L for L ⊆ S. As this calculation is almost identical to the calculation of ω, we begin with ω and discuss the minor modifications needed to calculate ω̃_L at the end. For computational purposes, rather than working with the set T of reflections in W, it is simpler to work with the positive roots of W. The roots are stored in CHEVIE as vectors in the roots component of a Coxeter group record, the first half being the positive roots and the second half being the negatives of the first half. This means that whenever a calculation involving roots results in a negative root, we need to replace the negative root with its positive counterpart. With this convention the generator a_t of A(W) is denoted by a_r, where r is the positive root orthogonal to the hyperplane fixed by t. To simplify the notation, we will denote a_r simply by r. This also reflects the way one implements A(W) on a computer. Namely, the elements of A(W) are represented by linear combinations of sequences r_1 r_2 ⋯ r_q of positive roots. We will also assume that any element r_1 r_2 ⋯ r_q satisfies r_1 < r_2 < ⋯ < r_q, explicitly sorting the factors and inserting the appropriate sign ±1 whenever the factors become unsorted. Here < denotes a fixed total order on the positive roots, which can simply be taken to be the order in which the roots appear in the roots component of the record for W.
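The sorting convention just described can be implemented in a few lines. The following Python helper (our own illustrative sketch, not the GAP code behind the paper's computations) brings a sequence of root indices into increasing order while tracking the sign of the permutation used, and returns zero when a factor repeats, since squares of generators vanish in the exterior algebra.

```python
def normalize_monomial(seq):
    """Sort a monomial r_1 r_2 ... r_q of positive roots, given as a
    sequence of integer indices in a fixed total order.

    Returns (sign, sorted_tuple), where sign is the parity of the
    sorting permutation, or (0, ()) if some factor is repeated.
    """
    seq, sign = list(seq), 1
    # Insertion sort; every adjacent swap contributes a factor -1.
    for i in range(1, len(seq)):
        j = i
        while j > 0 and seq[j - 1] > seq[j]:
            seq[j - 1], seq[j] = seq[j], seq[j - 1]
            sign = -sign
            j -= 1
    # A repeated factor makes the monomial zero.
    if any(seq[k - 1] == seq[k] for k in range(1, len(seq))):
        return 0, ()
    return sign, tuple(seq)

# Examples: r_3 r_1 = -(r_1 r_3), and r_2 r_2 = 0.
assert normalize_monomial([3, 1]) == (-1, (1, 3))
assert normalize_monomial([2, 2]) == (0, ())
```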
Now since CHEVIE implements the element w of W as a permutation σ_w of the roots in V, it follows that if t is the reflection defined by the root r, then the conjugate w^{-1}tw is the reflection defined by r.σ_w, which we simplify to r.w. Therefore, the action of W on A(W) is given by
$$(r_1 r_2 \cdots r_q).w = (r_1.w)(r_2.w) \cdots (r_q.w).$$
We use the non-broken circuit basis B of A(W) described in [2] to calculate its character ω. While this works exactly as in [2], we briefly describe some improvements to the algorithm that make the calculations in this paper possible. Let n = |S| be the rank of W and recall that the non-broken circuit basis of A(W) consists of the monomials r_1 r_2 ⋯ r_q not containing certain sequences called broken circuits as subsequences. A broken circuit r_{i_1} r_{i_2} ⋯ r_{i_q} has the property that there exists a positive root r with r > r_{i_q} for which r_{i_1} r_{i_2} ⋯ r_{i_q} r is dependent, so the defining relation for A(W) implies that
$$r_{i_1} r_{i_2} \cdots r_{i_q} = \sum_{j=1}^{q} (-1)^{q-j}\, r_{i_1} \cdots \widehat{r_{i_j}} \cdots r_{i_q}\, r. \tag{2.1}$$
Therefore, any element not in B can be expressed as a linear combination of lexicographically larger elements of A(W) by applying (2.1) to a broken circuit subsequence. This observation is the rationale for the procedure for expressing an arbitrary element of A(W) in terms of the non-broken circuit basis, but it also leads to a significant improvement in the calculation of ω. Namely, to calculate the value of ω at an element w ∈ W, one in principle runs through all basis elements b ∈ B, expressing b.w as a linear combination of elements of B using (2.1) and storing the coefficients of the result in the rows of a matrix m. Then m represents the linear transformation w of A(W) and ω(w) is the trace of m. We observe that if at any point in the calculation of b.w we arrive at a monomial lexicographically larger than b, then this monomial cannot contribute to the trace of m. Such calculations can therefore be terminated. Furthermore, the matrix m itself exists only in concept. In practice we need only its diagonal entries. Therefore, we use the following algorithm: COEFF (individual coefficient with respect to B). With respect to the non-broken circuit basis B of A(W), this algorithm takes as input a monomial a = r_1 r_2 ⋯ r_q ∈ A(W) and a basis element b ∈ B, and it returns the coefficient of b when a is expressed with respect to B. Observe that in the last line of the algorithm we have inserted r at the end of the first argument of COEFF for notational convenience; moving the factor to its proper position will introduce a sign ±1. Then to calculate ω(w) we simply calculate
$$\omega(w) = \sum_{b \in B} \operatorname{COEFF}(b.w,\, b).$$
Finally, to calculate the character ω̃_L for L ⊆ S we calculate the non-broken circuit basis of the top component of A(W_L). Observe that an element w ∈ N_W(W_L) is implemented as a permutation σ_w of the roots in V, so to apply w to an element r_1 r_2 ⋯ r_q of A(W_L) each r_i must be replaced with its corresponding root in V. In CHEVIE this can be accomplished with the rootInclusion component of the W_L record. Then the permutation σ_w can be applied directly, followed by replacing each root with the corresponding root in the reflection representation of W_L using the rootRestriction component of the W_L record. With this modification, we proceed exactly as in the calculation of ω above.
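Since the pseudocode of COEFF is not reproduced in the text, the following Python sketch is our reconstruction of the recursion from the description above. It reuses normalize_monomial from the previous sketch, represents monomials as strictly increasing tuples of positive-root indices, and abstracts the root-system data into an is_dependent oracle supplied by the caller; the comparison a > b implements the early termination, because rewriting by (2.1) only ever produces lexicographically larger monomials.

```python
from itertools import combinations

def coeff(sign, a, b, is_dependent, roots):
    """Coefficient of the NBC basis element b in the expansion of
    sign * a; a and b are strictly increasing tuples of root indices,
    roots lists all positive-root indices in increasing order, and
    is_dependent decides linear dependence of a tuple of roots.
    Uses normalize_monomial from the previous sketch."""
    if a == b:
        return sign
    if a > b:
        # Early termination: a can no longer contribute to b.
        return 0
    # Look for a broken circuit c inside a with a witness root r.
    for q in range(1, len(a) + 1):
        for c in combinations(a, q):
            for r in roots:
                if r > c[-1] and r not in c and is_dependent(c + (r,)):
                    rest = tuple(x for x in a if x not in c)
                    # In the exterior algebra, a = sigma * (rest c).
                    sigma, _ = normalize_monomial(rest + c)
                    total = 0
                    for j in range(q):  # apply relation (2.1) to c
                        term = rest + c[:j] + c[j + 1:] + (r,)
                        s, t = normalize_monomial(term)
                        if s:
                            total += coeff(sign * sigma * (-1) ** (q - 1 - j) * s,
                                           t, b, is_dependent, roots)
                    return total
    return 0  # a lies in B but differs from b

# Toy example: roots 1 < 2 < 3 with {1, 2, 3} dependent (as in type A2),
# so the monomial (1, 2) is a broken circuit and (2.1) gives
# r_1 r_2 = r_1 r_3 - r_2 r_3:
dep = lambda rs: len(rs) >= 3
assert coeff(1, (1, 2), (1, 3), dep, (1, 2, 3)) == 1
assert coeff(1, (1, 2), (2, 3), dep, (1, 2, 3)) == -1
```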
Proof of Theorem 2 when L = S Observe that if L = S then ρ̃_S = ρ_S, ω̃_S = ω_S, and N_W(W_L) = W. Observe also that α_S(w) = 1 for all w ∈ W since the space of fixed points of W is the zero subspace of V. Therefore, to verify Theorem 2 we need to find a character φ_w of C_W(w) for each w ∈ C_S such that
$$\rho_S = \sum_{w \in C_S} \operatorname{Ind}_{C_W(w)}^{W} \varphi_w \qquad\text{and}\qquad \omega_S = \varepsilon \sum_{w \in C_S} \operatorname{Ind}_{C_W(w)}^{W} \varphi_w. \tag{3.1}$$
In this section we exhibit these characters for each irreducible Coxeter group W of rank five or six. Once the characters φ_w are specified, one verifies (3.1) by routine calculations, so we limit ourselves to displaying the characters Ind_{C_W(w)}^{W} φ_w (denoted simply by φ^W_w), together with ρ_S and ω_S, only for the group W = W(E6). Because each character φ_w is one-dimensional, it suffices to list its values on a generating set for the group C_W(w). For the group W(E6) we have constructed generating sets for the groups C_W(w) ad hoc. In type B generating sets for C_W(w) are known, while in type D generating sets for C_W(w) can be determined as described below. We use the notation for these generating sets from [2], which we now briefly review.

The cuspidal classes of W(B_n) are indexed by partitions of n. We always display partitions in non-decreasing order without punctuation. With the elements of S labeled 1, 2, …, n as in the Coxeter diagram of W(B_n), we define the elements c_i (3.2) and x_i (3.3) of W(B_n), where we denote the elements of S by 1, 2, …, n rather than s_1, s_2, …, s_n to improve legibility. Then each c_i centralizes the element w_λ = c_1 c_2 ⋯ c_k, which we take to be the representative of the cuspidal class labeled by λ. Whenever λ_i = λ_{i+1}, the element x_i also centralizes w_λ, and C_{W(B_n)}(w_λ) is generated by the elements x_i for all i satisfying λ_i = λ_{i+1}, together with the elements c_{m(j)} for all j appearing as parts of λ. We remark that the elements defined in (3.2) and (3.3) coincide with the elements c_i and x_i defined in [2]. The character φ_{w_λ} of C_W(w_λ) is denoted simply by φ_λ.

We view W(D_n) as a reflection subgroup of W(B_n) generated by the reflection 121 together with the reflections 2, 3, …, n. Then w_λ ∈ W(D_n) whenever λ has an even number of parts. In fact, such elements w_λ are representatives of the cuspidal classes of W(D_n), and the centralizer C_{W(D_n)}(w_λ) is the intersection C_{W(B_n)}(w_λ) ∩ W(D_n). We observe that the last factor j + k − λ_i + 1 of (3.3) is at least λ_i + 1 and that the other factors are greater than j + k − λ_i + 1. This means that 1 is never a factor of x_i, so that x_i ∈ W(D_n). However, (3.2) shows that 1 occurs as a factor of c_i exactly once, making a rewriting of c_i as a word in 121 and 2, 3, …, n impossible. This shows that c_i ∉ W(D_n). Nevertheless, generators of C_{W(D_n)}(w_λ) can often be found among products of an even number of the elements c_i.

In each of the following subsections we present the results of our calculations for the finite irreducible Coxeter groups of rank five and six. For each cuspidal class representative w we display a generating set of C_W(w), where the generators are written as words in the Coxeter generators. At each generator, we display the value of the character φ_w. If ζ is an eigenvalue of w on V, we denote the determinant of the representation of C_W(w) on the ζ-eigenspace of w in V by det|_ζ. If φ_w is a power of det|_ζ for some ζ, then we also indicate this. By Springer's theory of regular elements [12], the centralizer C_W(w) is a complex reflection group when w is a regular element. When this is the case, we identify C_W(w) as such a group. For n ≥ 1 we denote the n-th root of unity e^{2πi/n} by ζ_n, the cyclic group of size n by Z_n, and the symmetric group on n letters by S_n.

3.1. W = W(E6). We begin with W(E6) and present the calculations that lead to the proof of Theorem 2 for this group. For the other groups of rank five and six we present only the basic information described above.
Define the characters φ_d = φ_{w_d} in the following table, where the conjugacy classes of W are labeled by their Carter diagrams d. Here the elements of S are labeled as in the Coxeter graph and r denotes the reflection defined by the highest root of W. Finally, the values of the characters φ^W_d together with ρ_S and ω_S are shown in the following table.

3.2. W = W(B5). The characters defined in the following table satisfy Theorem 2 for W = W(B5) when L = S. [Table: λ | Gen Word | φ_λ | C_W(w_λ) | Det]

3.3. W = W(B6). The characters defined in the following table satisfy Theorem 2 for W = W(B6) when L = S.

3.4. W = W(D5). The characters defined in the following table satisfy Theorem 2 for W = W(D5) when L = S.

3.5. W = W(D6). The characters defined in the following table satisfy Theorem 2 for W = W(D6) when L = S.

Proof of Theorem 2 when L is a proper subset of S Recall that the normalizer in W of W_L factors as the semidirect product of W_L and a normalizer complement N_L [7]. When the semidirect product is a direct product, W_L is called bulky. It is shown in [4] that Theorem 2 holds if either W_L is bulky or the rank of W_L is two or less. Also, it is shown in [3] that Theorem 2 holds if W_L is a direct product of Coxeter groups of type A. Thus, to prove Theorem 1 it suffices to prove Theorem 2 for all pairs W, W_L where the rank of W is five or six and L is a proper subset of S for which the following hold: (1) W_L is not bulky in W, (2) W_L has rank at least three, and (3) W_L is not a direct product of Coxeter groups of type A. After consulting the table of bulky parabolic subgroups in [2], it remains to consider the pairs shown in Table 1.

Table 1. List of pairs W, W_L to be considered for Theorem 2

We consider each such pair W, W_L separately in the following subsections. For each pair we indicate representatives of the cuspidal conjugacy classes of W_L, generators of the centralizers of these representatives, and linear characters of the centralizers that satisfy the conclusion of Theorem 2. Additionally, we also give the values of ρ̃_L, ω̃_L, and α_L for the pair W(B5), W(A2 B2) and the pair W(E6), W(D4). In the following sections we use the symbol w_n to denote a representative of the n-th conjugacy class of a group in the list of conjugacy classes returned by the command ConjugacyClasses in GAP. We denote the longest element of W by w_0 and the longest element in W_L by w_L. As in §3 the symbols 1, 2, …, n denote the elements of S.
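The "routine calculations" that verify identities such as (3.1), and likewise the conclusion of Theorem 2, ultimately reduce to arithmetic with induced characters. The following Python sketch (ours, independent of GAP, CHEVIE and ZigZag) computes an induced class function from class-level data: the values of a class function on the classes of a subgroup H, the centralizer orders in H and in G, and the fusion map sending each H-class to the G-class containing it. Exact rational arithmetic suffices for the toy example; characters with irrational values would require cyclotomic arithmetic, as used in GAP.

```python
from fractions import Fraction

def induce_class_function(phi, cent_H, fusion, cent_G):
    """Values of Ind_H^G(phi) on the classes of G, computed from
       (Ind phi)(g) = |C_G(g)| * sum over the H-classes fusing into
       the class of g of phi(h) / |C_H(h)|.

    phi    : values of the class function on the classes of H
    cent_H : centralizer orders |C_H(h)| for the classes of H
    fusion : fusion[i] = index of the G-class containing H-class i
    cent_G : centralizer orders |C_G(g)| for the classes of G
    """
    ind = [Fraction(0)] * len(cent_G)
    for i, j in enumerate(fusion):
        ind[j] += Fraction(phi[i], cent_H[i])
    return [cent_G[j] * ind[j] for j in range(len(cent_G))]

# Toy check: inducing the trivial character of the alternating group
# A3 up to S3 (classes: e, transpositions, 3-cycles) gives (2, 0, 2),
# which is the sum of the trivial and the sign character, as expected.
print(induce_class_function([1, 1, 1], [3, 3, 3], [0, 2, 2], [6, 2, 3]))
# -> [Fraction(2, 1), Fraction(0, 1), Fraction(2, 1)]
```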
2013-03-08T16:17:58.000Z
2012-01-23T00:00:00.000
{ "year": 2012, "sha1": "ca7ca3a276920a790060a6e901f1e7e9260948bf", "oa_license": "publisher-specific-oa", "oa_url": "https://doi.org/10.1016/j.jalgebra.2012.11.047", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "57ae53eb8f26ca3694de811f7c5808ae65cda294", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
249488082
pes2o/s2orc
v3-fos-license
Absence as an affordance: thinking with(out) water on the inland waterways This article extends our understanding of inland waterways by theorising the temporary absences of water in canals and rivers as possibilities for action, that is, affordances. The interplay of temporary absence and presence of water in the inland waterways provides a range of potentialities for various activities and practices. Affordance theory can help us to further theorise material absences and position them as important elements of performing, practicing and interpreting place. We show how temporary absence of water can create spatial, historical and communicative affordances, affording the movement of boats, revealing and recreating the past and raising environmental awareness. The paper is based on semi-ethnographic research on the rivers and canals in the United Kingdom and Italy, featuring document analysis, participant observation and semi-structured interviews with various waterway users. Introduction Water has been utilised, transformed and adapted by humans throughout history in a vast range of ways for different purposes be they physical, biological, social, economic or cultural. We work, live and spend time on and near water, enjoying its nearly universal appeal to human sensibilities 1 -yet water can also represent contamination, pollution, decay or danger. 2 The power of water lies in its socio-cultural as well as 'elemental' force and potential to cause damage in the form of its abundance, such as floods, as well as cause problems through its scarcity, as is the case with droughts. For these reasons, water has started attracting more attention in the social sciences recently, 3 even if there has been less focus on the inland water bodies compared to maritime and oceanic waters. 4 As we will show in this paper, water is powerful not only when present but also when absent. We will focus on the interplay of the material presence and absence of water within the inland waterways as spaces where presence includes absence and absence includes presence. 5 Geography, along with the other social sciences, is indeed 'mostly concerned with what is present, observable, tangible and measurable'; this is particularly the case with the studies published within the larger 'material turn' or 'new materialist' approaches. 6 Nevertheless, recent decades have seen growing theorisation of the concept of absence as a mediating phenomenon between the material and immaterial. 7 Some of these works are inspired by Derrida's 8 deconstructionist hauntology and spectrality. 9 Another strand of this research focussing on absences takes the phenomenological approach, dealing with various relational material absences, as well as the built environment. 10 As Meier et al. 11 note, absence 'arises only, and without exception, in lived experience'. They point out that trying to draw a clear line between the materialist and phenomenological may not always be possible or desirable. In this paper, we take this as our starting point and suggest that affordance theory could help us to further think about material absences and position them -and their vast range of potentials -as important elements of constructing, performing, practicing and interpreting space and place. Indeed, affordances, possibilities of action, 12 are determined not only by the physical characteristics of the environment, but also depend on our individual perceptions of it. 
13 Affordances can be concealed or visible, obsolete or current: a specific affordance is only evident for a particular individual based on their life experience, competences or socio-cultural background. 14 We argue that affordances do not only emerge in presences, but also absences, exemplifying this in our study of the temporary absence of water in rivers and canals. While there have been studies utilising affordance theory in order to apprehend watery materialities, for instance in terms of the household water consumption, 15 no dedicated attention has been paid to whether there are any affordances in water's temporary absence, that is, in the interplays of the presence and absence of water. We will ask: can some instances of water's temporary absence in the canals and rivers of the UK and Italy be understood as affordances? The paper is structured as follows: first, we will give a theoretical overview of absences and affordances, especially in relation to water. After introducing our study methods and data collection (document analysis, participant observation and semi-structured interviews), we will present our results, showing how the interplays of presences and absences of water on the British and Italian inland waterways can afford movement of boats, reveal and recreate the past and raise environmental awareness. The paper concludes with the discussion on how understanding absence as an affordance -affording mobility, historicity and communication -can help us further our theoretical thinking about watery materialities in particular landscapes and environments. The absences of material presences Not only are there absences in every presence, but also presences in every absence. 16 This is not paradoxical, since 'what is absent comes to be present not in and for itself, but only insofar as it is attended to as being absent' -the awareness of absence is a necessary precondition for understanding the relationship between these two concepts. 17 This becomes especially clear in places such as cemeteries and museums, where, confronted with absence of the deceased people and the past lifeworlds, we often engage with their absence in embodied ways. As Meyer and Woodthorpe 18 argue, in sites like these, absence is agentive since they 'do something to and something with the absent -transforming, freezing, materialising, evoking, delineating, enacting, performing, and remembering' it. Derrida's 19 hauntological approach also applies criticism to the exclusive focus on that which is present and similarly insists on rejecting the binary approach to notions of presence and absence. Instead, we are encouraged to engage with the relational temporalities and spatialities of spectrality. 20 People derive their understanding of absence and presence largely from the space-time-related distinctions of here and there, and near and far. 21 However, as Wylie 22 notes, the hauntological approach 'at one and the same time displaces, and is the condition of, received understandings of the constitution of space and time, presence and absence. The spectral is thus the very conjuration and unsettling of presence, place, the present, and the past'. In this sense, an absence of something is a form of perception and knowledge that also involves the embodied and sensory. Sometimes we can see, touch, hear, smell or taste the absence. Absent water invites us to focus on 'the concrete side of absence', 23 on its materiality. 
With the help of the geographies of ruins and hauntings 24 it is useful to consider material absences simultaneously tangible and intangible: meaningful, affective, embodied and experiential, not things in themselves, but relational and ever-emergent conditions. 25 Edensor defines these key notions of material absence as follows: Material absence may be signified in numerous ways: a gap clearly identifies that which is not there; a replacement heralds the absence of an original, especially where it stands out from its older co-constituents; a residue or trace reveals the absence of that which caused it or to which it belonged; a thing may have altered over time to become a shadow of its former self; a repaired and renovated thing may contrast with its former appearance and constituency; [-] knowledge and belief about a durable thing and its style and purpose may evaporate; and the associations present only in memory about the meaning of a thing may be superseded, whether as part of embodied memory or reverie. 26 Edensor discusses how a number of buildings and infrastructure in Manchester, some demolished and disappeared, some modified and some in ruins, embody the absent presence of the area's working class. The ever-changing temporal patchwork of past and present usages is how 'the past haunts the presence by its absence [-] and especially possesses its mundane spaces'. 27 In order to fully engage with the absent materialities, however, it is important to clarify the key distinction between present absences and absent presences, namely that '"the presence of absence" refers to the absence itself, while the other expression, "the absence of presence", refers to the missing thing'. 28 Absence can be a powerful signifier, as well as a tool for constructing space into something that is familiar or desirable. In a national park in British Columbia, for instance, 'nature' is constructed through the 'constitutive absences' 29 of people, built environment, modern culture and technologies by the managers and visitors of the park -but complicated by the presence of the First Nations. These ways of re-constructing nature have historical-geographic implications. The notion of absence therefore invites deeper analysis of environmental politics, cultural representations, spatialities and materialities. As Bille, Hastrup and Sørensen 30 argue, 'what may be materially absent still influences people's experience of the material world'. They also note that an absence can have the same effect and concreteness as a presence because 'the absent can have just as much of an effect upon relations as recognisable forms of presence can have'. 31 It is therefore the absences of presence -the absent presences -upon which our paper focusses. Concentrating on the material and temporary absence of water in waterways, we study the 'oscillation or a flickering between present-presence and absent-presence'. 32 Former industrial canals with their absent industries can be converted into leisure spaces or cultural heritage sites to tell stories of urban regeneration, whilst the presence of new users can contribute to removing certain practices or people from the waterways. 33 Furthermore, there are some absences on the waterways that are seen or felt only by those possessing particular information or local knowledge, such as former workers or people familiar with the local history. 
The subsequent water-landscape imaginaries are therefore not merely constituted by what is explicitly on display, but also by (temporary) absent presences that allow particular actions and practices. As Fowles argues, 'absences push back and resist. They prompt us into action. And like present things, absences also have their distinctive affordances and material consequences that are not only prior to meaning but can, of their own accord, direct the process of signification itself'. 34 We will therefore now turn our attention to the notion of affordance. Watery affordances Affordance theory focusses on activities that can be enabled by a particular physical environment or its elements. According to Gibson, affordances are determined not only by the physical characteristics of the environment but depend heavily on perceptions of its inhabitants: 'What we perceive when we look at objects are their affordances, not their qualities. [-] [W]hat the object affords us is what we normally pay attention to', making an affordance 'equally a fact of the environment and a fact of behaviour'. 35 In wider social sciences, 'affordance' has recently become a useful concept for overcoming the nature-culture dualism, and it is strongly linked to notions of space, place, embodiment and multi-sensory experiences. In elaborating on his actor-network theory (ANT), Latour 36 also relied on the environmental psychology of Gibson, 37 discussing objects as agentive transformers and modifiers of both meaning and action. However, as put by Erofeeva, 'actor-network theory is object-centered as it emphasizes the visible performativity of technologies (inscriptions) and ignores the action possibilities provided by them. Meanwhile, contemporary affordance theory helps to extend the agency of things to their potential performances'. 38 Our understanding of affordance in this paper is largely based on the work of Ingold and Edensor. 39 Ingold theorises affordances as a relational way to overcome the dualistic thinking that compels people to juxtapose concepts like culture/nature or human/environment, and to focus on how people are attuned to the world that continually comes into being through their own activities, practices and corresponding competences, pointing out that 'perceiving an affordance, and acting in its realization, are not distinct and sequential but one and the same: to act is to attend'. 40 Furthermore, Edensor 41 highlights the spatial dimensions, defining affordances as 'spatial potentialities, constraining and enabling range of actions'. It is therefore important to pay attention to the material properties of the environments that afford -either permit or prevent -particular embodied practices and performances. As Edensor shows, affordances of particular environments both constrain and enable a range of actions: The surfaces, textures, temperatures, atmospheres, smells, sounds, contours, gradients, and pathways of places encourage humans -given the limitations and advantages of their normative physical abilities -to follow particular courses of action, producing an everyday practical orientation dependent upon a multisensory apprehension of place and space. 42 Boating, for instance, means negotiating and sometimes managing water, but also being controlled by it; water has the power to influence the boater and their mobility as well as to modify the canal landscape, as it becomes the means through which agencies are exercised.
43 Water is an independent non-human materiality that both demands and attracts human attention. While people can open and close the lock gates and steer the boats, all these activities would be futile without water 44 : 'water's capacities, affordances and behaviours co-generatively shape human bodies and people's social lives'. 45 Furthermore, 'water is not all the same', 46 for instance, the still, sometimes stagnant canal water differs considerably from the flowing, at times turbulent water in rivers, and the two have different affordances. Affordances also depend on particular perceptions and needs and are not merely physical properties of the environment: a waterway that affords a living environment to fish will afford catching that same fish to the human being. Affordances are therefore relational; they emerge dynamically in various interactions between humans, non-humans and materialities unfolding over time. They are also temporal since we can distinguish between 'obsolete' and 'present' affordances: the former are those that have been relevant in the past, but have been abandoned as currently irrelevant while the latter are in active use in the present. 47 This understanding of affordances opens the possibility that in some cases it is the absence of something, be it material or immaterial, that allows for an affordance to be realised. As we will show in this paper, the constant interplay of presences and absences can be instrumental to the development of certain affordances on the inland waters. Study methods and research context In the UK, the mostly privately owned canals were built for industrial purposes from the mid-18th century onwards and the 'enclosed canals were absently present "behind the wall" in [-] cities'. 48 In northeastern Italy, where the focal point of the canal and river network is Venice and its lagoon, construction of the canals started in the Middle Ages, and the publicly owned waterways were similarly used for moving goods and people. In the UK, there was a fast decline of inland waterways transport with the arrival of railways in the 19th century; however, in Italy the decline started only at the beginning of the 20th century. 49 Today, the majority of the UK canals are owned by the Canal and River Trust (a charity, est. 2012 after the dissolution of state-owned British Waterways), while in Italy the major rivers and canals were managed by a public institution, Magistrato alle Acque (Magistrate for the Waters) until 2014, when it was dissolved and the control went to the Ministry of Infrastructure and Transport. In both countries, canals and rivers are now used for leisure (boating, rowing, fishing, swimming, walking, cycling, etc.) but also for irrigation and hydroelectricity. In the UK, there is also a community of 'liveaboard' boaters dwelling on the inland waterways 50 ; this is not the case in Italy, where the working boaters carrying cargo (barcari) left the rivers in the 1950s and a contemporary liveaboard community never emerged. Our study adopts a semi-ethnographic method, drawing on a variety of secondary and primary sources and analysing data collected in the UK and Italy. The data collection included three main stages: document analysis, participant observation and semi-structured interviews. The first stage consisted of collecting secondary data: archival documents, media stories as well as various policy, management and marketing materials relating to the British and Italian inland waterways.
It also included online research on various local associations involved in river and canal conservation, navigation and management authorities, as well as boating, rowing, walking and cycling groups. This allowed us to develop the research questions for the next stage, collecting ethnographic data. The second stage of fieldwork, deploying participant observation, adopted mobile methods of data collection 51 and consisted of both fieldwalking and fieldboating on various stretches of rivers and canals, allowing us to experience the waterways both from the land and from the water (UK: Ashton Canal, Rochdale Canal, Bridgewater Canal, River Irwell, Manchester Ship Canal; Italy: Rivers Brenta, Piave, Adige, Sile and Bacchiglione and the Battaglia and Bisato Canals). In addition to these separate trips, the researchers undertook two walks together: along the Rochdale Canal in Manchester and along River Sile in Treviso. The walks and boat trips allowed us to explore the everyday geographies of the inland waterways and experience the history and everyday life of the canals and rivers, learn the skills of steering the boats and working the locks, experience being a passenger on trip boats, as well as to observe the plethora of different and sometimes contradicting practices performed by people travelling, living, working and volunteering along the waterways. We used field diaries, photos and voice memos to capture our observations, conversations, thoughts, feelings and reflections. These first two phases of data collection (documentary research and participant observation) in turn informed the third stage, in which we conducted 20 in-depth semi-structured interviews (10 in each country) with people involved in various activities both on and near inland water such as boating, walking, fishing, cycling, rowing, kayaking and volunteering. The ethical requirements of our respective universities were followed during the data collection. These different data were treated as text and analysed thematically using the NVivo data management software, identifying the recurring items and patterns within the data. Coding was initially undertaken by the two authors separately. After this, the authors compared and merged the initially identified codes and conducted the final level of data analysis together: identifying the relations and connections between the themes based on the identified codes via iterative and recursive reading and re-reading, coding and recoding of the data. 52 In this paper, we focus on one key theme that emerged from the coding process, the presence and absence of water, which we will discuss below, focussing especially on their interplay. Analysis and discussion: the affordances of absent water The absence of water presents us with several important affordances: the human manipulation of the absence (and presence) of water makes mobility possible, reveals the past and exposes the environmental situation. We will unpack these key affordances of the temporarily absent water in detail below. Spatial affordances: mobility Water is almost never present in its pure H2O form and is instead always a conglomeration of other entangled materialities, 53 subsequently negotiated and interpreted by the human actors.
54 The resulting watery landscapes combine both biological-physical as well as socio-cultural realities featuring moving water, people and various materialities in space 55 and are often physically heavily modified: artificial lakes, reservoirs and canals are constructed; locks, dams and flood defences are built. In these activities the presence and absence of water is regulated in terms of its flow rates, directions, quantities and can be understood in terms of mobility, socioculturally meaningful movement. With its lively, vibrant and changeable materiality, water is 'a physical manifestation and a powerful metaphor of the conceptual proposition that is mobility: at once affording movement of the humans and objects, whilst being constantly on the move itself'. 56 The absence or presence of a certain amount of water in the lock (a structure which is used to raise and lower the boats in the canal onto higher or lower terrain as required) is the key affordance on the waterways in terms of boatmobility. Put simply, the lock has to be emptied or filled by the boaters before the boat can go through; there, water, human bodies, the stone of the lock chamber walls, the wood and iron of the lock gates all come together in order to make it possible for the boat to continue its journey (see Figure 1). Boaters on the UK narrow canals operate these locks themselves. An important rule in the etiquette of boating is determined by the absence or presence of water in the lock: the boat, which does not have to fill or empty the lock to go in, has the right to enter the lock first, which is important for successful boating practice. One boater, Phil (67), explains the situation when two boats are approaching the lock from different directions: You don't steal the water from another boat -that's the simple rule. The lock is set for them, if they're at the top and the lock is full, then you let them go in. And when you approach a lock on thought to set it, if you're coming down and the lock is empty, you walk to the bottom gates and have a look to see if there's another boat coming up, before you start filling it. As is evident from the quote above, the boaters talk about 'empty' and 'filled' locks in terms of the amount of water present. A lock, of course, cannot be 'empty' in absolute terms (even when emptied for maintenance works). Absence of water in the lock is therefore relational, it is a matter of degree. From the boater's point of view, an 'empty' lock means that the potential for changing the boundaries of where the water can flow has been realised. Yet boating as an activity is characterised by the liquidness, the unpredictability of water, becoming especially evident when water is either absent or in short supply. If the lock is broken, the amount of water cannot be manipulated by the boater and the watery agency takes over, as the water flows freely. The physical work of opening and closing the gates and winding the paddles is a complicated embodied choreography, the sole purpose of which is to manipulate the interplay of the temporary absence and presence of water in the canal locks. This process requires the boater to collaborate with water, lock, the boat, as well as the necessary artefacts required in the form of specialist canal boating equipment. 
The end result from the boater's perspective is an 'empty' or 'full' lock where the relative absence or presence of water in the lock is accomplished by the human-material cooperation of winding the paddles, monitoring the water levels and pushing the lock gates. Here, the mobile qualities of canal water in the lock, which are usually characterised by a relative stillness, are sharply contrasted to the turbulence occurring around the filling and emptying of the locks when the water rushes in or out of the lock. If the lock is broken, however, the absence and presence of the water cannot be regulated, which means that the boat has to stay immobile. Absence of water in the canal, such as an empty pound (a stretch between two locks) can therefore have many meanings and corresponding practical implications. If the locks are abandoned and not maintained, the water will exercise its agency, flowing out of the canal. This, in turn, has direct consequences in terms of facilitating or disrupting the mobility of water and, consequently, boatmobility. Furthermore, the presence or absence of water is also determined by the presence or the absence of technological functionality and the political decisions of governing stakeholders that establish the particular affordances for mobility along the waterways. The absences or presences of water that afford (boat)mobility can therefore be interpreted in terms of the interaction and agency of myriad hybrid layers: material and elemental, technological, (local) knowledge-based, political, cultural and social. Historical affordances: interacting with industrial heritage and local histories In addition to allowing and directing boatmobility, water's temporary absence and presence in the waterways is also vital for other purposes, which can be utilitarian and practical but also culturally meaningful. In the broadest sense, the waterways that were used for various industrial as well as transport purposes later became characterised by the absence of the very usages or industries that created or needed them. Consequently, some canals were filled in, some were abandoned and partially drained of water both in the UK and Italy, thus becoming 'an illustration of the lingering "wreckage" inherent in (post) industrial modernity'. 57 Moving on to more specific, and localised absences, today, temporarily absent water in the canal affords the work necessary to care for the (heritage) infrastructure, as all the varied canal users, be they boaters, walkers, runners, cyclists, dog-walkers, anglers or commuters 'are dependent upon the work of more than human infrastructural assemblages'. 58 In order to carry out this care work towards the infrastructure, the various navigation authorities have the power to impose stoppages, which means draining a whole section of a canal of water for planned maintenance works, or to close certain lock(s) at certain times. In addition to the utilitarian function -the maintenance works are needed to enable the operational working -the draining serves another important function: revealing the otherwise normally hidden material elements of industrial heritage, making it available for consumption. The absence of water therefore affords the experience of the tangible and intangible heritage and local history: When I'm working with the construction people, you're actually getting into the heart of the canal. [-] And when you've got a stoppage and you drain the lock and take it apart, then you can actually see! 
And being a mechanical engineer, I'm interested in that sort of thing. How did they build them in 1780 like that? How can they still be working in 2016? That's what really interests me I think (Richard, 67, a retired engineer and a volunteer for a UK navigation authority). The Canal and River Trust (the biggest inland navigation authority in the UK) regularly holds open days during these works, to inform the public about its activities. The draining of the lock chambers, normally filled with water (albeit at different levels), becomes a visitor attraction, offering the backstage access of not only viewing the lock from the towpath but also walking inside the empty lock chamber. In making the regular maintenance works into an attraction, the Trust organised 'the World's first DJ set in a drained canal lock chamber [where] visitors [could] dance the night away to some much-loved Hacienda classics' 59 in the Rochdale Canal in Manchester in 2017. Here, the absence of water afforded the continuous human modification and alteration of the water-landscape and the canal's transformation into a visitor attraction and leisure space. The volunteers impersonated both historical and imagined characters from industrial and canal history, blurring 'the boundaries between supposedly stable ontological categories (e.g. living/dead, being/nonbeing and presence/absence)'. 60 The location further afforded engagement with more recent history, as the empty lock hosting the dance party is nearby the now demolished (in)famous music venue, The Hacienda, which was instrumental in forming the city's 1980s 'Madchester' music scene, and which, now physically absent, replaced with an apartment block, is still a spectral presence in the lives of many people living in the city. All this takes place in the context of recent urban change and gentrification, waterfront redevelopment and the emergence of new middle-class communities of the canal-side areas with their particular leisure and consumption practices. The temporary absence of water can also reveal 'obsolete affordances' 61 -some heritage and history only becomes visible when the water disappears. In the summer of 2018, due to the effect of drought combined with the intensive use of the river's water for agricultural purposes, the water levels in the River Adige, in Italy, were reduced significantly, revealing the remains of a number of burci, traditional Italian wooden riverboats, abandoned on the mud bottom of the waterway (see Figure 2). Today, there are no visible traces of the docks and the famous shipyard of the river village of Piacenza d'Adige, a former port on the extensive regional network of navigable waterways. The absence of water, however, revealed the past through '"knowing" people and places' 62 -in order to identify the site of the river village, the observer had to be able to recognise the particular type of boat in this particular location. For the untrained eye, however, these remains would not have meant anything more than just some abandoned boat carcasses at the bottom of a dried-up river. It is, therefore, the interchanging absence and presence of water that creates this particular affordance -an opportunity to observe the visible remnants of the history of this particular site. Its absences ask to be investigated, deconstructed and reconstructed, imagined and finally 'understood through the trace, an absent-presence gesturing to other traces in infinite deferral'. 
63 This particular historical affordance is realised only with the precondition of having knowledge about the local history, which allows 'attending to the material history of the waterways, collapsing the boundary between past and present'. 64 The ongoing transformation of rivers and canals, both intentional and unintentional, produces residual traces and clues of the past. The (temporary) scarcity of water, for instance, can be intentional, induced by a dam or a planned engineering intervention, or unintentional, caused by the natural fluctuation in the hydrological cycle, or a combination of both (as in the case of River Adige). The absence of water reveals the presence of heritage and history in this continuous repositioning; the obsolete affordances of carrying cargo are replaced by present affordances of consuming history and heritage. The former spaces of work and everyday life become visible and present when the water disappears, and as such, the absence of water becomes full of new meanings, while affording various acts and practices. The absent water in the canal or river can reveal present or obsolete properties of the watery place and therefore contribute to both a Ruinenlust and to a certain nostalgic imagination of the space and place as utilised and filled by past narratives. Communicative affordances: uncovering the watery layers As humans, we are somewhat unfamiliar with under-water spaces; however, we need to pay attention not only to the landscape 'surface' but also to what is hidden. 65 When a section of the Rochdale Canal was drained for maintenance works in 2017 in central Manchester, the sudden (relative) absence of water did not just reveal the normally submerged parts of canal locks. As the opaque water was drained from the section, it also uncovered the large quantities of rubbish (from beer bottles to bicycles) that had been lying at the bottom of this urban waterway. This made the documented, 66 yet somewhat abstract awareness of the pollution in the waterways tangible and visible. When we walk along, or boat on a river or canal, the opacity of the water as well as its sheer presence can mask what is underneath. A drained lock or a stretch of waterway (because of the drought, maintenance works, carelessness or vandalism) can therefore uncover and display pollution in the waterway. The presence of water can partially cover the litter while its (partial) absence can reveal the litter's presence. To restate, the absence and presence are not to be understood in an all-encompassing, absolute way, but instead absence and presence of water emerge relationally and sometimes rhythmically. During hot and dry summers, the water level can be very low, allowing people to visually consume landscapes that are normally submerged. When water is present, the flow (another important aspect linked to the interplay of presence and absence of water) carries lightweight rubbish downstream, hiding these traces of human consumption in some places and revealing them in others. When there is little water, the riverbanks are exposed and reveal that what is normally hidden, which can include litter that has remained stranded on the shrubs or trees that grow on the riverbanks. Therefore, the relative absence of water can reveal landscapes -and smellscapes -of presence-absence. On the River Sile, nearby the Venetian Lagoon, an association called 'Open Canoe Open Mind' works with young people to improve the environmental education of those living along the river by cleaning the waterway. 
Local nature guide and volunteer with the association, Cristian Bertolini, explains: Normally nobody sees what's under the water. Only when there is little water in the river people are able to understand how much waste there is. The data collected over the past 2 years is clear, we recover 35 kg per month between Sant'Elena and Musestre. Over 80% of these are recyclable plastics, and over 70% come from household waste. But why do they end up in the water? This temporary absence of water therefore reveals a significant environmental problem. The visual evidence of waste, however, can turn, as is the case of the volunteer organisations, into a good environmental (education) practice (see Figure 3). The volunteers do not only pick litter floating on water, they also use nets to fish rubbish out from beneath the water's surface. The reason they can do this, however, is because of the partial absence of water, making the litter visible. The volunteers on River Sile also collect the rubbish and place it on the riverbank for everyone to see to demonstrate what Hetherington 67 reminds us: 'that which has been turned into rubbish tends to have the ability to return'. People who walk along the river will be able to see both the volunteers working as well as the amount of collected rubbish, otherwise obfuscated and partially hidden by water. Disposing of this waste is therefore 'as much a spatial as a temporal category' and disposal becomes an act of 'placing absences'. 68 The situation is similar in the UK as the Canal and River Trust has identified plastic pollution as one of the key problems: 'You'd be surprised to discover what's under the surface of our canals and rivers. From shopping trolleys to traffic cones, there's a lot of litter. Rubbish underwater can be dangerous for wildlife, as well as for anglers and people using boats and canoes'. 69 The Trust, Inland Waterways Association and other organisations organise volunteer working events that include litter picking both on the canal towpath as well as from the water. For example, one such volunteer, and also a boater, Barry (68), explains how rubbish in the canal can be hazardous for boaters, particularly when it gets stuck around the boat's propeller, adding that: We've frequently had, in certain areas, still even now, when you get into a town, industrial environment, you'll find there's rubbish floating on the canal and you'll get rope and stuff, plastic bags and bits of old cloth. In the cases discussed above, environmental awareness is partially brought about by the temporary absence of water. It is only when the water temporarily disappears that the full extent of the environmental situation at that moment in time is revealed and communicated through the visual affordance provided by the absent water: as water disappears, rubbish 'emerges and becomes an affective mobile materiality through a waste-related collective event and action'. 70 What is more, the littering discussed above (but also other polluting behaviours such as boaters burning coal or running diesel engines) further reiterate the hybridity of canals and rivers, being simultaneously 'natural' and 'unnatural' entities. 71 Temporarily absent water therefore affords a better communication about the state of the waterways and the subsequent opportunity for volunteers and others to take care of, and take responsibility for, their environment. Conclusion: what does the temporary absence of water afford on the inland waterways? 
Over time, inland waterways, regardless of whether they are engineered canals or natural rivers, have become increasingly 'constructed' with dams built and locks installed, affording an ever-greater control over the absence and presence of water. Taking a non-presentist perspective for investigating the human relationships with water, we suggest thinking about water otherwise: in terms of the interplays of absence and presence. Instead of focussing on what is present, we have asked how the temporary absence of water is experienced and interpreted as well as what these absences could afford us. In doing that, we have shown the power and agency of water as it becomes an affordance even when absent. Water can simultaneously be present and absent, as its quantity or level is subject to constant change and volatility. It is therefore through constant fluctuations that the temporary absence of water becomes an affordance and allows us to go beyond the present to focus on what, how and when water is not there. 72 It is important to reiterate that we have not studied the permanent absence of water, or the natural rhythmical interchanges such as the ebb and flow of the tide, but instead focussed on its less studied temporary and liquid presences and absences. Indeed, water's fluid qualities are essential for spatial affordances as they make the mobility of water (and mobility of various materialities in and on water) possible and the human actors have to constantly negotiate it. The boaters' mobility depends on the amount of water in the waterway, a supply that comes from reservoirs, rivers and streams, as well as pumping stations retrieving water from underground and its subsequent manipulation through the usage of locks. The boaters depend on this water as well as on their own ability to negotiate its temporary presences and absences. Furthermore, it could even be argued that all contemporary boating practices are afforded by absence: absence of the industry for which the waterways were originally built and modified, absence of debris and rubbish in the water as well as the relative absence of water elsewhere in lakes or reservoirs used as water sources for the navigable waterways. Material absence can also be essential for understanding time as a nonlinear fluctuation of different temporalities. The temporary, sometimes managed absence of water can create historical affordances, such as engaging with and imagining the local histories and heritage. It can also become a way of apprehending various materialities and their respective affordances in particular times and environments. 73 The properties and capacities of the natural and built environment are always already there; however, it depends on the awareness, perceptions and skills of a particular actor whether and how the various affordances will be understood, perceived or used. The temporary absence of water in canals and rivers can uncover what is not normally available to our sensory perception whether through sights, smells or other sensations. The presence of waste can subsequently be made embodied through environmental action 74 and can also communicate certain information, such as the environmental condition of the waterways. All these affordances -spatial, historical and communicative -are the result of the interplays of the temporary presences and absences of water.
They also reveal a difference between intentional and unintentional absences, planned by various (technical) interventions or caused by natural fluctuations in the hydrological cycle of water. Canals and rivers are a rich water-landscape of various affordances that emerge both in presences and, as we have shown in this paper, also in absences: 'the ways in which relatedness between people and place is forged, disrupted, and challenged [are partly owed] to absent presences'. 75 As we have demonstrated, the absence of water can afford various opportunities, actions and potentialities, and absences as affordances should be understood processually in their spatio-temporal contexts. This study of interplays of the temporary presences and absences of water in the rivers and canals has therefore allowed us to unpack how they can afford spatial mobility, connecting and engaging with the (imagined) past as well as uncovering and communicating some otherwise hidden material layers and stories of the waterways. These spatial, communicative and historical affordances should not be seen as separate or juxtaposed but rather as concurrent conditions that reveal the entangled dimensions of water as simultaneously present and absent. As we have discussed in this paper, absence and presence should not be considered in absolute terms but instead as relational; as such, they are continuously blurring the boundaries of natural and cultural, embodied and representational. Further studies on the absence of water and its respective affordances could help future research to better understand various environmental and societal issues, such as scarcity or over-abundance of water (in the form of droughts and floods) or lack of biodiversity as experienced by both humans and non-humans. It could also be productive to apply the theoretical notion of absence as an affordance to other topics such as placemaking, virtual realities or interpretations of heritage in order to better understand the meanings and imaginations as well as experiences and practices afforded by that which is not necessarily there.
2022-06-09T15:16:02.594Z
2022-06-07T00:00:00.000
{ "year": 2022, "sha1": "1bb7fd7593df2339de9436b842bca65f2326ef7b", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1177/14744740221100838", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "8989ad545c7a8ae7761d193443617422aa2b4d81", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [] }
17692079
pes2o/s2orc
v3-fos-license
X-ray Surveys of Distant Galaxy Clusters I review recent observational progress in the search for and study of distant galaxy clusters in the X-ray band, with particular emphasis on the evolution of the abundance of X-ray clusters out to z~1. Several on-going deep X-ray surveys have led to the discovery of a sizeable population of clusters at z>0.5 and have the sensitivity to detect clusters beyond redshift one. These surveys have significantly improved our understanding of cluster evolution by showing that the bulk of the population of galaxy clusters is not evolving significantly since at least z~=0.8, with some evolution limited to only the most luminous, presumably most massive systems. Thus far, a well defined sample of very high redshift (z>~1) clusters has been difficult to assemble and represents one of the most challenging observational tasks for the years to come. Introduction The redshift evolution of the abundance of galaxy clusters has long served as a valuable tool with which to test models of structure formation and set constraints on fundamental cosmological parameters (e.g. [15], [1]). Being recognizable out to large redshifts, clusters are also ideal laboratories to study the evolutionary history of old stellar systems, such as E/S0s, back to early cosmic look-back times (e.g. [31], [33]). It is therefore not surprising that a considerable observational effort has been devoted over the last decade to the construction of homogeneous samples of clusters over a large redshift baseline. Until a few years ago, however, the difficulty of finding high redshift clusters in deep optical images and the limited sensitivity of early X-ray surveys had resulted in only a handful of spectroscopically confirmed clusters at z > 0.5. As a result, the evolution of the space densities of clusters, even at moderate look-back times, has been the subject of a long-standing debate (e.g. [10], [18]). Searches for X-ray clusters With the advent of X-ray imaging in the 1980s, it was soon recognized that X-ray searches for galaxy clusters have the advantage of revealing physically-bound systems out to cosmologically interesting redshifts and thus offer the unique opportunity to construct flux-limited samples with well-understood selection functions. Pioneering work in this field was carried out by Gioia et al. (1990) and Henry et al. (1992) based on the Einstein Medium Sensitivity Survey (EMSS). By extending significantly the redshift range probed by previous samples [14] (based on non-imaging X-ray data), the EMSS survey has for years been the basis for several intensive follow-up studies (e.g. the CNOC survey [6]). The ROSAT-PSPC detector, with its unprecedented sensitivity and spatial resolution, made clusters high-contrast, extended objects in the X-ray sky and thus allowed for a significant leap forward. ROSAT data have provided the means to carry out large contiguous area surveys of nearby clusters with the ROSAT All-Sky Survey (RASS) ([13]; Böhringer, this volume), as well as much deeper serendipitous searches based on single pointings. On-going X-ray surveys of distant galaxy clusters which utilize PSPC archival data include the ROSAT Deep Cluster Survey (RDCS [24], [25]), the Serendipitous High-Redshift Archival Rosat Cluster survey (SHARC [9], [4]), the Wide Angle Rosat Pointed X-ray Survey of clusters (WARPS [28], [20]), the CfA large area survey ([34], [35]), and the RIXOS survey ([7]).
An additional survey is being carried out in the North Ecliptic Pole (NEP, [19]; Gioia, this volume), using the deepest area scanned by the RASS. Strategies and Selection Functions Most studies have adopted a similar methodology but somewhat different strategies. Cluster candidates are selected from a serendipitous search for extended X-ray sources above a given flux limit in deep ROSAT-PSPC pointed observations. Particular emphasis is given in these searches to detection algorithms which are designed to probe a broad range of cluster parameters (X-ray flux, surface brightness, morphology) and to deal with the confusion effect at faint flux levels. A popular and well-suited approach is that of multi-scale analysis based on wavelet techniques (e.g. [24], [34]). By covering different solid angles at varying fluxes, these surveys probe different regions in the X-ray luminosity-redshift plane (i.e. the N(L_X, z) distribution peaks at slightly different positions). Fig.1 illustrates the effective sky coverage of the EMSS, compared to that of two ROSAT surveys ([25], [34]). (Figure 1: Comparison between the effective sky solid angle covered by three cluster surveys as a function of the X-ray flux; EMSS [18], CfA Survey [34], RDCS [25].) The EMSS has the greatest sensitivity to the most luminous, yet most rare, systems, but only a few clusters at high redshift lie above its bright flux limit. On the other hand, deep ROSAT surveys probe instead the intermediate-to-faint end of the X-ray Luminosity Function (XLF). As a result, they have led to the discovery of many new clusters at z ≳ 0.4. The RDCS has pushed this search to the faintest fluxes yet, providing sensitivity to the highest redshift systems (including z ≳ 1) with L_X ≈ L*_X, whereas the CfA survey has covered a significantly larger area at high fluxes, thus probing the interesting bright end of the XLF at z ≲ 0.6. Extensive optical follow-up programs associated with these surveys have, to date, led to the identification of roughly 200 new clusters or groups, and have increased the number of clusters known at z > 0.5 by about a factor of five. As an example, out of more than 100 clusters spectroscopically identified in the RDCS, roughly one-third lie at z > 0.4 and a quarter at z > 0.5. The fact that very few have been discovered so far at z > 0.85 is not due to a lack of sensitivity of X-ray searches at these redshifts, but rather reflects the difficulty of carrying out the spectroscopic confirmation with 4m-class telescopes. Since cluster candidates in such surveys are selected on the basis of their spatial extent, a challenging task is to understand and quantify selection effects at varying fluxes. With the PSPC PSF degrading rapidly at large off-axis angles across the detector, the survey becomes surface brightness limited below a given flux. This important effect can be accounted for by modelling the sky coverage of a given survey as a function of flux and intrinsic size of the clusters (fig.1). An overestimate of the solid angle covered at low fluxes and of its corresponding search volume can lead to overestimating the amount of evolution of the cluster population ([7]). Furthermore, the surface brightness (Σ) dimming at high-z can be a serious source of incompleteness in the faintest flux bins and depends critically on the unknown steepness of the Σ-profile of X-ray clusters at high redshift, as well as its evolution.
Again, the task of the observer is to understand the X-ray flux in a given survey below which this effect becomes important. An additional source of incompleteness, which will be difficult to quantify until the next generation of high-resolution X-ray imagers becomes available, may be caused by clusters hosting X-ray bright AGN. A discussion of the methods which are most effective in quantifying the selection function of X-ray surveys goes beyond the purpose of this review. On purely empirical grounds, the importance of these effects will become apparent when it becomes possible to compare the number densities of distant clusters selected on the basis of their angular extent with the NEP survey, which sets out to identify all the X-ray sources down to a given flux over an 80 deg^2 area, regardless of their spatial extent. 3 Evolution of the Cluster Abundance out to z ≃ 0.8 One of the primary goals of the aforementioned X-ray surveys is to study the redshift evolution of the cluster abundance at a given X-ray luminosity. This is characterized by the z-dependent XLF or its projections along the redshift and flux axes, respectively, i.e. the number counts, N(>S), and the redshift distribution, N(z). Such distribution functions of observables can then be directly compared with theories of structure formation. The Local XLF The determination of the local (z ≲ 0.3) XLF obviously plays a crucial role in assessing the evolution of the cluster abundance at higher redshifts, and much progress has recently been made in this direction, including an independent RASS survey in the southern sky ([12] and Böhringer, this volume). Complementary data are provided by the RDCS and the survey by Burns et al. ([5]), which probes the very faint end. An excellent agreement is apparent between all these independent determinations, all having faint-end slopes in the range 1.75-1.85 and consistent normalizations. This is quite remarkable considering that all these surveys used completely different selection techniques (from pure optical to pure X-ray) and independent datasets. This situation contrasts with that existing only two years ago, when different surveys were finding faint-end slopes in the range 1.1-2.2. This discrepancy was possibly due to the completeness levels and sky coverages of early samples, which were not fully understood. It would thus appear that the local cluster abundance, N(L_X, z ≃ 0), is now well established and can be safely used as a reference for studying the evolution at higher redshifts. Moreover, the BCS analysis at z < 0.3 [13] shows that the evolution of the bright end found by the EMSS at z > 0.3 (fewer high luminosity clusters) does not extend to lower redshifts. The Cluster LogN-LogS A summary of the observed cumulative cluster number counts is given in fig.3. This compilation includes both shallow and deep surveys (CfA, RDCS, WARPS) so as to cover more than three decades in flux. Once again, we note an encouraging agreement at the 2σ level among independent determinations. The slight difference between the RDCS and the Vikhlinin et al. survey at low fluxes may be due to different prescriptions used in these samples to evaluate the "total flux" of the clusters, a measurement which inherently depends on the assumed Σ-profile of a cluster in the background-limited regime.
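The no-evolution benchmark against which such counts are compared can be sketched numerically. The following minimal Python example is purely illustrative and is not the analysis of any of the surveys above: it assumes a Schechter-form local XLF with a hypothetical normalization, a faint-end slope within the 1.75-1.85 range quoted above, an arbitrary flat cosmology, and it neglects K-corrections and the flux-dependent sky coverage discussed earlier.

```python
# Illustrative no-evolution cluster number counts N(>S) from a local XLF.
# All parameter values are hypothetical placeholders, not fitted values.
import numpy as np
from scipy.integrate import quad
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

L_STAR = 4e44   # erg/s, roughly L*_X in the [0.5-2.0 keV] band (see text)
ALPHA = 1.8     # faint-end slope, within the 1.75-1.85 range quoted above
K_NORM = 1e-7   # Mpc^-3, hypothetical normalization

def xlf(L):
    """Schechter-form local XLF, phi(L) in Mpc^-3 per unit luminosity."""
    x = L / L_STAR
    return (K_NORM / L_STAR) * x ** (-ALPHA) * np.exp(-x)

def n_above(L_min):
    """Comoving density of clusters brighter than L_min (log-space quad)."""
    integrand = lambda lnL: xlf(np.exp(lnL)) * np.exp(lnL)
    return quad(integrand, np.log(L_min), np.log(100 * L_STAR))[0]

def counts(S, z_max=1.5):
    """Cumulative counts per steradian above flux S, assuming no evolution."""
    def dN_dz(z):
        d_L = cosmo.luminosity_distance(z).to(u.cm).value
        L_min = 4.0 * np.pi * d_L**2 * S            # K-correction neglected
        dV = cosmo.differential_comoving_volume(z).value  # Mpc^3 sr^-1 dz^-1
        return n_above(L_min) * dV
    return quad(dN_dz, 0.01, z_max)[0]

for S in (1e-14, 1e-13, 1e-12):  # erg cm^-2 s^-1
    print(f"S > {S:.0e}: N(>S) ~ {counts(S):.2f} per steradian")
```

Comparing the observed counts against such a baseline is what tests for evolution; the real analyses cited in the text additionally fold in the survey sky coverage as a function of flux.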
Although the LogN-LogS is not a very robust diagnostic tool to investigate the evolution of the cluster population, particularly for the most luminous, rare systems, we note that the observed counts are consistent with no-evolution predictions based on a fit of the local XLF in fig.2. Fig.4 shows that the number densities from different samples, including the EMSS, are in general in very good agreement in regions of overlapping luminosities. Further inspection of the XLF in bins of increasing redshift fails to show any significant evolution out to z ≃ 0.8 ([25]). By combining independent analyses based either on the XLF ([4], [25]) or the LogN-LogS ([20], [25], [35]), it emerges that the volume density of clusters per unit luminosity has remained constant within the present uncertainties over a wide range in luminosities (2 × 10^42 ≲ L_X (erg s^-1) ≲ 3 × 10^44). This L_X range encompasses the bulk of the cluster population, from poor groups to moderately rich clusters with L_X ≃ L*_X ≈ 4 × 10^44 erg s^-1 (roughly the Coma cluster). These results are not in conflict with the EMSS findings of a steepening of the XLF at luminosities in excess of the local L*_X, a result which is consistent with the more recent analysis of the CfA survey in the highest luminosity bin [35]. The Cluster XLF at higher redshifts The latest cluster XLF derived from a flux-limited sample (F_X [0.5-2.0 keV] > 3.5 × 10^-14 erg cm^-2 s^-1) of 81 clusters spectroscopically identified in the RDCS survey is shown in fig.5. The picture described above is further confirmed. It also appears that in order to make a significant step forward in understanding the evolution of the most luminous systems at high redshifts (z > 0.7), a new survey covering at least 10 deg^2 at F_X ≃ 1 × 10^-14 erg cm^-2 s^-1 is needed. This is within reach with several years of serendipitous pointings to be accumulated with XMM and AXAF. (Figure 6: Cartoon summarizing the observational status of X-ray cluster evolution.) Very little is presently known about the cluster population at z > 0.8, although bona-fide X-ray clusters have been detected out to z = 1.27 [32], [26] and diffuse X-ray emission has been detected around radio sources to even higher redshift [11]. 4 Cluster searches at z ≳ 1 Fig.6 summarizes our current understanding of cluster evolution in the (L_X, z) plane at the end of the ROSAT era. A major unknown concerns the cluster abundance beyond a redshift of one. If one assumes that the evolutionary trend in the cluster population continues past z = 1, the observed N(L_X, z) can be extrapolated (taking also into account the estimated incompleteness at the faintest flux levels) to predict that ROSAT-PSPC searches must still be sensitive to the feeble X-ray emission from L*_X clusters at z > 1. For example, in the RDCS one would expect up to a dozen X-ray luminous clusters at z ≳ 1. However, in order to identify these clusters, deep near-IR imaging and spectroscopy with 8m-class telescopes are required. The efficacy of near-IR searches for high-z clusters has recently been proven by Stanford et al. [32], who identified a cluster at z = 1.27 in a near-IR field galaxy survey. A corresponding extended X-ray source was found by the same authors in a deep ROSAT pointing, as well as serendipitously in the RDCS candidate sample. More recently, another RDCS faint candidate has been confirmed at z = 1.26 using IR imaging and Keck spectroscopy [26]. The X-ray luminosities of both these systems are around 10^44 erg s^-1 in the [0.5-2.0 keV] band.
These findings show that the combination of deep X-ray observations and near-IR imaging is an efficient method by which to identify massive clusters at z ≳ 1 in a serendipitous fashion, thus allowing statistical estimates of the cluster abundance to be made. The much improved sensitivities of XMM and AXAF will make this method particularly attractive. At even higher redshifts, a viable method to identify clusters is to target powerful radio sources (e.g. [11]). Deep ROSAT pointings on these sources have revealed the existence of diffuse X-ray emission out to z ≃ 1.8, which most likely arises from hot intra-cluster gas trapped in deep cluster potential wells at such early epochs [11]. Discussion Remarkable observational progress has been made in recent years in determining the abundance of galaxy clusters out to z ~ 1, as is underscored by the convergence of the results from several independent studies. At the beginning of the ROSAT era, only a few years ago, controversy surrounded the usefulness of X-ray surveys of distant galaxy clusters. This prejudice arose from an overinterpretation of the early results of the EMSS survey, which, as of today, remain basically correct. Although in the early analysis of Gioia et al. [17] it was clearly stated that the evolution of the XLF was limited only to the very luminous systems, this detail was often overlooked in the years that followed. Indeed, this evolution was believed to extend through the bulk of the cluster population (L_X ≲ L*_X) not adequately probed by the EMSS at high redshifts. The original controversy concerning cluster evolution inferred from optical and X-ray data finds a possible explanation in this. Optical surveys ([10], [23]) have shown no dramatic decline in the comoving volume density of rich clusters out to z ≃ 0.5. This was considered to be in contrast with the EMSS findings. However, these optical searches covered limited solid angles (much smaller than the EMSS) and therefore did not adequately probe the seemingly evolving high end of the cluster mass function. The theoretical interpretation of the new results on the evolution of the cluster abundance is still ambiguous. The implications that these findings have for models of cluster formation have been discussed by several authors (e.g. [21], [22], [8], [2], [3]). By following a phenomenological approach, one can constrain cosmological parameters and evolutionary parameters of the intra-cluster medium by matching models with observed distributions, such as N(L_X, z), N(z), N(>S). This analysis has shown [2] that without additional observational inputs from the temperatures of high-z clusters, or a better understanding of the physics governing the evolution of their gaseous component, it is difficult to draw firm conclusions on the value of the density parameter Ω_0. (Figure 7: Constraints at 90% c.l. on the Ω_0-A plane obtained by matching the XLF as a function of redshift from various X-ray surveys [2]; the parameter A describes the redshift evolution of the X-ray luminosity-temperature relation, commonly parameterized as L_X ∝ T^α (1+z)^A.) As an example, fig.7 shows the degeneracy between Ω_0 and the evolutionary parameter A of the X-ray luminosity-temperature (L-T) relation. Recent measurements of cluster temperatures at moderate redshifts indicate that the L-T relation does not evolve significantly out to z ≃ 0.5 ([29], [19]) (i.e. A ≈ 0), which would favour a low-Ω universe. Future Prospects The next decade promises to be particularly exciting for cluster astrophysics.
The new ROSAT samples described herein will figure prominently in studies for years to come. The new generation of optical and near-IR mosaic imagers and highly efficient multiplexing spectrographs on 8-meter class telescopes (Keck, VLT, GEMINI) will probe a region of the parameter space of redshift, solid angle and limiting flux which has been completely unexplored with 4m-class telescopes and conventional detectors. The Advanced Camera aboard HST (2000) will add sub-kpc morphological information to this multi-dimensional data set and will permit detailed studies of cluster lensing patterns. The impact of AXAF, XMM and Sunyaev-Zel'dovich measurements is described elsewhere in this volume (see the contributions from K. Romer, M. Pierre and J. Bartlett). The next generation of X-ray satellites and the already available large optical telescopes should open the possibility of determining masses of distant clusters via X-ray temperature measurements, virial analysis and gravitational lensing studies. Carrying out such observations for even a limited number of clusters, extracted from a well-defined statistical sample, will determine both the evolution of cluster internal dynamics and the value of the cosmological density parameter.
2014-10-01T00:00:00.000Z
1998-10-04T00:00:00.000
{ "year": 1998, "sha1": "43e1c02770a1643f970810c0d0b86046ed56822d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "43e1c02770a1643f970810c0d0b86046ed56822d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
11734182
pes2o/s2orc
v3-fos-license
Abdominal wall hernia in a rural population in India - Is the spectrum changing? Hernia is a common word that most surgeons are familiar with. A retrospective study was planned to analyse the spectrum of abdominal wall hernias in a rural population in India. The majority of the patients were 40-70 years old. The male to female ratio was 7:1. The incidence of groin hernias showed an increasing trend with advancing age. Out of a total of 320 cases, inguinal hernias predominated, accounting for 77.81% of cases. Ventral hernias were present in about 18% of cases. However, femoral hernias were rare. We concluded that the spectrum of abdominal wall hernias is almost the same all over the globe despite differences in socioeconomic and educational status. INTRODUCTION Hernia is derived from a Latin word meaning "a rupture". Abdominal wall hernias are frequently encountered in surgical practice, accounting for 15% - 18% of all surgical procedures [1,2]. Worldwide, more than 20 million hernias are operated on per year [3]. More than 750,000 hernias in the USA and approximately 125,000 hernias in the United Kingdom are operated on per year [4]. The incidence of abdominal wall hernia in different countries varies from 100 - 300/100,000 per year [3]. We could not find Indian data despite a literature and Medline search. Approximately 75% of all abdominal wall hernias belong to the groin [5]. The lifetime risk of developing an inguinal hernia is 15% - 27% in men and 3% in women [6]. Although males are affected more commonly (7:1), the incidence of femoral hernia is four times higher in females [7]. The incidence of hernia increases with advancing age. Indirect hernia is twice as common as direct hernia. Inguinal hernias are more common on the right side. In recurrent inguinal hernia, the direct type is twice as common as the indirect [1]. The incidence of recurrence is less than 1% in Lichtenstein repair [7]. Congenital hernias are more common in low birth weight babies. Incisional hernias are more common in males [8]. Midline ventral hernias are the next most common variety of abdominal wall hernia after inguinal hernia. According to their locations, these are further classified into umbilical, paraumbilical and epigastric hernias. Traumatic and obturator hernias are rare [9]. MATERIALS AND METHODS This was a retrospective study of 320 cases of abdominal wall hernias operated at BPS Govt Medical College for Women, Khanpur Kalan, Sonepat, India from February 2012 to February 2013. The data were collected from the case sheets available in the hospital, then compiled and analysed. RESULTS Over a period of one year, 320 cases of abdominal wall hernias were operated at our institute, which constituted 22% of the operative workload of the general surgery department. Age varied from 3 months to 82 years, with an increasing incidence with age (Table 1). The male to female ratio was 7:1, as shown in Table 2. Groin and incisional hernias were more common in men, but ventral hernias showed a female dominance (Table 2). Inguinal hernia was the commonest variety, followed by paraumbilical, umbilical, epigastric, incisional, obturator, traumatic and femoral hernia in decreasing order, as depicted in Table 3. The incidence of incisional hernias was low in our study as compared to western countries [8], but ventral hernias showed an opposite trend (Table 4). Prostatism was the commonest associated illness, followed by hypertension and respiratory diseases (Table 5). Seroma formation was the commonest complication in the postoperative period, followed by wound infection and pyrexia, despite the routine use of antibiotics in all cases, as shown in Table 6.
DISCUSSION Being a commonly performed general surgical operation, abdominal wall hernia repair comprises a significant proportion of the total surgical workload in most centres. Although it has been reported to constitute 15% - 18% of total surgical operations, the slightly higher prevalence of 22% in the present study may be due to the rural population having agriculture as its main profession. Smoking, old age and neglected urinary obstructive symptoms may be other contributory factors. Inguinal hernia constituted 77.81% of total abdominal wall hernias, which is in accordance with the literature. Inguinal hernia is twice as common on the right side, due to the later descent of the right testis and the more frequent failure of closure of the right processus vaginalis [9,10]. In the present study the ratio of right to left was 1.45:1. The incidence of direct and indirect hernia was almost equal; however, indirect hernias were more common below 50 years of age, while direct hernia predominated as age advanced, which may be attributed to physiological wear and tear of fibromuscular tissues, prostatic hypertrophy and comorbid illnesses. The incidence of abdominal wall hernia increases with ageing, as revealed in the present study as well, except in the first decade of life, which might be due to the presentation of congenital hernias in this decade. The youngest patient was only 3 months old while the oldest was 82 years old. The median age was 42 years, which is higher than in western countries but lower than in African countries [1,9]. This may depend on the awareness and education of the population and other limiting factors such as negligence, being the only earning member of the family, and low socioeconomic status. Males outnumbered females by a ratio of 7:1, which is higher than the 5:1 reported in the US [11]. It may be due to the trend that males are more involved in strenuous agricultural work while females predominantly undertake household work. This trend held true for all abdominal wall hernias except paraumbilical and epigastric hernias, where females were predominant. In females, paraumbilical hernia was the commonest type, followed by epigastric and umbilical hernias, which is contrary to the literature, where umbilical hernia followed by incisional hernia is the trend [12]. It may be due to a high fertility rate and malnutrition. A low prevalence of congenital umbilical hernia in females may be another contributory factor. The unexpectedly higher incidence of epigastric hernia in females than in males in our study is again contrary to the literature, explainable by the above mentioned factors. The surprisingly low incidence of groin hernias in our female population as compared to the literature may not reflect the true incidence, owing to underreporting. The male to female ratio of inguinal hernia is very high in our study as compared to the literature, which may be due to the casual attitude of rural females to this disease, leading to underreporting of the disease.
The frequency of hernias in our study, in decreasing order, was: inguinal, paraumbilical, umbilical, epigastric, incisional, obturator and femoral hernia, which obviously differs from the literature, where the order is: inguinal, femoral, umbilical and others. The incidence of femoral hernia is very low in our study as compared to the literature, where it is the third commonest type of hernia. This may be due to racial factors, as a wider pelvis has been thought to be an important etiological factor for femoral hernia. Bilateral inguinal hernias were present in 20 cases (40 hernias in total), out of which 32 were direct hernias. Umbilical hernias are also not uncommon and have been documented as an acquired disease in more than 90% of cases [13]. Multiparity, obesity, malnutrition and raised intra-abdominal pressure have been assigned as predisposing factors. In the present study they also constituted approximately 18% of the total cases, which is in concordance with other studies. Incisional hernia was noticed in approximately 3% of the cases, which is significantly lower than in the USA and UK, where it has been reported in 6% - 10% of cases, but in accordance with the African literature [8,9]. This may be due to underreporting of these cases, as our population is relatively less disease conscious. Abdominal wall hernias have been reported to be more prevalent in groups of low socioeconomic status, which holds true for our rural population as well [14]. Concomitant medical disease has been reported as having a role in the etiology as well as in increasing morbidity and mortality in the postoperative period. Symptoms of prostatism were present in approximately 16% of cases, while hypertension and diabetes were present in approximately 11% and 5% of cases, respectively. In the past decades the complication rates of abdominal wall hernia repair have drastically improved with the use of prosthetic mesh. We used mesh routinely in all cases except pediatric, strangulated and small ventral hernias. No recurrence has been reported to date. A complication rate of approximately 11% was noticed in our study, which is within the range of 4% - 12% mentioned in the literature [10]. Seroma formation was the commonest complication, noted in approximately 5% of the cases, followed by wound infection in 3% of cases. Complications like irreducibility, obstruction and strangulation are decreasing, reflecting the awareness of the population, better health facilities and improving quality of care. In our study only 7.5% of cases were operated in emergency, as compared to 35% in Nigeria. All complications were managed conservatively without the need for mesh removal. CONCLUSION The spectrum of abdominal wall hernias is more or less constant throughout the world despite differences in educational, economic and social status. Early diagnosis, easily accessible health facilities and health education are important to prevent complications. Table 4. Epidemiology of abdominal wall hernias in various countries.
2017-10-02T05:11:05.075Z
2013-07-29T00:00:00.000
{ "year": 2013, "sha1": "d1c561c0143dbe819be5222ad5889fbd5bc89110", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=36282", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "d1c561c0143dbe819be5222ad5889fbd5bc89110", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
208918024
pes2o/s2orc
v3-fos-license
[5,15-Bis(2-methylpropyl)porphyrinato]nickel(II) The title compound, [Ni(C28H28N4)], crystallizes with two independent molecules in the unit cell, one of which is located on an inversion center. Both macrocycles exhibit a planar conformation, with an average deviation from the least-squares plane of the 24 macrocycle atoms of Δ24 = 0.043 Å for the first molecule and 0.026 Å for the molecule located on the inversion center. The average Ni-N bond lengths are 1.955 (2) and 1.956 (2) Å in the two molecules. The molecules form π-π dimers of intermediary strength with a mean plane separation of 3.36 (2) Å. (Δ24 denotes the average deviation from the least-squares plane of the 24 macrocycle atoms, and N-Ni-N_adj the angle between neighboring pyrrole units; cf. Jentzen et al., 1996.) Data collection: APEX2 (Bruker, 2005); cell refinement: SAINT (Bruker, 2005); data reduction: SAINT; program(s) used to solve structure: SHELXS97 (Sheldrick, 2008); program(s) used to refine structure: SHELXL97 (Sheldrick, 2008); molecular graphics: XP in SHELXTL (Sheldrick, 2008); software used to prepare material for publication: SHELXTL. Comment meso-Alkylporphyrins are increasingly used in porphyrin chemistry, but their structural chemistry is less well established. The compound is another example of the expanding body of Ni(II) porphyrins with a planar macrocycle (Jentzen et al., 1996). In the crystal this allows the formation of π-aggregates which are characterized by a mean plane separation of 3.36 (2) Å, a center-to-center distance of 4.88 (2) Å and a slip angle of 133.5 (1)°, which, according to the classification given by Scheidt & Lee (1987), results in a lateral shift of the metal centers of 3.54 (2) Å. Thus, the π-π stacks are of intermediary strength. The compound forms part of a series of Ni(II) 5,15-dialkylporphyrins with different steric demands of the meso residue. The respective tert-butyl derivative (Song et al., 1996) is clearly the most nonplanar one, with the shortest Ni-N bond length and the largest deviation from planarity (Table 2). The iso-propyl derivative (Song et al., 1998) still shows significant out-of-plane deformations, while Ni(II) porphyrin without any non-hydrogen residues is planar (Jentzen et al., 1996). The title compound has the sterically least demanding alkyl residue and exhibits an almost planar macrocycle. However, as indicated by the different N-Ni-N_adj bond angles, the compound still exhibits some degree of in-plane distortion, which becomes more pronounced with larger meso alkyl residues. Experimental The compound was prepared as described by Wiehe et al. (2005) and crystallized from CH2Cl2/CH3OH. Refinement All non-hydrogen atoms were refined with anisotropic thermal parameters. Hydrogen atoms were refined with a standard riding model (C-H distance 0.96 Å, U_iso = 0.05). Figure 2: View of the π-aggregates formed by the title compound in the crystal. Crystal data: [Ni(C28H28N4)]. Special details Geometry.
All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes. Refinement. Refinement of F^2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F^2; conventional R-factors R are based on F, with F set to zero for negative F^2. The threshold expression of F^2 > σ(F^2) is used only for calculating R-factors (gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F^2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
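As an aside, the π-stacking geometry quoted in the Comment section is internally consistent, which can be checked with elementary trigonometry. The short Python sketch below rests on one assumption of ours (the slip-angle convention is defined in Scheidt & Lee, 1987, not restated here): that the slip angle is the angle between the Ni···Ni center-to-center vector and the porphyrin mean-plane normal, so that its projections yield the mean plane separation and the lateral shift.

```python
# Consistency check of the reported pi-stacking geometry (values from the
# Comment section). Assumed convention: the slip angle is measured between
# the Ni...Ni vector and the mean-plane normal.
import math

CENTER_TO_CENTER = 4.88   # Angstrom, Ni...Ni center-to-center distance
SLIP_ANGLE_DEG = 133.5    # degrees, reported slip angle

theta = math.radians(SLIP_ANGLE_DEG)
mean_plane_sep = CENTER_TO_CENTER * abs(math.cos(theta))  # expected ~3.36 A
lateral_shift = CENTER_TO_CENTER * abs(math.sin(theta))   # expected ~3.54 A

print(f"mean plane separation: {mean_plane_sep:.2f} A")   # prints 3.36
print(f"lateral shift:         {lateral_shift:.2f} A")    # prints 3.54

# The three quantities form a right triangle by construction:
assert abs(math.hypot(mean_plane_sep, lateral_shift) - CENTER_TO_CENTER) < 1e-9
```

Under this reading, the three reported numbers (3.36 Å, 3.54 Å, 4.88 Å) satisfy the expected Pythagorean relation to within rounding.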
2018-04-03T04:41:03.571Z
2012-08-23T00:00:00.000
{ "year": 2012, "sha1": "1bc0adda2c08fb509c0bca3b95ae2c836307e0d2", "oa_license": "CCBY", "oa_url": "http://journals.iucr.org/e/issues/2012/09/00/rn2107/rn2107.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d54a062cdadeb91813fce8ee3e89c807d47fcd48", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Computer Science", "Medicine" ] }
1759732
pes2o/s2orc
v3-fos-license
Use of television, videogames, and computer among children and adolescents in Italy Background This survey determined the practices regarding television (video inclusive), videogames, and computer use in children and adolescents in Italy. Methods A self-administered anonymous questionnaire covered socio-demographics; behaviour regarding television, videogames, computer, and sports; and parental control over television, videogames, and computer. Results Overall, 54.1% and 61% always ate lunch or dinner in front of the television, 89.5% had a television in the bedroom while 52.5% of them always watched television there, and 49% indicated that parents controlled the content of what was watched on television. The overall mean length of time spent daily on television viewing (2.8 hours) and the frequency of watching for at least two hours per day (74.9%) were significantly associated with older age, always eating lunch or dinner while watching television, and spending more time playing videogames and using the computer. Those with parents from a lower socio-economic level were also more likely to spend more minutes viewing television. Two-thirds played videogames for 1.6 hours daily, and more time was spent by those who were younger, male, whose parents did not control them, who watched more television, and who spent more time at the computer. The computer was used by 85% of the sample for 1.6 hours daily, and those who were older, had a computer in the bedroom, had a higher number of computers in the home, viewed more television and played videogames were more likely to use the computer. Conclusion Immediate and comprehensive actions are needed in order to diminish time spent on television, videogames, and the computer. Background Public health and preventive campaigns targeted at early adolescence have mainly focused on reducing unhealthy behaviours such as physical and sport inactivity, unhealthy eating patterns, television (TV) viewing, and videogame playing [1][2][3]. Indeed, a negative relationship exists between the amount of time spent watching TV and children's and adolescents' health status, including overweight [4,5], school and verbal performance [6,7], perceived cognitive and attention abilities [8,9], and violence or bullying [10,11]. The family structure is also likely to have an important influence on sedentary behaviours, and parental status, family size, and number of siblings may be related to higher levels of TV viewing. The American Academy of Pediatrics has expressed concerns about the amount of time that children and adolescents spend viewing TV and has issued guidelines urging parents to limit children's total daily media time to quality programming, to remove TVs from children's bedrooms, and to monitor the shows children and adolescents are watching [12]. Similarly, the Physical Activity and Fitness Objectives in Healthy People 2010 recommend that the proportion of students in grades 9 through 12 who view TV for two or more hours per day should be less than 25% [13]. In Italy, among adolescents aged 14-18 years, 85.9% watched TV and 71.8% used the internet at least three times a week [14]; obesity prevalence was 5-6% in girls and 9% in boys aged 11-15 years [15], and 6% in boys and 5% in girls aged 11 years [16]. To the best of our knowledge, no previous epidemiologic survey has comprehensively determined TV viewing, videogame playing, and computer use in children and adolescents in Italy.
Thus, since it would be useful to obtain such data, the purposes of this study, conducted in one region of Italy among a large sample of children and adolescents, were: (a) to describe common practices regarding TV, videogames, and computer; and (b) to determine what associations exist between these behaviours and different aspects of the family structure. Methods The data for this cross-sectional survey were collected between March and May 2007 from 1034 children and adolescents aged 11 to 16 years, randomly selected from a random sample of 5 public schools in the geographic area of the Campania region, in the South of Italy. Before the study, a meeting with the head of each school was arranged to present the study, and permission and collaboration were obtained. All parents of the selected students received an envelope with a letter informing them of the research project, describing the study, the voluntary nature of participation, and the assurance of privacy and anonymity, as no personal identifiers were included in the questionnaire. These policies were also printed explicitly on the front page of the questionnaire. Parent(s) provided informed consent for their child's participation before survey administration. The survey instrument was a self-administered anonymous structured questionnaire. On the day of the survey, in each classroom, a member of the research team gave oral instructions about filling in the questionnaire to the students who had secured parental consent. To preserve student privacy and to allow for anonymous participation, questionnaires were distributed and collected by a member of the research team, with no teacher involvement, and no list of names or identifying information was created. Those who administered the questionnaire were advised only to respond to students' queries about the procedure and to guarantee the independent completion of the questionnaire. The questionnaire collected information on the following items: socio-demographic characteristics; number of TVs and computers in the home; time spent viewing TV (video inclusive), playing videogames, using the computer, and performing sport activity; and parental control over viewing TV, playing videogames, and using the computer. Videogame playing indicated playing games on either consoles or computers, whereas computer use indicated any use other than games. Each participant was asked to indicate, in a "yes/no" format, whether he/she views TV, plays videogames, and uses the computer. For each positive response, in order to assess the exposure, students were asked to indicate, in an open-ended format, the average daily amount of time spent, either in the home or elsewhere. In addition, for each meal or snack, the children were asked whether they participated in any of the following activities while eating: watching TV, playing videogames, or using the computer. Each question was measured on a five-point Likert scale, and the possible responses were "never", "rarely", "sometimes", "often", and "always". For the questions about lunch and dinner in front of the TV, the first four categories were grouped as "not always eating during TV viewing". The students were also asked to indicate, in an open-ended format, the average weekly amount of time spent performing sports activity. This variable was subsequently transformed into minutes per day of sports activity.
Finally, each student was asked to indicate, in a "yes/no" format, whether he/she has a TV and a computer in the bedroom and whether the parent(s) control or supervise TV watching, playing videogames, and using the computer. Health care professionals measured height and weight in the classroom with digital scales (weight to the nearest 0.1 kg) and a portable stadiometer (height to the nearest 0.1 cm) while the children were not wearing shoes. The body mass index (BMI) of each child was calculated as weight in kilograms divided by height in meters squared (kg/m^2), and internationally accepted gender-specific and age-specific cut-off points for BMI were adopted to categorise children as overweight or obese [17]. Prior to study commencement, a pilot survey was conducted to test question formats and sequence at an appropriate cognitive and reading level. The study protocol was reviewed and approved by the Ethics Committee of the Second University of Naples. Statistical analysis Multiple logistic and linear regression analyses were used. Four models were developed, including those variables that were considered to be potentially associated with the following outcomes of interest: viewing TV for at least two hours per day (Model 1); mean minutes per day of TV viewing (Model 2); mean minutes per day of videogame playing (Model 3); and mean minutes per day of computer use (Model 4). The following explanatory variables were included in all models: age (continuous), gender (0 = male, 1 = female), number of siblings (0 = 0, 1 = 1, 2 = 2, 3 = ≥3), both parents in the household (0 = no, 1 = yes), parents' working activity as two dummy variables with unemployed as the reference group (lower managerial, artisans, commercial: 0 = no, 1 = yes; high professional, managerial: 0 = no, 1 = yes), and mean minutes per day of sport activity (continuous). These other predictor variables were included: in Models 1 and 2, number of TVs in the home (continuous), routinely viewing TV in the bedroom (no = 0, yes = 1), always eating lunch or dinner during TV viewing (no = 0, yes = 1), TV in the bedroom (no = 0, yes = 1), and parental control over viewing TV (no = 0, yes = 1); in Models 3 and 4, mean minutes per day of TV viewing (continuous); in Models 1 to 3, mean minutes per day of computer use (continuous); in Models 1, 2, and 4, mean minutes per day of videogame playing (continuous); in Model 3, playing videogames not with somebody else (no = 0, yes = 1), and parental control over playing videogames (no = 0, yes = 1); in Model 4, number of computers in the home (continuous), computer in the bedroom (no = 0, yes = 1), parental control over computer use (no = 0, yes = 1), and computer use to play (no = 0, yes = 1). In the logistic regression model, the adjusted Odds Ratios (ORs) and the corresponding 95% confidence intervals (CIs) were calculated for each independent variable. All statistical tests were two-sided and a p-value ≤ 0.05 was considered to be statistically significant. All statistical procedures were performed by using Stata software (Version 10) [18].
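To make the model specification above concrete before turning to the results, the sketch below shows how Model 1 (and, analogously, Model 2) could be coded. The original analysis was run in Stata 10; the DataFrame, file name and all column names here are hypothetical and only a subset of the listed covariates is written out.

```python
# Hypothetical sketch of Model 1 (TV viewing >= 2 h/day) as a logistic
# regression; column names are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # one row per student (hypothetical file)

# Coding mirrors the text: gender (0 = male, 1 = female), siblings as a
# categorical 0/1/2/>=3, parental work as two dummies vs. 'unemployed'.
model1 = smf.logit(
    "tv_ge_2h ~ age + gender + C(siblings) + both_parents"
    " + work_lower_managerial + work_high_professional"
    " + sport_min_day + n_tv_home + tv_bedroom + eat_during_tv"
    " + parental_control_tv + videogame_min_day + computer_min_day",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals, as reported in the text.
or_ci = np.exp(model1.conf_int())
or_ci["OR"] = np.exp(model1.params)
print(or_ci)

# Model 2 (minutes per day of TV viewing) is the analogous linear model:
model2 = smf.ols(
    "tv_min_day ~ age + gender + C(siblings) + both_parents"
    " + sport_min_day + eat_during_tv", data=df,
).fit()
print(model2.summary())
```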
Results Of the 1034 questionnaires distributed, 47 were excluded due to inconsistent information, because the participants reported, for example, not playing videogames but indicated an amount of time played per day. A total of 987 questionnaires were considered, for a response rate of 95.4%. A description of the general characteristics of the study population is provided in Table 1. The mean age was 13.7 years, more than half were males, almost all lived with their parents, the mean BMI was 23.1, and 22.8% were classified as overweight (including obese). The self-identified practices regarding TV, videogames, and computer of the study participants are presented in Table 2. Overall, 58.4% always ate lunch or dinner in front of the TV, and this behaviour was associated with significantly more minutes per day of TV viewing (Model 2). Almost two-thirds (59.6%) played videogames, for 1.6 hours per day, and the multivariate linear regression analysis showed that this amount was significantly higher in those who were younger, male, who watched more TV, who spent more time at the computer, and whose parents did not control when they played (Model 3). Finally, 85% stated that they used the computer, for an average of 1.6 hours per day. The multiple linear regression model indicated that the adjusted daily mean minutes of computer use were significantly higher in those who were older, had a computer in the bedroom, had a higher number of computers in the home, viewed more TV, and spent more time playing videogames (Model 4). Discussion This study sought to assess common practices regarding TV viewing, videogame playing, and computer use, and also to identify which variables are associated with these behaviours, among a large sample of children and adolescents aged 11 to 16 years in one region of Italy. It is important to emphasize that the survey responses indicate an inordinately high amount of time spent viewing TV, with an exposure of 2.8 hours per day. Earlier studies showed similar values, with 3.1 hours in children aged 12 to 13 in the United States [19], 3.13 hours per weekday in 13- and 15-year-olds in New Zealand [20], and 3-3.57 hours in girls and boys with a mean age of 13 years in Belgium [21]. However, substantially lower exposures have been reported, with 1.9 hours per day in boys and girls aged 15-16 from three regions in Europe [22], 1.91 in girls at age 11 in the United States [4], and 2.2 in children 11-14 years old in France [23]. International comparisons, however, must be taken with caution, mainly because of differences across the studies, such as the characteristics of the sample and the study's methodology. Multivariate analyses in this study provided evidence of strong interrelations among several behaviours: the frequency of TV viewing was positively associated with an increased frequency of playing videogames and using the computer. This finding is consistent with other surveys [4,24]. Approximately 75% of the sample spent at least two hours per day watching TV. Age was a significant predictor of this behaviour, since the frequency of those who watched TV ≥ 2 hours per day was significantly higher in later childhood. Moreover, an association was observed between parental working activity, an indicator of socio-economic status, and the amount of TV watched per day, with respondents whose parents were in lower managerial, artisan, or commercial occupations spending more minutes. A disturbing discovery was the large percentage (89%) who acknowledged the presence of a TV in the bedroom, although no evidence was found to link this variable with the daily amount of time viewing TV. In previous cross-sectional studies, lower prevalences were observed, with values of 49.8% and 55.2% in children aged 11-12 years in the United States [29] and of 23.5% and 35% in the already mentioned Belgian study [21].
Another disturbing discovery was the low frequency with which parents consistently monitored their children's TV viewing (49%), videogame playing (32%), and computer use (58.1%), although in this case too there was no significant relationship between the amount of TV viewing and parental control. Moreover, the multivariate analysis showed that supervision had an important role in only one model: children played videogames for a significantly higher amount of time if their parents did not control when they played. With regard to videogame playing and computer use, in the current study the mean daily usage was 1.6 hours for both. These findings are consistent with results from the United States on similarly aged children, with mean daily times of 1.49 hours for videogame playing and 1.19 hours for computer use [19]. In Belgium, the daily time spent playing videogames was 54 minutes among boys [21]. Multivariate models revealed that the amounts of time spent on videogames and the computer were related to TV viewing, with both increasing as TV viewing increases. Furthermore, respondents who spent more time playing videogames were more likely to be male and younger, whereas those who spent more time using the computer were older. Some potential limitations of the present study need to be acknowledged. First, the observational nature does not permit definitive establishment of a causal relationship between predictors and outcomes. The data reported are interesting with respect to the magnitude and the direction of the different relationships, but they cannot untangle the direction of effects. For example, it is not possible to assume that if parents controlled their children when they played videogames, the children would spend less time playing. Although this seems reasonable, it remains to be verified. Moreover, although in the multivariate analysis we attempted to control for theoretically relevant predictors of the outcomes, it is possible that there are other unmeasured potential confounding variables, such as other family, social, and environmental characteristics, that influence the outcomes. Second, the methodology used for the quantification of watching TV, playing videogames, or using the computer was self-report by the respondents. Participants were assured that their responses would not be shared with others, and anonymity while answering the questionnaire may guarantee honest and more accurate self-report. Nonetheless, the possibility existed that students may either over-report or under-report their media time. Distortions motivated by the desire to please and reluctance to tell the truth might also be present. However, it has been shown that a self-administered questionnaire is valid for group comparisons regarding watching TV, playing videogames, or using the computer [24]. Despite these limitations, it should be recognized that an important strength of the study is the very high participation rate, which allows many comparisons, provides representative and generalizable information from a large and heterogeneous sample of adolescents in one region of Italy, and extends research on the different factors associated with watching TV, playing videogames, or using the computer.
Conclusion These results extend the understanding of TV viewing, videogame playing, and computer use and their relationships with different variables in children and adolescents, and immediate and comprehensive actions are needed in order to diminish the time spent on TV, videogames, and the computer.
2016-05-04T20:20:58.661Z
2009-05-13T00:00:00.000
{ "year": 2009, "sha1": "b5a85c74a1ea9cd5d08dcccdf2512814999584a3", "oa_license": "CCBY", "oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-9-139", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "48347aac7c24839192eac7c3787351730bb23da8", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
2817736
pes2o/s2orc
v3-fos-license
Bruch’s membrane abnormalities in PRDM5-related brittle cornea syndrome Background Brittle cornea syndrome (BCS) is a rare, generalized connective tissue disorder associated with extreme corneal thinning and a high risk of corneal rupture. Recessive mutations in the transcription factors ZNF469 and PRDM5 cause BCS. Both transcription factors are suggested to act on a common pathway regulating extracellular matrix genes, particularly fibrillar collagens. We identified bilateral myopic choroidal neovascularization as the presenting feature of BCS in a 26-year-old woman carrying a novel PRDM5 mutation (p.Glu134*). We performed immunohistochemistry of anterior and posterior segment ocular tissues, as neither the expression of PRDM5 in the eye nor the effects of PRDM5-associated disease on the retina, particularly on the extracellular matrix composition of Bruch’s membrane, have been described. Methods Immunohistochemistry using antibodies against PRDM5 and collagens type I, III, and IV was performed on the eyes of two unaffected controls and two patients (both with Δ9-14 PRDM5). Expression of collagens, integrins, tenascin and fibronectin in skin fibroblasts of a BCS patient with a novel p.Glu134* PRDM5 mutation was assessed using immunofluorescence. Results PRDM5 is expressed in the corneal epithelium and retina. We observe reduced expression of major components of Bruch’s membrane in the eyes of two BCS patients with a PRDM5 Δ9-14 mutation. Immunofluorescence performed on skin fibroblasts from a patient with p.Glu134* confirms the generalized nature of extracellular matrix abnormalities in BCS. Conclusions PRDM5-related disease is known to affect the cornea, skin and joints. Here we demonstrate, to the best of our knowledge for the first time, that PRDM5 localizes not only to the human cornea but is also widely expressed in the retina. Our findings suggest that ECM abnormalities in PRDM5-associated disease are more widespread than previously reported. Electronic supplementary material The online version of this article (doi:10.1186/s13023-015-0360-4) contains supplementary material, which is available to authorized users. Background Brittle cornea syndrome (BCS) is an autosomal recessive connective tissue disorder predominantly affecting the cornea, skin and joints [1][2][3][4]. Extreme corneal thinning (220-450 μm; normal range 520-560 μm) is the hallmark of the condition, and affected individuals are at high risk of corneal rupture, leading to irreversible blindness [1,2]. Other ocular features include blue sclerae, keratoconus, keratoglobus and high myopia. Extra-ocular manifestations include deafness, joint hypermobility, skin hyperelasticity, arachnodactyly, and developmental dysplasia of the hip [2]. BCS has been described in patients in the absence of extra-ocular features [2], and diagnosis prior to ocular rupture is possible in the presence of a high index of clinical suspicion. A role for PRDM5 in bone development [6] and in corneal development and maintenance [4] has been suggested. However, the exact localization of the protein in the human eye has not been described. We performed immunohistochemistry (IHC) in human eyes and found that PRDM5 localizes both to the corneal epithelium and to the retina. Aiming to gain insights into a role for this protein in the retina, we examined the deposition of ECM proteins in the retinas of BCS patients with a PRDM5 Δ9-14 mutation using IHC, and found ECM abnormalities within Bruch's membrane.
We also report abnormal expression of ECM components in fibroblasts from a BCS patient with high axial myopia and choroidal neovascularization carrying a novel p.Glu134* mutation in PRDM5, a patient who had not sustained a corneal rupture. These data suggest that ECM abnormalities in PRDM5-related disease are more widespread than previously reported, and suggest a role for PRDM5 in the retina. Participants and clinical evaluation Informed written consent was obtained and investigations were conducted in accordance with the principles of the Declaration of Helsinki, with Local Ethics Committee approval (NHS Research Ethics Committee reference 06/Q1406/52). Patients with BCS Patients P1 and P2 (PRDM5 Δ9-14) and P3 (PRDM5 p.Arg590*) have been previously described [4]. Diagnosis of BCS in P4 (PRDM5 p.Glu134*) was based on clinical examination and confirmed by mutation analysis of ZNF469 and PRDM5. Detailed ophthalmic examinations included anterior segment examination by slit lamp biomicroscopy, measurement of corneal thickness using pachymetry, retinoscopy, color photography, fluorescein angiography, and optical coherence tomography (OCT). A systemic workup including full blood count, coagulation screen and renal function analyses was performed. Clinical samples BCS-affected ocular tissue was obtained from the Department of Histopathology, Manchester Royal Infirmary. Human ocular tissue samples from control individuals were obtained from the Manchester Eye Bank (Table 1). Informed written consent and ethics committee approval were granted (14/NW/1495). Immunoblotting Fibroblast cell lysis and preparation of nuclear extracts were performed according to Schnitzler [7]. Total protein content was quantified using a BioRad protein quantification BCA assay (BioRad Laboratories). Skin fibroblast nuclear extracts (equal amounts of nuclear fraction protein) were subjected to standard SDS-PAGE and immunoblotted with the custom-made antibody PRDM5 Ab2 [6,8] at a concentration of 1 μg/ml and an anti-GAPDH antibody at a concentration of 2 μg/ml (Santa Cruz sc-47724). Membranes were blocked with TBST (0.1 % Tween 20) containing 5 % non-fat dry milk, and incubated with primary antibodies overnight. Visualization was performed with an enhanced chemiluminescence (ECL) detection system. Histology and immunohistochemistry Histological analysis was carried out in accordance with standard diagnostic protocols. 4 μm paraffin-embedded sections were stained with hematoxylin and eosin, and for elastin with van Gieson's stain. Immunohistochemistry was performed using PRDM5 Ab2 [6,8] and mouse monoclonal antibodies against collagen I (ab90395, Abcam), collagen III (ab6310, Abcam) and collagen IV (ab6311, Abcam). The PRDM5 Ab2 epitope is situated within the region corresponding to N-terminal amino acids 60-142. Staining was performed on a Ventana Benchmark XT Automated Immunostaining Module (Ventana Medical Systems) together with the XT ultraView Universal Red Alkaline Phosphatase detection system for all antibodies except PRDM5, where DAB was used as the chromogen. Heat-induced antigen retrieval was used for PRDM5, with no pre-treatment for collagens I, III and IV. Primary antibodies were diluted in Dako REAL™ Antibody Diluent (Dako, Agilent Technologies, UK) to the indicated optimal dilutions of 3.5 μg/ml for PRDM5, 3 μg/ml for collagen I, 10 μg/ml for collagen III, and 2.5 μg/ml for collagen IV. Sections of patient eye tissue were processed in parallel with the control tissue and were collected, sorted and fixed in an identical manner.
Tissue section slides were masked for origin and scored for detection of cells showing nuclear PRDM5 staining subjectively by an independent human observer using a binary scale (positive or negative). Tissues were considered positive when >20 % of the cells displayed nuclear PRDM5 staining [8]. Cell culture and immunofluorescence (IF) Polyclonal rabbit anti-fibronectin (FN) antibody, mouse anti-tenascin monoclonal antibody (clone BC-24), recognizing all the tenascins (TNs), and TRITC-conjugated rabbit anti-goat antibody were from Sigma Aldrich; mouse anti-α5β1 (clone JBS5) and anti-α2β1 (clone BHA.2) integrin monoclonal antibodies; and goat antitype I collagen, anti-type III collagen and anti-type V collagen antibodies were from Millipore Chemicon Int. Inc. (Billerica, MA). FITC-and TRITC-conjugated goat anti-rabbit and anti-mouse secondary antibodies were from Calbiochem-Novabiochem Int. (San Diego, CA, USA). Antibody dilutions were: anti-tenascin and anti-α5β1: 2 μg/ml; anti-α2β1: 4 μg/ml; anti-FN and anti-type I collagen, 10 μg/ml; anti-type III and type V collagen: 20 μg/ml. Primary dermal fibroblast cultures were established from skin biopsies by routine procedures, maintained and harvested as described [9,10]. 1.0 × 10 5 cells were grown for 48 h on glass coverslips, fixed in methanol and incubated with the specified antibodies as reported [9,10]. For analysis of integrins, cells were fixed in 3 % paraformaldehyde and 60 mM sucrose, and permeabilized in 0.5 % Triton X-100. Cells were reacted for 1 h at room temperature with 1 μg/ml anti-α5β1 and anti-α2β1 integrin monoclonal antibodies. Cells were subsequently incubated with 10 μg/ml FITC-or TRITCconjugated secondary antibodies. IF signals were acquired by a CCD black/white TV camera (SensiCam-PCO Computer Optics GmbH, Germany) mounted on a Zeiss fluorescence-Axiovert microscope, and digitalized by Image Pro Plus program (Media Cybernetics, Silver Spring, MD). Quantitative PCR Extracted total RNA was reverse-transcribed into singlestranded cDNA using a High Capacity RNA-to-cDNA Kit (Life Technologies, Paisley, UK), according to the manufacturer's instructions. RT-PCR and data analysis was performed as previously described [4]. The assay numbers for the mRNA endogenous control (GAPDH) and target gene were: GAPDH (Hs02758991_g1*) and ITA8 (Hs00233321_m1*) (Life Technologies). Cycles to threshold (CT) values were determined for each sample and its matched control and relative mRNA expression levels determined by the 2 −ΔΔCt method, providing the fold change value [11]. Error bars representing 95 % confidence intervals around the mean are represented for all experiments. P-values were derived using the 2tailed T-test with significance level set at 0.01 to compare results between mutant and wild-type cells. One-way ANOVA and Dunnett's multiple comparison posttest using mean values and standard error were also performed on fold change means in all groups assessed. Results PRDM5 mutations, functional consequences, and associated phenotypes A summary of clinical samples used in this study is shown in Table 1. The mutation PRDM5 Δ9-14, carried by P1 and P2 (whose clinical details are described in Burkitt-Wright et al. [4]), is an in-frame deletion mutation that we show here results in the production of a smaller, internally deleted protein confirmed by western blot analysis on skin fibroblasts of P1 (Fig. 1). The mutation p.Arg590* (P3) (whose description is also provided in Burkitt-Wright et al. 
[4]), also results in a protein truncation confirmed by western blot analysis on skin fibroblasts (Fig. 1 Fig. 2b and c, arrows), leading to retinal exudation (Fig. 2d,*). Management of CNV consisted of three consecutive monthly intravitreal injections of the anti-vascular endothelial growth factor (VEGF) agent ranibizumab (Lucentis, Novartis, Basel, Switzerland), achieving a complete resolution of the exudation and significant VA improvement to 0.70. In a follow-up period of 5 years there has been no disease recurrence. Systemic examination showed marfanoid habitus, scoliosis, arachnodactyly and hyperextensible joints. Genetic analyses revealed a novel homozygous nonsense mutation in PRDM5, c.400G > T p.Glu134*, confirming a diagnosis of BCS. The mutated nucleotide is in exon 4 (Fig. 2e) and is predicted to result in nonsense-mediated decay. This variant was not present in dbSNP132, the Exome Variant Server, or the 1000 Genomes Project (all accessed January 5 th 2015). Western blot analysis on skin fibroblasts of P4 did not detect a protein product, however overlap between the location of the mutation, and location of the antibody epitope was present. PRDM5 is expressed in the adult human cornea and retina Investigation of PRDM5 expression by immunohistochemistry shows nuclear PRDM5 expression in the corneal epithelium (Fig. 3a) and extensive nuclear and cytoplasmic PRDM5 expression in the retina (Fig. 3c), with absent expression in the retinal pigment epithelium (Fig. 3e, arrow) (control post-mortem eye #2). PRDM5 staining was also not apparent in Bruch's membrane. Expression was seen in cells of neuroectodermal origin particularly the retina, with the exception of nuclei of the smooth muscle of the ciliary body (mesodermal origin) and nuclei of the corneal epithelium (surface ectodermal origin) (Fig. 3 and Additional file 1: Table S1). Immunohistochemistry using PRDM5 Ab2 in P1 demonstrates loss of PRDM5 retinal nuclear and cytoplasmic staining (Fig. 3e). Structural abnormalities in Bruch's membrane in BCS patients Two patients, cousins, aged 10 (P1) and 21 (P2) with PRDM5-associated disease (previously described by Burkitt-Wright et al. [4]) were studied in order to assess whether there are any functional consequences to their retinas. As collagen expression is associated with PRDM5 levels, we studied the expression of PRDM5, collagens I, III, IV and elastin in Bruch's membranes of eyes from these two subjects. Both patients carried a large homozygous deletion (Δ exons 9-14) in PRDM5. The axial lengths of the two eyes were 20 mm (P1) and 21.8 mm (P2), determined histologically. We found absent or decreased staining in Bruch's membrane for collagens type I, III and IV in both BCS samples versus a control sample (Fig. 4). Van Gieson staining demonstrated normal elastin staining of Bruch's membrane in both samples (Additional file 1: Figure S1). No histological evidence of breaks in Bruch's membrane was evident. Extracellular matrix abnormalities in skin fibroblasts from BCS patient P4 with the novel PRDM5 mutation p.Glu134* Indirect immunofluorescence performed on skin fibroblasts from patient P4 showed downregulation of structural collagens I and III, fibronectin, all the tenascins (detected by a single antibody), and integrin receptors α2β1 and α5β1. The expression of type V collagen was disorganized in comparison to the control (Fig. 5). Discussion PRDM5-related disease is known to affect the cornea, skin and joints [1][2][3][4][5]. 
Here we show that PRDM5 localizes not only in the human cornea, but is also widely expressed in the retina. PRDM5 expression has been reported to be predominantly nuclear, for example in intestinal crypts where stem cells reside, with cytoplasmic protein from a patient with the presumed null mutation p.Glu134* does not produce any bands other than the non-specific band at 60 kDa expression in some tissues, including colonic villi [8]. We show both cytoplasmic and nuclear PRDM5 expression in the retina. We show reduced expression of major collagenous components of Bruch's membrane [12] (Additional file 1: Figure S2) in two patients with a deletion of exons 9-14 of PRDM5 (Fig. 4). An association between PRDM5 and altered collagen expression has been shown in previous studies [4][5][6]. Here we show that PRDM5 mutations lead to notable differences in the expression of ECM proteins in Bruch's membrane that may impinge on its structural integrity. We identified myopic choroidal neovascularization (CNV) [13] in a 26-year old lady with PRDM5-related BCS, suspected to have the disease due to the presence of corneal thinning and marfanoid features. High myopia is a significant risk factor for the development of both CNV and retinal detachment [13] and has been described in a number of patients with BCS [2]. Patients with BCS may therefore benefit from daily monocular monitoring with an Amsler chart, to detect early metamorphopsia (visual distortion), with urgent referral to an ophthalmologist if metamorphopsia develops. CNV has been described in a number of connective tissue disorders including pseudoxanthoma elasticum [14], Beals-Hecht syndrome [15], Ehlers-Danlos syndrome [16], and osteogenesis imperfecta [17]. Associations between connective tissue disease and CNV have however seldom been investigated at the histological and cellular levels. PRDM5 mutations may further contribute to weaknesses at the level of Bruch's membrane caused by myopia, a hypothesis consistent with our immunohistochemical results, although IHC was only performed on two patients with a PRDM5 Δ9-14, which results in the production of a truncated protein product. Retinal basement membrane abnormalities have been also noted upon ultrastructural studies of retinas from patients with Alport syndrome, caused by mutations in different transcripts of the collagen IV gene [18]. We looked at expression of fibronectin, integrins and tenascins in dermal fibroblasts. Fibronectin is present in Bruch's membrane (Additional file 1: Figure S2), and integrins α2β1 and α5β1 are the major integrin receptors for collagen and fibronectin, respectively. Integrin α8 is the major tenascin receptor [19] and homozygous mutations in tenascin X cause a subtype of Ehlers-Danlos syndrome [20,21]. When present in dermal fibroblasts, the PRDM5 p.Glu134* mutation results in the absence of tenascin staining (Fig. 5). These findings are consistent with our previous study examining patient fibroblasts from patients with the Δ9-14 and p.Arg590* PRDM5 mutations [4]. Here we confirm that expression of integrins α2β1, α5β1 is markedly reduced (Fig. 5). We also note reduced RNA expression levels of integrin α8 in two patients (Additional file 1: Figure S3). Integrin α8 is the major tenascin receptor [19]. Tenascins are a family of four extracellular matrix proteins, tenascin X and C are major isoforms expressed in ocular tissues [22]. 
The altered expression of collagen V in fibroblasts lacking PRDM5, together with the absence of tenascins, is reminiscent of a subtype of autosomal recessive Ehlers-Danlos syndrome characterized by tenascin X deficiency [20,21]. Tenascin X is a large ECM glycoprotein abundantly expressed during development and in adult tissue strongly associated with ocular basement membranes (Bowman's layer and Descemet's membrane in particular) [22]. Our data suggests that PRDM5 may play a role in the regulation of collagen, integrin and tenascin expression, proteins that participate in ocular basement membrane development including Bruch's membrane Table 1). Image objective magnifications (OM) are shown. PRDM5 staining is brown (obtained using DAB as the chromogen). a. Nuclei of the corneal epithelium are positive for PRDM5 (arrow). The corneal stroma shows mild cytoplasmic staining, but nuclei are negative (<). b. PRDM5 is not expressed in the corneal endothelium (arrow) (in this image the endothelium has become detached from the overlying stroma). c. Positive PRDM5 nuclear and cytoplasmic staining in the inner (*) and outer nuclear layers (**), with cones staining more strongly than rods. Ganglion cell cytoplasm are also positive (arrow). Ocular tissue from patient P1 with deletion exons 9-14 is shown in (e). GCL, ganglion cell layer; INL, inner nuclear layer; ONL, outer nuclear layer; PR, photoreceptors, RPE; retinal pigment epithelium. d. Retinal pigment epithelium cell nuclei (arrow) appear negative. e. PRDM5 staining in P1 with PRDM5 deletion exons 9-14, demonstrating loss of PRDM5 nuclear and most cytoplasmic staining in the retina associated with loss of antibody epitope Fig. 4 Changes in extracellular matrix collagens in Bruch's membrane in PRDM5-associated disease. Immunohistochemistry performed for collagens I, III and IV (red stain, obtained using the XT ultraView Universal Red Alkaline Phosphatase detection system) in retinas of a control individual, and BCS patients P1 and P2 (OM 40×). Absence of staining for collagens I, III and IV in Bruch's membrane (arrow) is present in sample P1, and absent staining for collagen I with reduced staining for collagens III and IV in P2. Images were recorded and processed identically to allow direct comparisons to be made between them Fig. 5 Expression of ECM components in dermal fibroblasts expressing or lacking PRDM5. Indirect immunofluorescence microscopy performed on dermal fibroblasts from a control (#2) and BCS patient P4 carrying the PRDM5 p.Glu134* mutation for collagens I, III and V, fibronectin, tenascins and integrin receptors α2β1 and α5β1. In control cells, collagen I is primarily expressed in the cytoplasm, with only limited expression extracellularly. Collagen I labelling is substantially reduced in PRDM5 mutant cells. Collagen III appears well organized in the ECM of control cells but was absent in the mutant fibroblasts where only diffuse cytoplasmic staining was visible. Furthermore, disarray of collagen V was evident with cytoplasmic accumulation and reduced extracellular matrix in the PRDM5 mutant cells. Expression of the collagen integrin receptor α2β1 was essentially abolished, with marked reduction in the fibronectin integrin receptor α5β1 and disorganization of fibronectin matrix, in PRDM5 mutant cells compared to control cells. Tenascins were organized in an abundant extracellular matrix in control cells whereas they were not detectable in the PRDM5 mutant cells. 1 cm on the image scale corresponds to 16 μm. 
Images were all recorded under identical parameters to allow for direct comparison [12,22]. A role for PRDM5 as a direct activator of collagen genes has been reported with direct binding of PRDM5 in conjunction with RNA polymerase II shown in a ChIP-sequencing experiment performed on murine MC3T3 cells (6). This role is also supported by the observation of a significant downregulation of structural collagens in fibroblasts of patients with BCS2 (4). While it is also possible that the basement membrane structural abnormalities observed in Bruch's membrane also involve the corneal endothelial and/or epithelial basement membranes, the presence of extreme corneal scarring and ECM changes linked to tissue remodelling precluded this analysis. Conclusions PDRM5-related disease is known to affect the cornea, skin and joints. Our study shows expression of PRDM5 in the human cornea and retina, and demonstrates downregulation of major structural components of Bruch's membrane in the eyes of two patients with BCS type 2. These findings suggest that ECM abnormalities in PRDM5-associated disease are more widespread than previously reported. Additional file Additional file 1: Table S1. Ocular PRDM5 expression and embryonic origin of ocular structure. Figure S1. Elastin staining in Bruch's membrane in PRDM5-associated disease. Figure S2. The 5 layers of Bruch's membrane (modified from Booji et al, 2010) [15]. Figure S3. qPCR assessment of target gene ITGA8 demonstrating fold change in mRNA expression in dermal fibroblasts isolated from BCS patients with different mutations: PRDM5 p.Arg590* (P3) and PRDM5 internal deletion of exons 9 to 14 (P1), versus sex and age-matched control fibroblast cell lines in the logarithmic scale (Log2RQ). (DOCX 5201 kb)
2016-05-12T22:15:10.714Z
2015-11-11T00:00:00.000
{ "year": 2015, "sha1": "db76be4c36ced1bdedfc09fef4f919ea79a8d002", "oa_license": "CCBY", "oa_url": "https://ojrd.biomedcentral.com/track/pdf/10.1186/s13023-015-0360-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "eecaef721951777f028af858a60dbe4058e6c40a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
119277332
pes2o/s2orc
v3-fos-license
Steady Flow for Shear Thickening Fluids with Arbitrary Fluxes We solve the stationary Navier-Stokes equations for non-Newtonian incompressible fluids with shear dependent viscosty in domains with unbounded outlets, in the case of shear thickening viscosity, i.e. the viscosity is given by the shear rate to the power p-2 where p>2. The flux assumes arbitrary given values and the Dirichlet integral of the velocity field grows at most linearly in the outlets of the domain. Under some smallness conditions on the"energy dispersion"we also show that the solution of this problem is unique. Our results are an extension of those obtained by O.A. Ladyzhenskaya and V.A. Solonnikov (J. Soviet Math., 21 (1983) 728-761) for Newtonian fluids (p=2). Introduction The Navier-Stokes system for stationary incompressible flows in a domain with unbounded straight outlets, with the velocity field converging to parallel flows (Poiseuille flow) in the ends of the outlets, was solved first by C. Amick [2] in the 1970s. This problem is known as Leray problem, cf. [2, p. 476]. Amick's solution assumes the fluxes of the fluid in the outlets to be sufficiently small, which turns out to be a sufficient condition to deal with the convective (nonlinear) term in Navier-Stokes equations. It is an open problem to solve Leray problem for arbitrary fluxes. Alternately, Ladyzhenskaya and Solonnikov [15] considered the stationary Navier-Stokes equations not demanding the fluid to be parallel in the ends of the outlets, but instead having arbitrary fluxes. In this case, the outlets do not need to be straight and they solved this new problem for domains having arbitrary uniformly bounded cross sections and with the fluid having arbitrary fluxes. Besides, their solution has the property that the Dirichlet's integral of the velocity field of the fluid grows at most linearly with the direction of each outlet, and they also proved that this solution is unique under some additional smallness condition. In this paper we extend the Ladyzhenskaya-Solonnikov's theorem, i.e. "Theorem 3.1" in [15], for power-law shear thickening fluids, i.e. incompressible non-Newtonian fluids obeying the power law when p > 2. Here, S is the viscous stress tensor, v is the velocity field of the fluid and D(v) is the symmetric part of velocity gradient ∇v (i.e. D ij (v) = 1 2 ( ∂v j ∂x i + ∂v i ∂x j ) for v = (v 1 , · · · , v n ), i, j ∈ {1, · · · , n}, n ∈ N). For p = 2, the fluid is Newtonian. If 1 < p < 2, the fluid is called shear thinning (or plastic and pseudo-plastic) and if p > 2, shear thickening (or dilatant). In engineering literature the power law (1.1) is also known as Ostwald-De Waele law (see e.g. [6]). Corresponding to (1.1) we have the following system of equations modelling the flow of an incompressible fluid in a stationary regime: where P is the pression function of the fluid (and v is the velocity field, as already indicated above). This model equations are also referred to as Smagorinsky model, due to [22], or Ladyzhenskaya model, due to [12,13,14]. A related model where the viscosity is given by |v| p−2 , instead of |D(v)| p−2 , is considered in [16]. For this case, it is shown in [16,Remark 5.5 in Chap.2, §5.2] the existence of a (weak) solution for system (1.2) in a bounded domain with homogeneous Dirichlet boundary condition, for p ≥ 3n n+2 . There are many results concerning the solution of (1.2) in bounded domains. For instance, in [9] the existence of a solution for (1.2) is obtained under the weaker condition that p ≥ 2n n+2 . 
In unbounded domains there are not so many results. For parallel fluids we can identify v with a scalar function v and the system (1.2) reduces to the p -Laplacian equation (1.3) − div(|∇v| p−2 ∇v) = c for some constant c (related to the "pressure drop"). So, we can consider the Leray problem for (1.2), i.e. the solution of (1.2) in a domain with straight outlets with the velocity field tending to the solution of (1.3) in the ends of the outlets. This problem was solved by E. Marušić-Paloka [17] under the condition that the fluxes are sufficiently small and p > 2, thus extending Amick's theorem [2] for power fluids with p ≥ 2. As far as we know, the Leray problem for (1.2) when p < 2 (with small fluxes) is an open problem. In this paper, as we mentioned above, we extend Ladyzhenskaya-Solonnikov's theorem [15,Theorem 3.1] for (1.2) when p > 2. More precisely, we obtain the existence of a solution v to the system (1.2) for n = 2, 3, and p ≥ 2, in a domain Ω with unbounded outlets, specified in the next Section, for any given fluxes in the outlets and homogeneous Dirichlet boundary condition v|∂Ω = 0. The "Dirichlet integrals" |∇v| p of our solution grows at most linearly with the direction of the outlets (see (2.1) 5 in Section 2). Besides, we observe that these integrals over portions of the outlets with a fixed 'length' are bounded by a constant that tends to zero with the flux (see Proposition 4.2 and Remark 4). Under this condition and some aditional one, we have uniqueness of solution (see Theorem 4.4). All these facts were obtained in [15] for the case p = 2, but the power-law model ((1.2) with p = 2) was not treated in [15]. In the next two paragraphs we look at some facts relating to the case p = 2. First, to deal with the nonlinear term div(|D(v)| p−2 D(v)) one can use the monotone method of Browder-Minty. Secondly, we extend the technique employed in [15] to obtain the existence of a solution, which, in particular, consists in first solving the problem in a bounded truncated domain and then taking the limit when the parameter of the truncation tends to infinity, to obtain a solution in the whole domain. To take this limit we need first uniform estimates with respect to the truncation parameter for the solution in the truncated domain, and this is obtained by integrating by parts the equation times the solution in some fixed bounded domain. Then we need the regularity of the solution in bounded domains, more precisely, that the solutions have velocity field at least in the Sobolev space W 2,l and pressure in W 1,l , for some positive number l, due to the boundary terms that comes from the integration by parts. However this regularity is not expected for the weak solutions of (1.2), if p = 2. To overcome this difficult, when dealing with (1.2) in a truncated bounded domain we modify it to where T > 0 is the truncation parameter. See Proposition 4.1 in Section 4. As in [23] and [15], and in several subsequent papers, here the velocity field v is sought in the form v = u+a, where u is the new unknown with zero flux and a is a constructed vector field carrying the given fluxes in the outlets (i.e. if the given flux in an outlet with cross section Σ is α then Σ a·n = α and Σ u·n = 0, where n is the unit normal vector to Σ pointing toward infinity). This vector field a depends on the geometry of the domain and, in the aforementioned papers, its construction is very tricky and makes use of the Hopf cutoff function (see [23,15]). 
In the case of power-law fluids (1.2) with p > 2 we found out that the construction of a can be quite simplified. Indeed, a key point in the construction, in any case, is to obtain a vector field a that controls the quadratic nonlinear term (u∇u)a, which appears after substituting v = u + a in (1.2) and multiplying it by u. That is, to obtain a priori estimates, one multiplies the first equation in (1.2) by u and try to bound all the resulting terms by the 'leading' term |D(u)| p . In [15] it is shown that for any positive number δ there is a vector field a which, in particular, satisfies the estimate Ωt |u| 2 |a| 2 ≤ cδ 2 Ωt |∇u| 2 for some constant c indepedent of δ, u and Ω t , where Ω t is any truncaded portion of the domain with a length of order t. Looking at their construction and using Korn's inequality it is possible to show that where p ′ is the conjugate exponent of p, i.e. p ′ = p/(p − 1). When p = 2 this estimate reduces to | Ωt (u∇u)a| ≤ cδ Ωt |∇u| 2 . With this estimate we can estimate the integral of (u∇u)a in the truncated domain Ω t , by using Hölder inequality: Thus we can control the nonlinear term (u∇u)a by taking necessarily δ sufficiently small. When p > 2, proceeding similarly and using also Korn's inequality, we obtain Then, by Young inequality with ǫ, we have for some new constant C ǫ . From this estimate, we can control the nonlinear term (u∇u)a by taking ǫ sufficiently small, and so we do not need to construct the vector field a satisfying the estimate (1.5) for a sufficiently small δ. See Section 4 for the details. In fact, if a is only a (smooth) bounded divergence free vector field vanishing on ∂Ω, then, by Poincaré, Hölder and Korn inequalities, and the fact that our domain has uniformly bounded cross sections and p/p ′ = p − 1 > 1 (p > 2), we have The plan of this paper is the following. Besides this introduction, in Section 2 we introduce the main notations and set precisely the problem we will solve, state a lemma about the existence of the vector field a, carrying the flux of the fluid, and state our main theorem (Theorem 2.2). In Section 3 we state some preliminaries results we need to prove our main results. In Section 4 we prove our main theorem, make some remarks and prove a result about the uniqueness of our solution. Ladyzhenskaya-Solonnikov problem for power-law fluids In this section we set notations and the problem we are concerned with and state a lemma and our main theorem. We denote by Ω a domain in R n , n = 2, 3, with a C ∞ boundary, of the following type: where Ω 0 is a bounded subset of R n , while, in different cartesian coordinate system, with Σ i (x 1 ) being C ∞ simply connected domains (open sets) in R n−1 , and such that, for constants l 1 , l 2 , 0 < l 1 < l 2 < ∞, they sastify and contain the cylinders For simplicity, we will denote by Σ any of the cross sections Σ i ≡ Σ i (x 1 ) or, more generaly, any cross section of Ω, i.e., any bounded intersection of Ω with a (n−1) -dimensional plane. We will denote by n, the ortonormal vector to Σ pointing from Ω 1 toward Ω 2 i.e. in the above local coordinate systems, we have n = (1, 0) (where 0 ∈ R n−1 ) in both outlets Ω 1 and Ω 2 . 
With these notations, the flux through any cross section Σ of Ω of an incompressible fluid in Ω with velocity field v vanishing on ∂Ω, is given by the 'surface' integral Σ v · n (notice that by the divergence theorem applied to the region bounded by ∂Ω, Σ 1 and Σ 2 , we have Σ 1 v · n = Σ 2 v · n, for any cross sections Σ 1 and Σ 2 of Ω 1 and Ω 2 , respectively). We remark that we take our domain Ω with only two outlets Ω i , i = 1, 2, just to simplify the presentation, i.e. we can take Ω with any finite number of outlets with no significant change in the notations, results and proofs given in this paper. We shall use the further notations, where U is an arbitrary subdomain of Ω, s > t > 0 and 1 ≤ q < ∞: In these notations, the set Ω t -a bounded cut of Ω with a "length" of order t -will be taken usually for large t, so this notation will not cause confusion with the (unbounded) outlets Ω i , where i = 1, 2. By W 1,q (U) and W 1,q 0 (U) we stand for the usual Sobolev spaces, consisting of vector or scalar valued functions, and W 1,q loc (U ) is the set of functions in W 1,q (V ) for any bounded open set V ⊂ U. Often when it is clear from the context we will omit the domain of integration in the notations. The notation |E| will stand for the Lebesgue measure of a Lebesgue measurable set E in the dimension which is clear in the context. Finally, the same symbol C, c, C · or c · will denote many different constants. In this paper, we are concerned with the following problem: given α ∈ R, find a velocity field v and a pressure P such that [15] (for the case p = 2). Here, and throughout, we use the notation for any velocity fields v = (v 1 , · · · , v n ) and w = (w 1 , · · · , w n ) defined in Ω such that the last expression on the right makes sense. To solve (2.1), we seek a velocity field v in the form v = u + a, where u is a vector field with zero flux and a will carry the flux α, i.e. Σ u · n = 0 and Σ a · n = α. More precisely, we shall take a to be a vector field having the properties given by the following lemma. Lemma 2.1. For any p ≥ 2, there exists a smooth divergence free vector fieldã, which is bounded and has bounded derivatives in Ω, vanishes on ∂Ω, and has flux one, i.e. Σã = 1 over any cross section Σ of Ω. In particular, given α ∈ R, the vector field a = αã is a vector field preserving all these properties but having flux α and else satisfying the following estimates: where p ′ = p/(p−1) and c is a constante depending only onã, p and Ω. The proof of this lemma is given in Section 4. Definition 1. A vector field u is said to be a weak solution to the problem (2.2) if it has the following properties: Similarly, a vector field v is said to be a weak solution to the problem for all ϕ ∈ D(Ω). Remark 1. The use of divergence free test functions ϕ in (2.4) eliminates the pressure P, but it is a standard fact that it can be recovered due to 'De Rham's lemma' (cf. e.g. [10, Lemma IV.1.1]). We end this Section stating our main theorem, which we prove in Section 4. Theorem 2.2. Let p ≥ 2. Then, for any α ∈ R, problem (2.1) has a weak solution v, in the sense of Definition 1. Preliminary results In this Section we give some preliminary facts we shall need to prove our mains results in Section 4. We begin with Lemma 3.1 below, which is due to Ladyzhenskaya and Solonnikov [15,Lemma 2.3]. Our statement below differs slightly from [15] and, for convenience of the reader, we present its proof, which essentially can be found in [15] and [20,21]. 
for all t ∈ [t 0 , T ], and z(T ) ≤ ϕ(T ), then is a non identically zero and non decreasing differentiable function, and satisfies the inequality Then, by the first inequality, we have z(t 1 ) < δ −1 Ψ(z ′ (t 1 )), and so, using the second inequality, we have also for all t on a neighborhood on the right of t 1 , and so, taking t 2 to be the supremum of these points in (t 1 , T ), we have t 1 < t 2 < T and, by the previous ). Notice that λ > 0, since Ψ(0) = 0 and Ψ is strictly increasing. As z is a nondecreasing function, we have that z(t) ≥ z(t 1 ) for all t ≥ t 1 . Then we claim that z(t) ≥ z(t 1 ) + λ(t − t 1 ) for all t ≥ t 1 . Indeed, the inequalities z(t) ≥ z(t 1 ) and z(t) ≤ Ψ (z ′ (t)) imply z ′ (t) ≥ Ψ −1 (z(t)) ≥ Ψ −1 (z(t 1 )) = λ. Thus, we have shown the first statement in part 2) of the Lemma. For the remainder, notice that, since lim t→∞ z(t) = ∞, there exists a r such that z(t) ≥ τ 1 for all t > r, so from Ψ(τ ) ≤ cτ m and z(t) ≤ Ψ(z ′ (t)) we have z(t) ≤ c(z ′ (t)) m for all t > r, and the results then follow by direct integrating this inequality. In the next lemma we collect three very useful inequalities. The first can be found in many texts, as for instance in [7] and [3, Lemma 2.1, p. 526]. The third inequality contains Korn's inequality (see [18] The last one is a classical Poincaré type inequality; see e.g. [10, p.56]. In these inequalities, c 1 , c 2 are positive constants depending only on p and, for the last two, on the domain U. for all x, y ∈ R n and p ≥ 2. * In [18], Korn's inequality is stated for dimension three. The result in dimension two can be obtained from the one in dimension three by extending the domain U ⊂ R 2 to U × (0, 1) and the vector field v : In ii) and iii), U is an arbitrary bounded domain of R n , n = 2, 3, with a smooth boundary, Γ is any Lebesgue measurable subset of ∂U with positive measure, and 1 ≤ p < ∞. Next, we state a corollary of Brouwer fixed point theorem. Lemma 3.4. Let U be a locally Lipschtzian and bounded domain in R n , n ≥ 2, and 1 < q < ∞. Then there is a constant c such that, for The final result of this Section regards the regularised distance function to the boundary of a domain (an open connected set) in R n . Then, there is a function ρ ∈ C ∞ (V ) such that for every x ∈ V and any derivative ∂ β , β = (β 1 , · · · , β n ) ∈ Z + , we have where k β is a constant depending only on β and n. Proof of Theorem 2.2 and other results In this section we prove Lemma 2.1 and our main theorem -Theorem 2.2. Besides, we make some remarks, prove a Proposition on the 'uniform' distribution of energy dissipation (Proposition 4.2) and a Theorem regarding the uniqueness of solution of problem (2.1). We begin by proving Lemma 2.1. As we observed in the Introduction, the proof of this lemma (the construction of a) is simpler in this paper (i.e. for the case p > 2) than for the classical one for newtonian fluids (p = 2). For the construction in the case p = 2, see [15, p.744 Proof of Lemma 2.1. Suppose we have a vector fieldã as in Lemma 2.1. Then the statements with respect to a = αã follow, with c depending on p, sup |x 1 |>0 |Σ|, sup Ω |ã| and sup Ω |∇ã|. Indeed, for property Lemma 2.1 i), see (1.8). For property ii), we have Ω i,t−1,t |∇a| p ≤ (sup |∇ã| p )(sup |Σ|)|α| p and iii) follows from ii): To construct a vector fieldã with the properties in the statement of Lemma 2.1, first we observe that it is enough to construct in each outlet Ω i a vector field a i satisfying these properties in Ω i . 
Indeed, if we have this, then we can obtain the desired vector fieldã defined in Ω by using appropriate cutoff functions. We omit this part of the proof and refer to [10, cap.VI] for a similar procedure in a domain with straight outlets and Poiseuille flows in place of the vector fields a i , to be constructed below. We first constructã in the case n = 2. By what we observed above, it is enough to construct the vector fieldã in an arbitrary outlet Ω i , which we shall denote by Ω in this proof. Without loss of generality, we take for all x 1 ∈ R. (l 1 < l 2 are positive numbers introduced in Section 2.) Then we setã for ζ(x 1 , x 2 ) = ψ(x 2 /ρ(x)), where ρ(x) is the regularised distance to ∂Ω (see Lemma 3.5) and ψ : R → R is a smooth nondecreasing function such that ψ(s) = 0 if s < 0 and 1, if s > 1. We notice that ζ is identically zero in the 'lower band' {x ∈ Ω ; f 1 (x 1 ) < x 2 < 0} and identically one in a neighborhood of the 'upper boundary' {x ∈ ∂Ω ; x 2 = f 2 (x 1 )}. In particular,ã is a divergence free bounded vector field vanishing on a neighborhood of ∂Ω and Now, because ζ is constant in a neighborhood of each of the two components of ∂Ω, we have that any derivative of ζ is zero in this neighborhood and, thus, bounded in Ω. Thenã and its derivatives are bonded function in Ω. In the case n = 3, we take ζ(x 1 , x ′ ) = ψ(|x ′ |/ρ(x)), x ′ ≡ (x 2 , x 3 ) ∈ R 2 , where ρ(x) is the regularised distance to ∂Ω (see Lemma 3.5), ψ is as above, but ψ(s) = 0 if s < 1 and 1, if s > 2. Then we set . Notice that ζ constant for x ′ close to zero and equal to one in a neighborhood of ∂Ω (i.e. ρ(x) close to zero), and thus, ζ is a smooth function with bounded derivatives, vanishing in neighborhoods of x ′ = 0 and ∂Ω. Therefore,ã is a smooth function vector with bounded derivatives. Beside, it is divergence free, and, by Stokes theorem in the plane, we have Σã · n = ∂Σ bdσ = 1. To solve problem (2.2), first we shall solve the truncated modified problem, T > 0: Then we will use Lemma 3.1 to obtain a weak solution of (2.2) by taking the limit, when T → ∞, in the solution u T of (4.1), extended by zero outside Ω T . Proof. The regularity part, i.e. (u T , P) ∈ W 2,l (Ω t ) × W 1,l (Ω t ), for any t ∈ (0, T ), is a corollary of the proof of Theorem 1.2 in [4]. Notice that if (u T , P) is a weak solution with u T in D 1,p 0 then v = u T + a is a weak solution in W 1,p (Ω T ) of (4.2) The fact that we do not have here the homogeneous Dirichlet boundary condition v = 0 here in the whole boundary ∂Ω T does not affect the method given in [4] because a = 0 in (∂Ω T ) ∩ (∂Ω) and the remaing part of ∂Ω T , i.e., (∂Ω T )/(∂Ω), is interior to Ω T . Then we have only to show the existence of a weak solution for (4.1). For simplicity, most of the time in this proof we shall write Ω T = Ω and u T = u. Also we keep the notation (·, ·) with the integration over Ω = Ω T in this proof, i.e. for (vector) functions v, w such that v · w ∈ L 1 (Ω T ), (v, w) = Ω T v · w. We will apply the Galerkin method and the monotonicity method of Browder-Minty (cf. [8,Remark,p. 497]). The Browder-Minty method is used due to the nonlinear term in the left hand side of (4.1) 1 . Now we want to pass to the limit in (4.4) when m → ∞ and obtain it with u in place of u m and with any ϕ ∈ D(Ω) in place of ϕ j . We begin by defining the operators Notice that D(w)+D(a) ∈ L p ′ (Ω) because p > 2 ⇒ p ′ < p and Ω = Ω T is a bounded domain. We also write B(w) = − (w · ∇w + w · ∇a + a · ∇w + a · ∇a) . 
and notice that the two last terms converge to zero, when m → ∞, by the estimates above we used to obtain (4.14). It is easy to see, using again (4.11) and the fact that p > 2 and Ω = Ω T is bounded, that we have also lim C(u m ), u m = C(u), u . and (4.18) Ωt u · ∇u · a − a · ∇u · u − a · ∇a · u ≤ |u| 1,p,Ωt Ωt |a| p′ |u| p ′ 1/p ′ + Ωt a · ∇u · u − a · ∇a · u ≤ ε|u| p 1,p,Ωt + C ε t, where ε > 0 is fixed below. Besides, proceeding as in (4.8), we get Then, from (4.16)-(4.19) and taking ε ≪ 1, we obtain Now the idea is to control the boundary integral I by the interior integral y(t), but if for instance one tries to apply the trace theorem then higher order derivatives arise. To achieve that purpose we use the clever idea given in [15] for the case p = 2, that is, to integrate I ≡ I(t) from η − 1 to η, for η > 1, or better, integrate the estimate (4.21). Thus we introduce the function Notice that since y is a nondecreasing function we have y(η − 1) ≤ z(η) ≤ y(η) for all η > 1, thus estimating y is the same as estimating z. Another interesting feature of the function z is that Then if we estimate η η−1 I(t)dt in terms of |u| p 1,p,Ω η−1,η and 1 T |u| 2 1,2,Ω η−1,η , in the end, in virtue of (4.21), we shall obtain a estimate for z(η) in terms of z ′ (η). Then we shall use Lemma 3.1 to get the desired estimate for z(η). Let's do the details. Having the estimate (4.33), we complete now the proof of our main result. Proof of Theorem 2.2. Let u k be the solution of (4.1) in Ω k , k = 3, 4, · · · , whose existence is assured by Proposition 4.1, and set u k = 0 in Ω/Ω k . By (4.33), for each j = 2, 3, · · · , the sequence {u k } k≥j+1 is weakly compact in W 1,p (Ω j ), thus, by a diagonalization process we obtain a subsequence, which we also denote by {u k }, and an u in W 1,p loc (Ω) such that for any t > 0, where q ≥ 1 is arbitrary, if p ≥ n, and less than p * := 3p 3−p , if n = 3 and p < 3. (Cf. (4.11)). Besides, by (4.34) 1 , the estimate (4.33) and the fact that u k ∈ D 1,p 0 (Ω), we have that the limit u satisfies (2.2) 2 -(2.2) 5 . Then, to conclude the proof of Theorem 2.2, it remains to prove that u satisfies the equation (2.2) 1 , in the weak sense (2.4). Again, we shall use the Browder-Minty method, due to the shear dependent viscosity. The idea here is to mimic the proof of Proposition 4.1, paying attention that now Ω is not a bounded domain and D(u) is only locally integrable in Ω. This lead us to localize the arguments and operators used in that proof, as follows. Given ϕ ∈ D(Ω), letting k 0 ∈ N such that supp ϕ ⊂ Ω k 0 −1 , we have (4.35) and B(w) = − (w · ∇w + w · ∇a + a · ∇w + a · ∇a) . Then, we want to pass to the limit in (4.35) when k → ∞ and obtain (2.4). Let ζ : Ω −→ R + be a smooth function such that ζ = 1 in supp ϕ and ζ = 0 in Ω \ Ω k 0 and A ζ , A ζ,k be the operators defined by on the space We notice, as ζ is a nonnegative function, that A ζ,k is still a monotone operator. Besides, {A ζ,k (u k )} is a bounded sequence in V * 0 , then, up to a subsequence, we have A ζ,k (u k ) * ⇀ χ ζ for some χ ζ in V * 0 . As in (4.14), we also have Then, by (4.36), we obtain χ ζ , ϕ = (B(u), ϕ), so it remains to show that χ ζ = A ζ (u). To obtain this, from the monotonicity of A ζ,k , it is enough to prove that A ζ,k (u k ), u k converges to χ ζ , u . Indeed, for all w ∈ V 0 and, by (4.34), A ζ,k w, u k and A ζ,k w, w tend, respectively, to A ζ w, u and A ζ w, w , when k → ∞. 
Then, once we have lim k→∞ A ζ,k (u k ), u k = χ ζ , u , we shall have χ ζ −A ζ (u−λw), w ≥ 0 for all w ∈ V 0 and all λ ≥ 0, and by Lebesgue's dominated convergence theorem, lim λ→0+ A ζ (u − λw), w = A ζ (u), w , hence χ ζ = A ζ (u). Let us show then that lim k→∞ A ζ,k (u k ), u k = χ ζ , u . We compute χ ζ , u and lim k→∞ A ζ,k (u k ), u k using directly the equation (4.1) 1 , with T = k. Multiplying this equation by ζu and integrating by parts in Ω k 0 , we arrive at (4.40) where P k is the pressure function associated with u k . From (4.33), we have Similarly to the proof of (4.14), we also have Next, we show that (4.44) for some further subsequence of k → ∞, where, up to a constant, P is the pressure function associated with u. For this, it is enough to show that there is a P ∈ L p ′ (Ω k 0 ) (p ′ = p/(p − 1)) such that P k ⇀ P in L p ′ (Ω k 0 ), i.e. {P k } is uniformly bounded in L p ′ (Ω k 0 ). Let us assume, without loss of generality, Ω k 0 P k dx = 0. Writing by Lemma 3.4 there exist a constant c (independent of k) and a vector field ψ ∈ W 1,p 0 (Ω k 0 ) such that Notice that Ω k 0 gdx = 0, g ∈ L p (Ω k 0 ) and g p,Ω k 0 ≤ 2 P k Then, (4.46) where, for the last iguality, we used equation (4.1) 1 . Using again (4.45) and previous estimates, it follows that (4.47) as we wished. From (4.40)-(4.44), we obtain (4.48) Now, replacing u by u k in (4.40), we have (4.49) and taking the limit when k → ∞ in the right hand side here, analogously to the steps we did to obtain (4.48), we get the right hand side of (4.48), i.e. Next, we make some remarks and prove two additional results, one on the rate of dissipation of energy of the solution obtained for problem (2.1) and another on the uniqueness of solution. Remark 2. Dropping the convective term v · ∇v in (2.1) 1 , we obtain the Ladyzhenskaya-Solonnikov problem for Stokes' system with a power law. The solution of this problem can be obtained as in the proof of Theorem 2.2, with obviously much less computations. The solution of problem (2.1) has energy dissipation uniformly distributed along the outlets. More precisely, we have the following result, which generalizes Theorem 3.2 in [15] for power law shear thickening fluids. We now state and prove our uniqueness result.
2011-08-17T22:37:10.000Z
2011-08-17T00:00:00.000
{ "year": 2012, "sha1": "cdacc45810e5eff2e64fcf065a8ebe20290e376a", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jde.2011.11.025", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "cdacc45810e5eff2e64fcf065a8ebe20290e376a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
3110888
pes2o/s2orc
v3-fos-license
IL-17A stimulates the production of inflammatory mediators via Erk1/2, p38 MAPK, PI3K/Akt, and NF-κB pathways in ARPE-19 cells. PURPOSE To investigate the signaling pathways involved in interleukin (IL)-17A -mediated production of interleukin 8 (CXCL8), chemokine (C-C motif) ligand 2 (CCL2), and interleukin 6 (IL-6) by ARPE-19 cells, a spontaneously arisen cell line of retinal pigment epithelium (RPE). METHODS Flow cytometry analysis and western blot were used to detect the phosphorylation of extracellular signal-regulated kinases 1/2 (Erk1/2), p38 mitogen activated protein kinase (MAPK) and protein kinase B (PKB; Akt) in ARPE-19 cells stimulated with IL-17A. These cells were further pretreated with a series of kinase inhibitors and followed by incubation with IL-17A. CXCL8, CCL2, and IL-6 in the supernatant were quantified by enzyme-linked immunosorbent assay (ELISA). RESULTS Coculture of ARPE-19 cells with IL-17A resulted in significant increases in Erk1/2, p38 MAPK, and Akt phosphorylation. Inhibition of p38MAPK, phosphoinositide 3-kinase (PI3K)-Akt and nuclear factor-kappaB (NF-κB), with the inhibitors SB203580, LY294002 and pyrrolydine dithiocarbamate (PDTC) respectively, reduced IL-17 (100 ng/ml) mediated production of CXCL8, CCL2, and IL-6 in a concentration dependent manner. Inhibition of Erk1/2 with PD98059 decreased the expression of the tested three inflammatory mediators when using low doses of IL-17A (0-10 ng/ml) but not at higher concentrations. CONCLUSIONS IL-17A-induced production of inflammatory mediators by ARPE-19 cells involves Erk1/2, p38MAPK, PI3K-Akt and NF-κB pathways. Uveitis is a common intraocular inflammatory disease. Recent studies have shown that helper T lymphocyte (Th)17 cells are implicated in the pathogenesis of this serious intraocular disorder [1,2]. They have been identified as a subset of T-helper lymphocytes characterized by predominantly producing interleukin (IL)-17A [3,4]. Growing evidence suggests that Th17 cells trigger inflammatory responses primarily via IL-17A [5]. A recent study showed an increased expression of IL-17A mRNA in the retina of mice with experimental autoimmune uveoretinitis (EAU), a classical model for human autoimmune uveitis [1]. IL-17A protein was furthermore found to be highly expressed by peripheral blood mononuclear cells (PBMCs) from uveitis patients [6,7]. IL-17A is a proinflammatory cytokine which is reflected by its ability to promote a variety of cells to produce chemokines and proinflammatory cytokines including interleukin-8 (CXCL8), CCL2, and IL-6 [8]. The neuroectodermally-derived retinal pigment epithelium (RPE), strategically positioned at the blood-retinal barrier, is considered to play an important role in posterior ocular inflammation due to its ability to secrete several inflammatory mediators [9]. CXCL8, CCL2, and IL-6 are three major inflammatory mediators produced by RPE cells in response to various stimuli [9]. Several studies have shown that these mediators are involved in the pathogenesis of uveitis [10][11][12]. CXCL8 is a chemoattractant and activator of neutrophils, whereas CCL2 is a chemoattractant and activator for lymphocytes and monocytes. These two chemokines mediate neutrophil, lymphocyte and monocyte/macrophage infiltration into tissues. IL-6 is a pleiotropic proinflammatory cytokine. The overexpression of IL-6 may intensify the local immune and inflammatory response. 
In a previous study we showed that IL-17A is a potent stimulus for CXCL8, CCL2, and IL-6 secretion by ARPE-19 cells [13], the spontaneously arisen human RPE-derived cell line which has been extensively used in the past decades to investigate the role of this cell layer in the pathogenesis of ocular posterior diseases including uveitis. It has been reported that activation of extracellular signal-regulated kinases 1/2 (Erk1/2), p38 mitogen activated protein kinase (MAPK), and phosphoinositide 3-kinase (PI3K)-Akt is involved in the IL-17A induced response of certain cell types [14][15][16][17]. However, the signaling events leading to CXCL8, CCL2, and IL-6 protein expression by IL-17A-induced ARPE-19 cells have not yet been characterized. In this study, we therefore investigated the role of Erk1/2, p38 MAPK, and PI3K-Akt in IL-17A-induced CXCL8, CCL2, and IL-6 protein production. METHODS Cell culture: Human ARPE-19 cells were obtained from the American type culture collection (ATCC, Manassas, VA), and cells between passages 16 and 20 were used for experiments. The cells were cultured in Dulbecco's modified Eagle medium/F12(DMEM/F12 (Invitrogen, Beijing, China) with 10% fetal bovine serum (FBS, Invitrogen, Carlsbad, CA), 100 U/ml penicillin, and 100 μg/ml streptomycin in a humidified incubator at 37 °C in 5% CO2. The cells were passed every 4 to 5 days by trypsinization and were seeded into Corning flasks (Corning, Lowell, MA) at 1.2×10 6 cells/flask, resulting in completely confluent (≈1.2×10 6 cells/flask) cultures in 4 days. Flow cytometry analysis: Flow cytometry analysis was used to detect the activation state of signaling pathway kinases in ARPE-19 cells. Confluent ARPE-19 cells maintained in serum-free medium for 24 h were cultured with or without 100 ng/ml IL-17A at 37 °C in 5% CO2 for the detection of phospho-Erk1/2, p38, and Akt, respectively. We conducted simultaneous staining of ARPE-19 cells for intracellular phosphorylated Erk1/2, p38, and Akt proteins according to the protocol recommended by Cell Signaling Technology (Cell Signaling Technology, Beverly, MA). Briefly, ARPE-19 cells were fixed in 4% formaldehyde for 10 min at room temperature and permeabilized in methanol for 30 min on ice. We used the phospho-specific Abs anti-phospho-Erk1/2-PE, anti-phospho-p38MAPK-Alexa Fluor 488 and anti-phospho-Akt-Alexa Fluor 488 for intracellular staining (Cell Signaling Technology). Isotype-matched irrelevant Abs were used as controls. Phosphorylation of the three proteins for both unstimulated and stimulated ARPE-19 cells was evaluated by flow cytometry and expressed as mean fluorescence intensity (MFI). All experiments were repeated three times. Western blot: ARPE-19 cells were serum starved in DMEM/ F12 without FBS for 24 h, then treated with or without 100 ng/ml IL-17A for 7, 15, or 30 min. The cells were subsequently rinsed with ice-cold PBS and lysed with lysis buffer containing 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, 2 mM EDTA, and 100 μM phenylmethylsulfonylfluoride. The cell lysate was centrifuged and the supernatant was collected. Protein concentration was determined with a protein assay (Bio-Rad, Richmond, CA). Laemmli gel loading buffer was added to the lysate and boiled for 7 min, after which proteins were separated on an SDS-polyacrylamide gel. 
Proteins were transferred to polyvinylidene difluoride membranes (Millipore, Bedford, MA), blocked by 5% skim milk at 37 °C for 2 h, and incubated with the primary phosphorylated or total antibodies against Erk1/2, p38MAPK, and Akt (Cell Signaling Technology) at 4 °C for 16 h, followed by a horseradish peroxidase-conjugated secondary antibody at 37 °C for 1 h. The membranes were further developed using a chemiluminescent detection kit (Cell Signaling Technology). Each stimulation experiment was repeated three times. Enzyme-linked immunosorbent assay (ELISA): ARPE-19 cells were maintained in DMEM/F12 medium containing 10% FBS for 4 days to become confluent. Before treatment with signaling inhibitors, cells were serum-starved for 24 h in DMEM/F12 without FBS. ARPE-19 cells were pretreated with or without an inhibitor to Erk1/2 (PD98059 at 50, 25, and 10 μM), p38MAPK (SB203580 at 25, 10, and 1 μΜ), PI3K (LY29400 at 25, 10, and 1 μΜ) or NF-κB (PDTC at 50, 25, and 10 μM; all from Sigma-Aldrich, St. Louis, MO) for 2 h, followed by incubation with or without recombinant IL-17A (R&D Systems, Minneapolis, MN) for 24 h. The supernatants were collected and centrifuged to remove particulates and stored at −70 °C until analysis. CXCL8, CCL2, and IL-6 were measured using human commercially available ELISA development kits (Duoset; R&D Systems). Each stimulation experiment was repeated four times. Statistical analyses: All data are expressed as means±SD. Statistical significance of changes was determined by the Student's t-test. A p<0.05 was considered to be statistically significant for all experiments. RESULTS Effect of IL-17A on Erk1/2, p38MAPK, and Akt phosphorylation: To investigate the early molecular mechanisms whereby IL-17A stimulates the production of CXCL8, CCL2, and IL-6, ARPE-19 cells were incubated for 10 or 20 min with 100 ng/ml IL-17A. The level of phosphorylated Erk1/2, p38MAPK, and Akt was evaluated by measuring mean fluorescence intensity (MFI) with flow cytometry. The result revealed that the MFI of phosphorylated Erk1/2, p38MAPK and Akt significantly increased in ARPE-19 cells stimulated by 100 ng/ml IL-17A as compared to unstimulated cells. Examples of phosphorylated specific intracellular staining are shown in Figure 1A-C. Additionally, western blot analysis was performed to verify the activation of these protein kinases. The samples were immunoblotted with antibodies against the phosphorylated form of Erk1/2, p38MAPK, and Akt. As shown in Figure 1D-F, IL-17A increased the phosphorylation of Erk1/2 and p38 MAPK by 7 min and remained elevated up to 30 min in these experiments. The expression of total-phosphorylated Erk1/2 was not affected. Similarly, IL-17A activated Akt within 7 min. The phosphorylation of Akt reached a maximum at 15 min and no further increase was observed at later time points. The levels of total Akt remained generally constant. Thus IL-17A activates Erk1/2, p38, and Akt in ARPE-19 cells. Effects of signaling inhibitors on CXCL8, CCL2, and IL-6 production induced by IL-17A: To confirm the role of Erk1/2, p38MAPK, Akt, and NF-κB activation in the production of CXCL8, CCL2, and IL-6 by ARPE-19 cells stimulated with 100 ng/ml IL-17A, we investigated the effect of Erk1/2, p38MAPK, Akt, and NF-κB inhibitors. As shown in Figure 2, IL-17A caused a 13.1, 4.5, and 5.9 fold secretion of CXCL8, CCL2, and IL-6 over basal levels, which was considered as 100%. 
SB203580, an inhibitor of p38MAPK, decreased IL-17A-induced CXCL8, CCL2, and IL-6 production in a dose-dependent manner, resulting in levels of 69±8%, 59±7%, and 47±16%, respectively. Similarly, LY294002 and PDTC, inhibitors of PI3K/Akt and NF-κB also dose-dependently inhibited the expression of CXCL8, CCL2, and IL-6. The levels following incubation with LY29400 were 75±5%, 64±12%, and 65±7%. The maximum levels following PDTC were 80±7%, 70±10%, and 78±8%. The aforementioned inhibitors also markedly repress the basal secretion of CXCL8, CCL2, and IL-6 by ARPE-19 cells in the absence of IL-17A (data not shown). PD98059, an inhibitor of Erk1/2 did not affect the production of the three tested inflammatory mediators when using a dose of 100 ng/ml of IL-17A. A separate experiment using lower concentrations of IL17A showed that PD98059 was able to inhibit the production of these mediators when the concentration of IL-17A used to stimulate the cells was lower than 50 ng/ml (Figure 3). DISCUSSION In this study, we demonstrated that IL-17A is able to enhance the phosphorylation of Erk1/2, p38MAPK, and Akt in RPE cells. The expression of IL-17A-induced CXCL8, CCL2, and IL-6 was decreased in these cells by SB203580, LY294002, and PDTC, inhibitors of p38MAPK, PI3K-Akt, and NF-κB, respectively, in a concentration-dependent manner. We also found PD98059, the inhibitor of Erk1/2 altered the expression of IL-17A-induced inflammatory mediators, depending on the concentration of IL-17A. This study is a continuation of a previous study to determine the intracellular mechanisms of IL-17A-mediated production of CXCL8, CCL2, and IL-6 by RPE cells [13]. Accumulating evidence shows that MAPK activation is an important signaling event in the response of RPE cells to proinflammatory cytokines such as IL-1β and tumor necrosis factor (TNF)-α [18]. We found that IL-17A induces phosphorylation of Erk1/2 and p38 MAPK in RPE cells, which is in accordance with earlier findings in articular chondrocytes [19] and cardiac fibroblasts [20]. We also showed that PD98059 and SB203580, specific inhibitors of Erk1/2 and p38MAPK, reduced IL-17A-induced CXCL8, CCL2, and IL-6 production by RPE cells, indicating that Erk1/2 and p38MAPK are involved in this process. These findings are also in accordance with earlier findings using lung microvascular endothelial cells [14], mesangial cells [15], and human pancreatic periacinar myofibroblasts [16]. It is worthwhile to point out that the PD98059 mediated inhibition of the release of the tested three inflammatory mediators was lost when using higher concentrations of IL-17A. The latter observation is consistent with earlier findings whereby PD98059 was also not able to inhibit IL-17A mediated induction of CXCL8 mRNA production by fibroblast-like synoviocytes [21]. These data suggest that Erk1/2 may not be the major intracellular molecular signaling pathway regulating the IL-17A-mediated production of CXCL8, CCL2, and IL-6 in non-immune type cells such as the RPE cell or synoviocyte. It has previously been shown that PI3K and its downstream mediator Akt regulate the production of inflammatory mediators in response to proinflammatory cytokines [22,23]. Akt is activated by a variety of cell surface receptors that when stimulated induce the activation of PI3K. PI3K produces phosphatidylinositol 3, 4, 5 triphosphates (PIP3) which in turn activates Akt. 
In the present study, we demonstrated that activation of the PI3K-Akt pathway by IL-17A is necessary for CXCL8, CCL2, and IL-6 protein full expression in RPE cells. These results are consistent with other findings in human airway epithelial cells [17], fibroblasts and macrophages obtained from rheumatoid arthritis synovial tissue [17,24]. In contrast to our data, the PI3K/Akt pathway was only involved in the expression of CCL2, not CXCL8 in human RPE following stimulation by IL-1β or TNF-α [25]. The discrepancy may be, at least partially, explained by the nature of the stimulus [25]. The activation of transcriptional factor NF-κB plays a central role in the induction of inflammatory mediators. The promoter regions of the human CXCL8, CCL2, and IL-6 genes have been cloned and have been shown to contain consensus binding motifs for NF-κB [26]. Our results showed that PDTC, an inhibitor of NF-κB, significantly inhibited IL-17induced CXCL8, CCL2, and IL-6 production in RPE cells,suggesting that NF-κB activation was responsible for the expression of these inflammatory mediators. Similar results have been observed using a variety of other cell types [16,17,27]. In conclusion, we have demonstrated that p38MAPK, PI3K-Akt, and the transcription factor NF-κB are involved in IL-17-induced CXCL8, CCL2, and IL-6 release in ARPE-19 cells. Knowledge concerning the mechanisms whereby IL-17A regulates the secretion of inflammatory mediators by RPE cells may increase our insight in the pathogenesis of posterior uveitis and delineating the transactivation mechanisms may help to identify new therapeutic targets.
Associated Determinants Between Evidence of Burnout, Physical Activity, and Health Behaviors of University Students

Risk behaviors and signs of burnout are associated with substantial health losses and university dropouts. Physical activity can be an effective approach to reduce these factors. The objective of this study was to analyze aspects related to health behaviors, physical activity, and signs of burnout in university students and their association with physical activity. The probabilistic cluster sample consisted of 3,578 regularly enrolled undergraduate students from UFPR in Curitiba, based on a population of 24,032 university students. The students completed the MBI-SS and NCHA II instruments. Descriptive statistics were used to identify demographic indicators and characteristics of the university environment. For the proportions of subjects with respective confidence intervals (CI = 95%), contingency tables involving the chi-square test (χ²) were used. The prevalence of signs of burnout was estimated in point proportions accompanied by the respective confidence intervals (CI = 95%). To analyze the associations between the independent variables and signs of burnout, hierarchical logistic regression was used, adjusted for the other independent variables involved in the models (CI = 95%). Results showed that the prevalence of individuals with signs of burnout was 40.4%. The hierarchical multiple regression model pointed to: female sex (OR = 1.30; 1.11–1.51); age between 20–24 years (OR = 1.51; 1.25–1.83) and 25–29 years (OR = 1.69; 1.27–2.24); being single (OR = 2.67; 1.01–7.10); presenting regular/poor health perception (OR = 1.59; 1.13–2.22); belonging to Human Sciences courses (OR = 1.37; 1.14–1.64); attending the 2nd or 3rd year (OR = 1.34; 1.12–1.61); weak academic performance (OR = 5.35; 4.11–6.96); and average academic performance (OR = 2.08; 1.78–2.43). We conclude that the students showed a high prevalence of health risk behaviors, emotional problems, and signs of burnout. Signs of burnout were significantly associated with the practice of physical activity in its three dimensions; however, in the analysis adjusted for demographic indicators, the characteristics of the university environment, and health behaviors, physical activity was not significant for the model.
INTRODUCTION

Although initially linked to the field of professional practice, studies on burnout have begun to investigate the pre-professional scope of university students. This signals attempts to prevent the early evolution of this phenomenon, from the study phase to the labor market (Robins et al., 2018). Even though university study is not formally considered a work environment, students' activities can be interpreted as pre-professional. The workload in the academic context corresponds to the demands of study: students carry out specific mandatory activities such as coursework, practical classes, internships, and assessments in a competitive academic environment, which generates conflict and stress (Marôco et al., 2020).

Burnout has started to be investigated among university students, expanding its concept and confirming the existence of three dimensions (emotional exhaustion, disbelief, and professional/personal effectiveness), derived from the Maslach Burnout Inventory (MBI), in this population as well (Schaufeli et al., 2002). One of the most widely accepted definitions holds that burnout is an emotional response to chronic stress situations (Guedes and Souza, 2015). Among the approaches to reducing symptoms, physical activity can be an essential strategy: Naczenski et al. (2017) reported a negative relationship between physical activity and burnout in a systematic review of 10 studies in the general population. Furthermore, most research involving physical activity and signs of burnout was carried out in other countries or with non-university populations (Lindwall et al., 2014; Olson et al., 2014), which means there is a gap in the literature.

The university population in general faces a high workload and demanding academic activities, many with short deadlines. In this context, university students appear to be exposed to, and to show, signs of mental and emotional exhaustion. One of the ways of controlling burnout is to practice physical activity: a routine of physical activity can bring mental relief and more willingness for the student to perform tasks.
The relationship between physical activity and burnout is addressed in the present study since physical activity can act as a protective factor against burnout, prevent physical symptoms, and help control the condition (Bielemann et al., 2007; Batista and Ornellas, 2013). This study aimed to assess the prevalence of, and associated determinants between, signs of burnout, physical activity, and health behaviors in university students.

Sample Selection

The reference population included undergraduate students from the UFPR Campus, regularly enrolled in the second semester of the 2019 school year, in Curitiba, Paraná. To illustrate the dimension of the population universe, according to information from the Institution's Undergraduate Dean, 24,032 university students were enrolled at the beginning of the 2019 academic year in the 77 undergraduate courses offered on the Curitiba Campus. The sample selection procedures followed a sequence of steps to obtain a probabilistic cluster sample that could effectively represent the population of university students at UFPR in 2019. First, the classes were chosen by sampling stratified by large areas of education, namely: (a) Human Sciences (8,846); (b) Exact Sciences (9,910); and (c) Biological Sciences (5,276), with the prevalence of the outcome (signs of burnout) assumed to be unknown (50%). Then, the sample calculation resulted from the sum of the population of each stratum, assuming a confidence interval of 95% and a sampling error of 3.0%. Based on the 24,032 university students enrolled in the institution's undergraduate courses, the minimum number of subjects to be measured was initially 1,861 students. However, considering that the sampling design involved clusters, a design effect (deff) of 1.5 was applied and 20.0% was added for cases of loss, resulting in a predicted sample size of 3,350 university students. For data analysis purposes, 3,578 university students were gathered: 1,868 female and 1,710 male. Regarding the selection of university students to compose the sample, care was taken to obtain a representation proportional to the population considered, using as a reference the number of university students in the major areas of study, course, year, and shift (morning, night, and full time) in which they were enrolled.
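The arithmetic behind the predicted sample size can be reproduced as follows. This is a minimal Python sketch, assuming the 20% allowance for losses was applied multiplicatively to the design-adjusted sample; the stratum-level details of the base calculation are not fully specified in the text, so the generic Cochran formula is included only for reference.

```python
import math

def cochran_n(N, p=0.5, e=0.03, z=1.96):
    """Cochran's formula with finite-population correction: minimum n
    for estimating a proportion p in a population of size N with
    margin of error e at the z-level of confidence."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return n0 / (1 + (n0 - 1) / N)

# Figures reported in the text
n_min = 1_861   # minimum sample reported by the authors
deff = 1.5      # design effect for cluster sampling
loss = 0.20     # allowance for losses

n_design = n_min * deff          # 2,791.5 after the design effect
n_final = n_design * (1 + loss)  # ~3,350 after adding 20% for losses
print(math.ceil(n_final))        # -> 3350
```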
Data Collection

The questionnaires were administered by two researchers. The researchers were trained in the procedures for 4 weeks at the Physical Performance Studies Center (CEPFIS), and a pilot data collection was carried out with Physical Education students from UFPR 30 days before the start. The students were gathered in a classroom, where the objectives of the research project, the principles of confidentiality, non-identification in the study, and the absence of any influence on academic performance were explained. At this time, university students were invited to participate in the study and received guidance on how to fill out the Informed Consent Form. Subsequently, those who agreed to participate received questionnaires with instructions and recommendations for self-completion, with no time limit established. The questionnaires were answered individually, and any doubts expressed by the respondents were promptly clarified by the researcher monitoring the data collection.

Inclusion criteria: students of both sexes, regularly enrolled in the second semester of 2019, present in the classroom in August, who completed and signed the Informed Consent Form. Exclusion criteria: (a) students who were unable to complete the questionnaire; and (b) students with mental limitations that prevented them from filling it in, as well as those who turned in incomplete or illegible questionnaires.

Health Behavior

In line with the objectives of the present study, questions were abstracted from the National College Health Assessment II instrument in the following domains: health, health education, and safety. In the case of the health, health education, and safety domain, the following question was used: "In general, how would you describe your health?" The answers were categorized according to the study by Adams et al. (2007) as (a) excellent; (b) very good; (c) good; and (d) regular and bad; "I do not know" answers were excluded from the analysis. To examine the practice of physical activity, the question "In the last seven days, how often did you practice?" was used. We asked whether respondents had undertaken aerobic/cardiorespiratory exercise of moderate intensity (causing a moderate increase in heart rate, such as a brisk walk), of vigorous intensity (causing a significant increase in heart and respiratory rate, such as running), and strength training (weight training at approximately 8-12 repetitions per set). For the cut-off points, the study by Elliot et al. (2012) was used, with the categories: (a) no day; (b) 1-2 days/week; (c) 3-4 days/week; and (d) 5 or more days/week. We sought to identify whether the university students met the international criteria for acceptable physical activity practice (5 × 30 min/day of moderate activity, or 3 × 20 min/day of vigorous activity) as well as muscle-strengthening exercise. A gradient was therefore established to categorize good physical activity practice, making it unnecessary to include a new measurement instrument.

Demographic Indicators and Characteristics of the University Environment

The following demographic indicators were used: age: (a) <19 years; (b) 20-24 years; (c) 25-29 years; and (d) >30 years; and sex: (a) men and (b) women. As for the characteristics of the university environment, the following categories were used: large area of study: (a) Biological Sciences; (b) Human Sciences; and (c) Exact Sciences; study shift: (a) morning; (b) night; and (c) full time; and academic performance: (a) good (very good and good); (b) medium; and (c) weak.

Maslach Burnout Inventory-Student Survey (MBI-SS)

To measure signs of burnout, the MBI-SS scale of Schaufeli et al. (2002) was used, a self-administered instrument referring to the feelings/emotions of students in the university context. The questionnaire consists of 15 questions subdivided into three subscales: emotional exhaustion (5 items), disbelief (4 items), and professional/personal effectiveness (6 items). All items are measured by their frequency, ranging from 0 to 6: 0 (never), 1 (a few times a year), 2 (once a month), 3 (a few times a month), 4 (once a week), 5 (a few times a week), and 6 (every day). The internal consistency of the Portuguese version of the questionnaire (2012) was assessed using Cronbach's alpha coefficient, with moderate to strong correlations between scales (r = 0.31-0.64). Validity was measured by Pearson's correlation and reliability by Cronbach's alpha in three different universities (Schaufeli et al., 2002). To identify signs of burnout in university students, the studies by Peres et al. (2014) and Viana et al. (2014) were used to determine the cut-off points.
First, the distribution of responses (%) for each item of the emotional exhaustion, disbelief, and professional/personal effectiveness dimensions was calculated. Next, the mean, standard deviation, and first and second tertiles of each dimension were calculated. The highest tertile was assumed to indicate risk for the emotional exhaustion and disbelief dimensions, and the lowest tertile for professional/personal effectiveness. Thus, subjects with high signs of burnout (all three dimensions located in the at-risk tertiles), subjects with moderate risk of burnout (two dimensions located in the at-risk tertiles), subjects with low risk of burnout (one of the three dimensions located in the at-risk tertiles), and, finally, subjects without any risk of burnout (none of the dimensions located in the at-risk tertiles) were identified. For the hierarchical logistic regression, the signs-of-burnout variable was dichotomized into (a) absence of signs and (b) presence of signs of burnout (low, moderate, or high).
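As an illustration of this classification rule, the sketch below counts how many MBI-SS dimensions fall in the at-risk tertile and maps the count to the four categories and to the dichotomized variable. The cutoff values here are hypothetical placeholders; in the study they are the sample-specific tertiles.

```python
def burnout_category(exhaustion, disbelief, efficacy,
                     cut_ex, cut_db, cut_ef):
    """Classify signs of burnout from MBI-SS dimension scores.

    cut_ex / cut_db: upper-tertile cutoffs (at risk = score at or above);
    cut_ef: lower-tertile cutoff (at risk = score at or below).
    Cutoffs are sample tertiles; the values passed in are hypothetical.
    """
    at_risk = sum([
        exhaustion >= cut_ex,   # highest tertile -> at risk
        disbelief >= cut_db,    # highest tertile -> at risk
        efficacy <= cut_ef,     # lowest tertile -> at risk
    ])
    labels = {0: "none", 1: "low", 2: "moderate", 3: "high"}
    has_signs = at_risk >= 1    # dichotomization used in the regression
    return labels[at_risk], has_signs

# Hypothetical example: high exhaustion, high disbelief, adequate efficacy
print(burnout_category(4.2, 3.8, 4.5, cut_ex=3.5, cut_db=3.0, cut_ef=3.0))
# -> ('moderate', True)
```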
Statistical Treatment

The prevalence estimates for health behaviors, the practice of moderate and vigorous cardiorespiratory/aerobic physical activity, strength training, and signs of burnout according to demographic indicators and health perception were presented as point proportions (%), accompanied by the respective 95% confidence intervals (95% CI). To analyze the linearity of associations between signs of burnout and potential correlates, odds ratio calculations were performed in the SPSS 20.0 statistical program. First, statistical differences between the strata under investigation were tested by Yates' continuity correction for 2 × 2 contingency tables and, for the others, by the chi-square test (χ²). Next, correlates that showed at least marginally significant associations (p ≤ 0.20) in the bivariate analysis were included in the hierarchical multiple regression procedures. The correlates were included in blocks, with the sociodemographic variables (level 1) being the first to enter the model, followed by those related to the characteristics of the university environment (level 2) and physical activity (level 3). Only correlates with statistical significance (p < 0.05) remained in the multivariate model. The present study followed the ethical norms established in the Declaration of Helsinki (1975, revised in 1983). The relevant UFPR sectors authorized the study, as did the UFPR Ethics Committee for Research Involving Human Subjects, by Opinion No. 3,430.223 on July 2, 2019.

Table 1 shows the frequency of the strata of signs of burnout according to sociodemographic indicators, health perception, and characterization of the university environment. It was found that those with more signs of burnout in all classifications (low, moderate, and high) were female students with low academic performance who perceived their health as regular/inferior. Students with moderate signs of burnout were most often enrolled in the night shift, while students with high signs of burnout were between 25 and 29 years old and belonged to courses in the area of Human Sciences. Supplementary Table 2 shows the prevalence and odds ratios of signs of burnout stratified by physical activity correlates. This table indicates that moderate cardiorespiratory/aerobic activity, when not practiced on any day, was associated with a prevalence of 46.1% and 61% higher odds of developing signs of burnout. Vigorous cardiorespiratory activity, when not practiced on any day of the week, was associated with a prevalence of 44.2% and 81% higher odds of showing signs of burnout. Strength training practiced on 1-2 days/week was associated with a prevalence of 39.4% and 37% higher odds of developing signs of burnout; when not practiced on any day, the odds were 62% higher.
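Odds ratios of this kind can be reproduced from a 2 × 2 contingency table. Below is a minimal sketch with hypothetical counts (the paper's raw cell counts are not given); it uses the Wald 95% CI on the log-odds scale, which may differ slightly from SPSS output.

```python
import math

def prevalence_and_or(a, b, c, d, z=1.96):
    """2 x 2 table: a/b = exposed with/without signs of burnout,
    c/d = unexposed with/without. Returns the prevalence among the
    exposed plus the odds ratio with a Wald 95% CI."""
    prevalence = a / (a + b)
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    ci = (math.exp(math.log(or_) - z * se),
          math.exp(math.log(or_) + z * se))
    return prevalence, or_, ci

# Hypothetical counts for "no days of moderate activity" vs. burnout signs
prev, or_, ci = prevalence_and_or(a=300, b=350, c=900, d=1690)
print(f"prevalence {prev:.1%}, OR {or_:.2f}, "
      f"95% CI {ci[0]:.2f}-{ci[1]:.2f}")
# -> prevalence 46.2%, OR 1.61, 95% CI 1.35-1.92 (toy numbers)
```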
DISCUSSION

This study aimed to assess the prevalence of, and associated determinants between, signs of burnout, physical activity, and health behaviors in university students. Generally speaking, there are many determinants of health behaviors; however, research addressing the topic, especially among university students, is still lacking. Identifying and intervening in these behaviors is of paramount importance, as this is the only way to reduce the onset, consolidation, and consequences of these behaviors among young students (Das and Horton, 2012).

The presence of signs of burnout in the selected sample was 40.4%: 3.1% with high signs of burnout, 8.0% with moderate signs, and 29.3% with low signs. To identify signs of burnout in university students in this research, we used cut-off points similar to those of Peres et al. (2014) and Viana et al. (2014). Exposure to burnout symptoms for a long period can cause serious damage to health, especially to mental and emotional capacities. It is believed that keeping burnout at low levels favors the autonomy and performance of university students, improving the ways they solve problems in their academic careers. The results of this study alert the scientific community to develop goals that favor mental well-being through activities that lead students to a state of relaxation and mental lightness. The results also reveal that it is important to encourage students to increase the levels of physical activity they undertake each week and to reflect on important psychological issues such as self-care and awareness of a healthier lifestyle. Universities should encourage students to set aside a specific time of day to disconnect from their tasks and connect with their bodies and minds in movement.

Regarding the presence of burnout signs in the Americas, signs were found in 41.6% of university students in Barranquilla, Colombia (Caballero et al., 2007), 55.0% of medical students in Texas, United States (Chang et al., 2012), 56.2% of students at a public university in São Paulo, Brazil (Peres et al., 2014), and 65.1% of health sciences students from Montes Claros, Brazil (Viana et al., 2014). In the Middle East, in Saudi Arabia, 67.1% of health sciences students showed signs of burnout (Almalki et al., 2017). A meta-analysis carried out by Low et al. (2019) with medical students showed a prevalence of 27.7% in European university students, 51.0% in Asian university students, and 51.6% in North American university students. Other studies that used the MBI-SS as an instrument reported much lower rates of signs of burnout than this research. For example, signs of burnout were reported in 12.0% of university students in the city of Porto (Barbosa et al., 2016), 7.4% of medical students from the Sultan (Alalawi et al., 2017), 17.0% of dentistry students in Araraquara (ENESP) (Campos and Maroco, 2012), and 18.8% of nursing students in Costa Rica (Reyes and Blanco, 2016). Despite the use of the same instrument (MBI-SS), these differences in prevalence are possibly due to the methodologies used: for example, a sample composed of first-year students only (Barbosa et al., 2016), sample size (Campos and Maroco, 2012), and different cut-off points for categorizing signs of burnout (Reyes and Blanco, 2016). Signs of burnout were thus found to a greater or lesser degree, varying according to methodological heterogeneity such as different measurement instruments, cultural differences, sample size, categorization of cut-off points, and timing of the research.

Significant associations with signs of burnout in the chi-square (χ²) analyses were found for female students, those aged between 25 and 29 years, those in the night study period, those belonging to the Human Sciences area, those with weak academic performance, and those with inadequate/regular self-perception of health. In the prevalence analysis using odds ratios of signs of burnout stratified by physical activity correlates, fewer days of practice were associated with greater odds of signs of burnout, especially for those who did not practice physical activity in any of the dimensions (moderate-intensity cardiorespiratory/aerobic, vigorous-intensity, and strength training) on any day of the week. Corroborating the findings of this research, physical activity was significantly associated with lower signs of burnout in several studies (Weight et al., 2013; Cecil et al., 2014; Lindwall et al., 2014; Olson et al., 2014; Fares et al., 2015; Farias et al., 2019). To explain such relationships, psychological mechanisms have been reported as a way to reduce chronic stress and, consequently, signs of burnout (Sonnentag, 2012), whether through increased self-efficacy (Joseph et al., 2014), through an increased sense of competence to deal with tasks (Feuerhahn et al., 2014), or by making tasks less demanding (Hockey, 2013). Regarding physiological mechanisms, physical activity may improve the response to psychological stress (the cardiovascular fitness hypothesis), promoting greater bodily recovery from stress exposure (Klaperski et al., 2014) and inducing changes in several neurotransmitters and neuromodulators, with consequent improvement in mood and increased energy (Schuch et al., 2016).

Questions related to physical activity in its dimensions required the respondent to recall practice in the previous 7 days, while questions related to signs of burnout assessed prevalence over the last year. This is worrisome given that it might not be possible to determine when signs of burnout were established during the previous year, nor whether the reported physical activity reflected a pre-existing behavior. Thus, the validity of these findings depends on the assumption that reported physical activity indicates previous behavior, and careful consideration of alternative possibilities is necessary (Naczenski et al., 2017). When adjusted for demographic indicators, the characteristics of the university environment, and correlates of health behaviors, physical activity in its dimensions was not significant for the model.
The absence of an association between signs of burnout and physical activity in the adjusted model may be explained by the fact that, despite the high overall prevalence of 40.4% for signs of burnout, the prevalence of college students with high signs of burnout was only 3.1%, that is, a high result for the "emotional exhaustion" and "disbelief" dimensions and a low result for "professional/personal effectiveness" according to the tertile categorization. Taking into account the hypothesis that burnout develops as a continuum, with the process possibly beginning in the emotional exhaustion dimension (Guedes and Souza, 2015), it is assumed that a good part of the 29.3% of students classified with low signs of burnout presented results only for the emotional exhaustion dimension. This interpretation is supported in the literature by the results of a systematic review carried out by Naczenski et al. (2017) in the general population, in which a negative relationship was found between physical activity and only the emotional exhaustion dimension of burnout (Bretland and Thorsteinsson, 2015; Lindegard et al., 2015). In the few studies that investigated the dimensions of disbelief and low professional/personal effectiveness, the evidence was inconsistent (Gerber et al., 2013; Freitas et al., 2014; Bretland and Thorsteinsson, 2015). Thus, the hypothesis that physical activity is a practical approach to reduce burnout was not confirmed in the present study after the adjusted regression model, and further investigation is needed, primarily through intervention studies and research assessing the association of physical activity with each dimension of burnout separately. These statements are justified by the fact that, in the same review by Naczenski et al. (2017) of the general population, this research model proved more efficient, demonstrating associations with the emotional exhaustion dimension more consistently (Van Rhenen et al., 2005; Gerber et al., 2013; Tsai et al., 2013; Freitas et al., 2014; Bretland and Thorsteinsson, 2015; Lindegard et al., 2015). More prospectively designed studies would help to determine whether the practice of physical activity, in all its dimensions, is causally related to lower signs of burnout. In addition, such studies could more fully justify the practice of physical activity as a therapeutic complement to alleviate burnout.

Most questions on this subject are of the time-recall type, and college students should not be expected to accurately recall behaviors from the previous year. Regarding physical activity, the measures of practice in its dimensions were based on weekly frequency recall questions, but the volume, intensity, and consistency of these exercises were not verified. Finally, cross-sectional data such as those used in the present research allow for the construction of cross-sectional association models but prevent the assessment of causality. Nevertheless, with the collected data it is possible to gauge the importance of weekly physical activity for controlling signs of burnout in university students. In addition, the research was carried out in an important university environment of a federal institution and had a very representative sample size, which lends reliability to the results. Based on this cross-sectional association model, it is possible to create a study design that evaluates the sample longitudinally, which will give greater consistency to the investigated theme.
CONCLUSIONS

Despite the high prevalence of signs of burnout (low, moderate, and high), high signs of burnout were reported by only a small portion of the selected sample. After the adjusted regression analysis, female students aged between 20 and 29 years, single, with poor academic performance and very high stress deserve special attention. In the crude analysis, signs of burnout were significantly associated with the practice of physical activity in its three dimensions; however, in the analysis adjusted for demographic indicators, the characteristics of the university environment, and health behaviors, physical activity was found to be non-significant for the model.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics and Research Committee Involving Human Beings at UFPR through Opinion No. 3,430.223 on 07/02/2019. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

RS designed the study. RS, FR, RE, LR, OG, RO, MR, and SS participated in the project and coordination and wrote the manuscript. All authors read and approved the final version of the manuscript and agreed with the order of authorship.
Prognostic value of soluble ST2 in AL and TTR cardiac amyloidosis: a multicenter study

Background Both light-chain (AL) amyloidosis and transthyretin (ATTR) amyloidosis are types of cardiac amyloidosis (CA) that require accurate prognostic stratification to plan therapeutic strategies and follow-ups. Cardiac biomarkers, e.g., N-terminal pro-B-type natriuretic peptide (NT-proBNP) and high-sensitivity cardiac troponin T (Hs-cTnT), remain the cornerstone of prognostic assessment. An increased level of soluble suppression of tumorigenesis-2 (sST2) is predictive of adverse events [all-cause death and heart failure (HF) hospitalizations] in patients with HF. This study aimed to evaluate the prognostic value of circulating sST2 levels in AL-CA and ATTR-CA. Methods We carried out a multicenter study including 133 patients with AL-CA and 152 patients with ATTR-CA. During an elective outpatient visit for the diagnosis of CA, Mayo Clinic staging [NT-proBNP, Hs-cTnT, differential of free light chains (DFLC)] and sST2 were assessed for all AL patients. Gillmore staging [including estimated glomerular filtration rate (eGFR) and NT-proBNP] and Grogan staging (including NT-proBNP and Hs-cTnT) were assessed for TTR-CA patients. Results The median age was 73 years [interquartile range (IQR) 61-81], and 53% were men. The endpoint was the composite of all-cause death or first HF-related hospitalization. The median follow-up was 20 months (IQR 3-34) in AL amyloidosis and 33 months (6-45) in TTR amyloidosis. The primary outcome occurred in 70 (53%) and 99 (65%) of AL and TTR patients, respectively. sST2 levels were higher in patients with AL-CA than in patients with ATTR-CA: 39 ng/L (26-80) vs. 32 ng/L (21-46), p < 0.001. In AL-CA, sST2 levels predicted the outcome regardless of the Mayo Clinic score (HR: 2.16, 95% CI: 1.17-3.99, p < 0.001). In TTR-CA, sST2 was not predictive of the outcome in multivariate models including Gillmore staging and Grogan staging (HR: 1.17, 95% CI: 0.77-1.89, p = 0.55). Conclusion sST2 level is a relevant predictor of death and HF hospitalization in AL cardiac amyloidosis and adds prognostic stratification on top of NT-proBNP, Hs-cTnT, and DFLC.

Introduction

Cardiac involvement is associated with a worse prognosis in patients with amyloidosis. Treatment methods for light-chain (AL) amyloidosis and transthyretin (ATTR) amyloidosis have improved over the last decade, and prognostic stratification should be increasingly useful in helping physicians expand therapeutic choices. Cardiac biomarkers are the cornerstone of prognostic assessment (1,2). In AL amyloidosis, the stratification of patients is based on Mayo Clinic staging (3), which includes cardiac troponins and natriuretic peptides [N-terminal pro-B-type natriuretic peptide (NT-proBNP)], with the differential of free light chains added in the revisited Mayo Clinic staging (4). In ATTR amyloidosis, the prognostic scores are the Gillmore staging, which includes estimated glomerular filtration rate (eGFR) and NT-proBNP (5), and the Grogan staging, which includes troponin T and NT-proBNP (6). Like most prognostic scores, they are imperfect and could be improved by adding new and relevant variables. Soluble suppression of tumorigenicity-2 (sST2) is the circulating form of the interleukin-33 membrane receptor, released in response to inflammation, fibrosis in various organs, and myocardial stress (7,8). The prognostic value of sST2 blood levels has been shown in various diseases.
In heart failure (HF), sST2 levels add prognostic information on top of natriuretic peptides, and sST2 testing has been included in recent guidelines (9,10). Inflammation and profibrotic pathways are among the explanations for cardiac damage in systemic amyloidosis (11), but the prognostic usefulness of sST2 level measurement has been poorly studied in cardiac amyloidosis (CA), except for one large study that highlighted the prognostic role of sST2 in AL amyloidosis (12). We aimed to assess the prognostic value of sST2 in CA, compare it with previously validated biomarkers, and assess its added value on top of other biomarkers and Mayo Clinic staging (4), Gillmore staging (5), and Grogan staging (6) for AL and ATTR amyloidosis.

Patient population

This study involved three independent cohorts that included consecutive patients receiving a final diagnosis of CA at Lariboisiere Saint-Louis Hospital (Paris, France, n = 105), Henri Mondor Hospital (Creteil, France, n = 92), and Fondazione Monasterio (Pisa, Italy, n = 103). CA was diagnosed according to the current diagnostic algorithm (13). As part of an elective outpatient visit, all patients underwent clinical examination, blood measurements of high-sensitivity cardiac troponin T (Hs-cTnT), N-terminal pro-B-type natriuretic peptide (NT-proBNP), and free light chains, 12-lead electrocardiography, and comprehensive echocardiographic examination in accordance with the American Society of Echocardiography recommendations (14). Hs-cTnT and NT-proBNP were measured with commercial assays (Roche Diagnostics). Serum free light chains were determined using the Freelite assay (Binding Site, Birmingham, Meylan, France). The 2012 Mayo Clinic score (4) was calculated for each AL amyloidosis patient using NT-proBNP (cutoff 1,800 ng/L), Hs-cTnT (cutoff 40 ng/L) (15), and the differential of plasma free light chains (cutoff 180 mg/L). The 2004 Mayo Clinic staging (3) and stage 3B patients according to Wechalekar et al. (16) were also reported. All AL and TTR amyloidosis patients were included within the first 2 months after diagnosis. For each TTR patient, Gillmore staging and Grogan staging were calculated. In TTR amyloidosis, prognostic thresholds were 3,000 ng/L for NT-proBNP, 65 ng/L for Hs-cTnT, and 45 mL/min for eGFR, according to Gillmore staging (5) and after applying changes in the cTnT cutoff due to the use of new-generation Hs-cTnT (15). Administration of specific therapies (TTR tetramer stabilizers) at the time of inclusion was checked. Exclusion criteria were age <18 years, pregnancy or breastfeeding, Randall's disease, and the presence of forms of amyloidosis other than AL-CA or ATTR-CA. The study was carried out according to the principles outlined in the Declaration of Helsinki. Informed and written consent was obtained from all patients.
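The staging rules just described reduce to simple threshold counts. The following Python sketch mirrors the cutoffs stated above (including the adapted Hs-cTnT thresholds); it is illustrative only, not a validated clinical tool.

```python
def mayo_2012_stage(nt_probnp, hs_ctnt, dflc):
    """Revisited Mayo Clinic staging for AL amyloidosis: one point per
    marker above its cutoff, stage = points + 1 (I-IV)."""
    points = (nt_probnp > 1800) + (hs_ctnt > 40) + (dflc > 180)
    return points + 1

def grogan_stage(nt_probnp, hs_ctnt):
    """Grogan staging for ATTR amyloidosis (stage I-III)."""
    return (nt_probnp > 3000) + (hs_ctnt > 65) + 1

def gillmore_stage(nt_probnp, egfr):
    """Gillmore staging for ATTR amyloidosis (stage I-III): stage I if
    both markers are favorable, stage III if both are unfavorable,
    stage II otherwise."""
    if nt_probnp <= 3000 and egfr >= 45:
        return 1
    if nt_probnp > 3000 and egfr < 45:
        return 3
    return 2

# Example: NT-proBNP 2,500 ng/L, Hs-cTnT 55 ng/L, DFLC 120 mg/L
print(mayo_2012_stage(2500, 55, 120))  # 2 points -> stage 3
```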
sST2 assay

Blood levels of sST2 were determined with the Presage ST2 ELISA (Critical Diagnostics, San Diego, CA, USA) at the time of diagnosis of cardiac amyloidosis. The assay was performed on an Aspect Reader device with Aspect-Plus ST2 Rapid Test assay cartridges marketed by Critical Diagnostics. The assay principle was based on lateral flow immunofluorescence, and the reproducibility coefficients of variation were between 9% and 20% for concentrations between 30 and 75 ng/L. According to a previous study on amyloidosis, the prognostic threshold for sST2 was 30 ng/L (12).

Follow-ups and endpoints

Patients were followed up by an exhaustive review of medical files and medical consultations every 3 months and by phone calls to referring doctors and patients. When the conclusion of a medical report was unclear, we looked for the use of intravenous diuretics in electronic health records, and a hospitalization was attributed to HF only when IV diuretics were prescribed. Follow-up information was retrieved by physicians blinded to sST2 results. The follow-up started at the time of sampling. The endpoint was the composite of all-cause death or first HF-related hospitalization.

Statistical analysis

Continuous data are expressed as medians and interquartile ranges (IQRs), whereas categorical data are expressed as numbers and percentages. The unpaired t-test was used to assess differences in key continuous variables between patients with AL- and ATTR-CA and between patients with or without events. The χ² test assessed differences in categorical data between these subgroups. Survival was evaluated with Cox proportional hazards regression analysis, providing estimated hazard ratios (HRs) and Kaplan-Meier curves. The predictive value of most baseline characteristics was explored using Cox regression analysis. To analyze the predictive value of sST2 independently of Mayo Clinic staging for AL amyloidosis and Gillmore or Grogan staging for TTR amyloidosis, the multivariate models included NT-proBNP, cardiac troponin, and sST2, plus eGFR for TTR amyloidosis and DFLC for AL amyloidosis, together with the prognostic scores (Mayo Clinic score in AL amyloidosis; Gillmore and Grogan staging in ATTR amyloidosis).
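A minimal sketch of such a multivariate Cox model, using the open-source lifelines package, is shown below. The data frame holds hypothetical toy data, not the study's, and the dichotomized predictors follow the cutoffs given above.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical toy data: follow-up in months, composite event flag
# (death or first HF hospitalization), and dichotomized predictors.
df = pd.DataFrame({
    "months":        [20, 3, 34, 12, 28, 7, 15, 40],
    "event":         [1, 0, 1, 1, 0, 1, 0, 1],
    "sst2_high":     [1, 0, 1, 1, 0, 1, 1, 0],   # sST2 > 30 ng/L
    "ntprobnp_high": [1, 0, 1, 0, 1, 1, 0, 0],   # NT-proBNP > 1,800 ng/L
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratio and p per predictor

# Kaplan-Meier curves by sST2 status, as in the survival figures
kmf = KaplanMeierFitter()
for flag, group in df.groupby("sst2_high"):
    kmf.fit(group["months"], group["event"], label=f"sST2 high={flag}")
```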
Patients and outcomes

The median age was 73 years (IQR 61-81), and 53% were men. Among the 141 patients with AL cardiac amyloidosis, eight were lost to follow-up. Among the 133 AL patients studied, 78 and 28 also had renal and neurological involvement, respectively. Patients with AL amyloidosis received treatment based on bortezomib, dexamethasone, and cyclophosphamide in 85% of cases and daratumumab in 22% as first-line therapy. In total, 10% of patients received IMID-based treatment (because of relapse), and 5% received other treatments (1% autologous stem cell transplantation, 1% bendamustine because of IgM gammopathy at diagnosis, 3% other drugs). Most (around 90%) of the AL amyloidosis patients were included at the time of the first diagnosis of AL amyloidosis. Of the 133 AL amyloidosis patients, 77 were in stage 3 and 35 in stage 3b, according to the 2004 Mayo Clinic staging (3) and Wechalekar et al. (16). Among the 159 patients with ATTR-CA, 7 were lost to follow-up and 152 were then studied. Among them, 20 had variant ATTR amyloidosis (15 Val122Ile, 2 Val30Met, 3 other mutations) and 31 received tafamidis. None received other specific therapy. The main clinical characteristics of patients according to the type of amyloidosis are presented in Table 1. Patients with AL amyloidosis were younger and had lower systolic blood pressure, lower LV wall thickness, and higher LVEF and global longitudinal strain than patients with TTR amyloidosis. sST2 levels were higher in AL patients than in ATTR patients, while NT-proBNP, cardiac troponin T, and eGFR levels were similar in the two groups. The main clinical characteristics of AL and TTR patients according to cardiac events are presented separately in Tables 2 and 3.

sST2 levels: determinants and prognostic values

sST2 levels were higher in patients with AL amyloidosis than in patients with ATTR amyloidosis (Table 1). sST2 levels were also significantly higher in patients experiencing adverse events than in patients without events, in both TTR-CA and AL-CA (Tables 2, 3). Supplementary Tables S1 and S2 list the predictive values of the most studied predictive variables and of sST2 for the risk of HF hospitalization and/or death in univariate analysis in TTR and AL amyloidosis, respectively. Table 4 presents the various multivariate Cox models that tested the respective predictive values of the variables used in the Mayo Clinic, Gillmore, or Grogan models. In the TTR models (Table 4, Model 1), the sST2 level was not predictive of HF and/or death, and only NT-proBNP had strong and independent predictive value. In AL amyloidosis (Table 4, Model 2), the sST2 level strongly predicted the outcome regardless of the Mayo Clinic score. Even in the most severe patients [defined by NT-proBNP >8,500 ng/L and/or systolic blood pressure <100 mmHg according to Wechalekar et al. (16)], sST2 was still predictive of cardiac events (HR: 2.12, 95% CI: 1.25-3.60, p = 0.005). Figure 1 shows the survival curves according to sST2 levels above or below 30 ng/L for HF hospitalization and all-cause mortality in TTR and AL amyloidosis. Interestingly, sST2 was also predictive of cardiac events beyond 1 year of follow-up in AL-CA patients (Supplementary Table S3).

Discussion

Our study highlights the prognostic value of sST2 in cardiac amyloidosis. Increased sST2 levels are strongly associated with both death and HF hospitalization and remain predictive regardless of other validated predictors, including biomarkers, in AL cardiac amyloidosis but not in TTR cardiac amyloidosis. Prognostic stratification is a major issue regarding the choice of chemotherapy, the monitoring of patients, and therapeutic intensification in the event of immunochemical and/or organ non-response. In ATTR amyloidosis, prognostic stratification may be increasingly required because treatment options are rapidly expanding beyond tafamidis (17,18): TTR silencing by mRNA knockdown, new TTR stabilizers, and amyloid resorption or extraction. Therefore, finding reliable prognostic markers is essential to identify the patients who will benefit from these new therapies. In TTR and AL cardiac amyloidosis, heart failure is a major issue: HF is predictive of subsequent death in these two populations and could explain almost half of the causes of death. Thus, it makes sense to combine HF and death in the outcome analysis of cardiac amyloidosis and to look for relevant biomarkers that predict both endpoints. In addition, we need biomarkers that could help define the cardiac response to treatment and refine therapeutic strategies. sST2 is the circulating form of the interleukin-33 membrane receptor, released in response to vascular congestion and inflammatory or profibrotic stimuli. Increased cardiac expression of ST2 has been observed after cardiac insult (19,20). By binding IL-33, sST2 acts as a decoy receptor and removes the protective effects of IL-33 against cardiac hypertrophy, reduced contractility, and fibrosis (8,21). Previous studies showed that sST2 predicts outcomes in acute and chronic HF regardless of NT-proBNP, cardiac troponin, and LVEF (22-25).
Consequently, sST2 testing has been suggested in the guidelines to refine the prognostic assessment of HF patients. In amyloidosis, data on the usefulness of sST2 testing are scarce. Some studies have already assessed the prognostic value of sST2 in AL amyloidosis, but not in ATTR amyloidosis. Zhang et al. (26) showed, in 56 AL amyloidosis patients, that sST2 was a powerful and independent prognostic biomarker for all-cause mortality, with a cutoff of 12.3 ng/mL. Kim et al. (27) also showed the prognostic role of sST2 in 73 AL amyloidosis patients with a median follow-up of 18 months and suggested a cutoff value of 32.6 ng/mL. In a large cohort of 502 AL amyloidosis patients, Dispenzieri et al. (12) showed that sST2 levels >30 ng/mL were independently associated with mortality. To our knowledge, we present the first data on both AL-CA and ATTR-CA. We report that sST2 levels are higher in AL amyloidosis than in ATTR amyloidosis, even though patients with ATTR amyloidosis exhibited the most severe cardiac remodeling and dysfunction, i.e., greater left ventricular hypertrophy, more altered longitudinal strain, lower cardiac output, and higher cardiac biomarker levels. sST2 levels were strongly predictive of the outcome irrespective of other cardiac biomarkers and predictors. The cytotoxic effect of the light chains (28,29) and the inflammatory changes they induce in cardiomyocytes (30) could explain these differences. Indeed, in AL amyloidosis, light chains polymerize into amyloid deposits that probably induce more severe systemic inflammation than in ATTR amyloidosis, in which there is no underlying neoplasia and organ damage is more targeted to the heart and peripheral nervous system. Kotecha et al. (31) showed that cardiac AL amyloidosis resulted in greater myocardial edema than ATTR amyloidosis, possibly due to the specific toxicity of the light chains on the myocardium. In addition, edema defined by a native T2 >55 ms was an independent predictor of poor prognosis in AL amyloidosis. sST2 blood levels seem to be a marker of systemic inflammation in neoplasia and certain autoimmune diseases, independently of cardiac involvement (32). Compared with natriuretic peptides and troponin, sST2 blood levels are less influenced by obesity, advanced age, and chronic kidney disease (33). These potential advantages for clinicians were also observed in our study: sST2 levels were poorly associated with other biomarkers and with renal and LV function, whereas NT-proBNP and troponin levels were correlated with each other and with renal and LV function. NT-proBNP levels are strongly correlated with Hs-cTnT levels (data not shown). Of these two biomarkers, cTnT provided the main prognostic information in AL amyloidosis, but it was the opposite in TTR amyloidosis. The increase in cTnT levels is due, at least in part, to the direct myocardial toxicity of light chains in AL amyloidosis, which could be a plausible explanation for the strong predictive value of cTnT levels. In addition, sST2 levels were measured at an early stage in most AL patients, before the decrease in light chain levels. On the other hand, the increase in NT-proBNP levels is mainly due to increased LV wall stress (mainly related to LV hypertrophy and concentric LV remodeling). LV remodeling was less pronounced in AL than in TTR cardiac amyloidosis. In TTR amyloidosis, the severity of such LV remodeling and the subsequent decrease in LV function can lead to most cardiac events. Our study has some limitations.
It is a retrospective study, but we collected data from a heterogeneous population across different centers. All patients with AL amyloidosis had a complete evaluation within 2 months of diagnosis, and some patients received chemotherapy before sST2 measurement, which could impact sST2 levels. sST2 levels may have been measured in some patients during HF decompensation, and the predictive value of sST2 can differ according to the delay between decompensation and sST2 measurement (25). In conclusion, the increase in sST2 blood levels (>30 ng/L) is strongly associated with the risk of death or HF hospitalization in AL cardiac amyloidosis but not in TTR amyloidosis. Further studies are needed to estimate the change in sST2 after treatment in both amyloidosis types.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Ethics statement

The studies involving human participants were reviewed and approved by the local ethics committee CEERB (Comité d'évaluation et d'éthique pour la recherche biomédicale) of Bichat Hospital. Written informed consent for participation was not required for this study in accordance with national legislation and institutional requirements.
Two Epistemological Arguments against Two Semantic Dispositionalisms

Even though he is not very explicit about it, in Wittgenstein on Rules and Private Language Kripke discusses two different, albeit related, skeptical theses: the first in the philosophy of mind, the second in the metaphysics of language. Usually, what Kripke says about one thesis can be easily applied to the other, too; however, things are not always that simple. In this paper, I discuss the case of the so-called "Normativity Argument" against semantic dispositionalism (which I take to be epistemological in nature) and argue that it is much stronger as an argument in the philosophy of mind than when it is construed as an argument in the metaphysics of language.

The first two arguments I discussed elsewhere (see, e.g., Guardo 2012a and 2012b). In this paper I want to focus on the third. In the literature, there is a lot of debate not just about the strength of the Normativity Argument, but also about its content: different commentators have given very different readings of Kripke's remarks concerning the normativity of meaning and intention. Here I will set aside the exegetical issue, embracing without argument what may be called "the epistemological reading" of Kripke's remarks,[1] and focus on the task of assessing its strength. In this connection, I will argue for two theses. The first is that in his book Kripke discusses, even though he is not very explicit about it, two different, albeit related, problems, one in the philosophy of mind and the other in the philosophy of language (or, more precisely, in metasemantics), and so his whole discussion of semantic dispositionalism, Normativity Argument included, should be seen as twofold in the very same way: there is a normativity argument against semantic dispositionalism in the philosophy of mind, and there is another normativity argument against semantic dispositionalism in the philosophy of language. My second, and most important, claim will then be that the Normativity Argument is much stronger when viewed as an argument in the philosophy of mind.

The paper is structured as follows. In section 1, I sketch the first of the two problems Kripke discusses, the one in the philosophy of mind, and I describe the corresponding form of semantic dispositionalism. In section 2, I discuss the normativity argument against this semantic dispositionalism and argue that it is quite a strong argument. In section 3, I turn to the problem in the philosophy of language. Finally, in section 4, I discuss the normativity argument against semantic dispositionalism in the philosophy of language and show that it is much weaker than its companion in the philosophy of mind.

Semantic Dispositionalism in the Philosophy of Mind

When, talking about game theory, I utter the name "Schelling", I refer to Thomas Crombie Schelling, the American economist, not to Friedrich Wilhelm Joseph von Schelling, the German idealist. When I use the word "red", I refer to a certain class of shades. And when I say that 68 + 57 = 125, by "+" I mean the addition function. But what does this referring, this meaning, amount to? The nature of this prima facie unproblematic mental state is actually quite elusive, and much of Wittgenstein on Rules and Private Language is devoted to a discussion of the, no doubt somewhat incredible, idea that there is no such thing.
Take the case of "+". We all think that by this symbol we mean the addition function; but what does this meaning addition, rather than some quaddition function which diverges from addition only when at least one of its arguments is authentically huge, consist in? The difference cannot be a matter of the way I answer particular "+" problems, for the "+" problems I am presented with never involve really huge numbers, and addition and quaddition diverge only when we get to such numbers. Nor can we answer the challenge by trying to argue that at some point I must have entertained thoughts that fit addition but not quaddition, for such thoughts would no doubt involve language, and so the challenge would have just been moved from the case of "+" to that of the other words occurring in the thought in question: the recursive definition of addition fits addition but not quaddition, but only if by "S" I mean the successor function, and what does this meaning the successor function (rather than some other function which diverges from it only for huge arguments) consist in?

Such questions need to be answered. Saying that there is no difference between meaning addition and meaning quaddition is tantamount to admitting that there is no such thing as meaning addition. And if there is no difference between meaning addition and meaning quaddition, then there is no difference between meaning green and meaning grue (where past objects were grue if and only if they were green, while present objects are grue if and only if they are blue), and so on. Therefore, saying that there is no difference between meaning addition and meaning quaddition is saying that there is no such thing as meaning, period.

Dispositions seem to many to provide the most natural answer to this kind of question. The reason why I mean addition and not quaddition is that my dispositions track the former, not the latter. Let us say, for concreteness' sake, that quaddition starts to diverge from addition when at least one of its arguments is greater than or equal to 1,000,000; when that is the case, the result of a quaddition is always 5. And let us also assume that I have never been presented with "+" problems involving arguments greater than 999,999. That does not mean that I do not have the disposition to answer "1,000,002" if asked about "1,000,001 + 1".[2]
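To make the stipulation concrete, here is a toy Python rendering of the two functions; nothing in the philosophical point hangs on the code, of course.

```python
# "Quaddition" with the threshold fixed above: it agrees with addition
# until an argument reaches 1,000,000, and returns 5 afterwards.
def plus(x, y):
    return x + y

def quus(x, y):
    return x + y if x < 1_000_000 and y < 1_000_000 else 5

print(plus(68, 57), quus(68, 57))              # 125 125 -- indistinguishable
print(plus(1_000_001, 1), quus(1_000_001, 1))  # 1000002 5 -- they diverge
```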
Here is how Kripke (1981, pp. 22-23) introduces semantic dispositionalism:

To mean addition by "+" is to be disposed, when asked for any sum "x + y", to give the sum of x and y as the answer […]; to mean quus is to be disposed, when queried about any arguments, to respond with their quum […]. True, my actual thoughts and responses in the past do not differentiate between the plus and the quus hypotheses; but, even in the past, there were dispositional facts about me that did make such a differentiation.

And here is a more careful characterization of the view:

[…] the simple dispositional analysis […] gives a criterion that will tell me what number theoretic function φ I mean by a binary function symbol "f", namely: the referent φ of "f" is that unique binary function φ such that I am disposed, if queried about "f(m, n)", where "m" and "n" are numerals denoting particular numbers m and n, to reply "p", where "p" is a numeral denoting φ(m, n) (Kripke 1981, p. 26).

So much for the introductory remarks. Let us now turn to the normativity argument that Kripke puts forward against this first form of semantic dispositionalism.

The Normativity Argument in the Philosophy of Mind

Kripke's normativity argument against the semantic dispositionalism of the previous section is concisely stated in the following passage:

[…] ""125" is the response you are disposed to give, and […] it would also have been your response in the past". Well and good, I know that "125" is the response I am disposed to give […], and maybe it is helpful to be told […] that I would have given the same response in the past. How does any of this indicate that […] "125" was an answer justified […], rather than a mere jack-in-the-box unjustified and arbitrary response? Am I supposed to justify my present belief that I meant addition […], and hence should answer "125", in terms of a hypothesis about my past dispositions? (Do I record and investigate the past physiology of my brain?) (Kripke 1981, p. 23).

Let me unpack the passage a little bit. From a logical point of view, the argument starts with the assumption that it is a conceptual truth about meaning that one's meaning a certain thing by a certain word can be used to justify their use of that word, and that when one justifies their use of a given word in terms of what they meant, the process takes a certain characteristic form; for lack of a better term, I will say that the justifications in question are "non-hypothetical".[3]

Here is an example of what Kripke has in mind. Let us suppose that, during a conversation, I say that analytic philosophers have a great deal of respect for Schelling's work and that, taking me to be speaking of the German idealist, you comment that you have never had that impression. I realize that there has been a misunderstanding, and I clarify that I was not referring to the German idealist, but to the American economist. My meaning the American economist can be used to justify my claim that analytic philosophers have a great deal of respect for Schelling's work. And the justification process is especially straightforward; it does not rely on hypotheses but, rather, on what seems to be a form of non-inferential knowledge of my mental states: when I say something, I non-inferentially know what I mean, and I can use this non-inferential knowledge to justify my utterances.

But if it is a conceptual truth about meaning that one can justify their use of a certain word by means of their non-inferential knowledge of what they meant, then it is clear that a dispositional analysis of meaning can work only if it can account for such non-inferential knowledge, i.e., only if speakers have non-inferential knowledge of their linguistic dispositions. But, as a matter of fact, speakers do not have such knowledge. And so semantic dispositionalism is bound to fail.

I take this to be an extremely strong argument against the very notion that the mental state of meaning can be made sense of in terms of dispositions. The first, conceptual, step of the argument is virtually impossible to deny, especially when one realizes that it is even more straightforward than Kripke makes it out to be. After all, here the point is that semantic dispositionalists must make sense of the fact that we all have non-inferential access to what we mean; Kripke introduces this idea by focusing on the role that this access plays in our justificatory practices, but one does not have to go about it that way: that we non-inferentially know what we mean is quite clear in itself, even independently of this knowledge's role in our justificatory practices.
The argument's second step is quite solid, too. If dispositionalism were true, my non-inferentially knowing that I mean addition would require me to non-inferentially know, for any pair of huge numbers M and N, that I am disposed to answer with their sum if asked about "M + N". And that is a knowledge which I most definitely do not have.

Note that what I am taking to be clear is not that it is not the case that I know, for any pair of huge numbers M and N, that I am disposed to answer with their sum if asked about "M + N". This I may well know - let us say I can deduce it, with reasonable confidence, from the answers I do give to more manageable "+" problems. What I believe is clear is only that, if I do have such knowledge, it is inferential in nature.

Nor am I assuming that it is impossible for me to have the non-inferential knowledge in question. No doubt there are possible worlds in which I do have non-inferential access, down to the tiniest detail, to my current brain states, and hence to my linguistic dispositions. What I am assuming is just that, as a matter of fact, I do not have such knowledge. This is all that needs to be assumed in order for the argument to go through, since its point is that semantic dispositionalism cannot make sense of the fact that I have non-inferential access to what I mean, in this world.[4]

The Normativity Argument, viewed as an argument in the philosophy of mind, is, indeed, quite straightforward. In a certain sense, it comes down to the claim that semantic dispositionalism "[…] threatens […] to make a total mystery of the phenomenon of non-inferential, first-personal knowledge of past and present meanings […]" (Wright 1989, p. 175). In order to resist it, one should show either that this is not a real phenomenon or that, contrary appearances notwithstanding, a dispositional analysis can account for it. The first strategy looks utterly desperate,[5] while the second is inconsistent with what seem to be rather uncontroversial facts about our knowledge of our dispositions.

[4] For a more in-depth discussion of this second part of Kripke's argument, see Guardo 2014.

[5] Of course, a meaning skeptic can deny the reality of "the phenomenon of non-inferential, first-personal knowledge of past and present meanings" on the basis of the fact that, in their view, there is no such thing as meaning. However, such a move is clearly unavailable to the dispositionalist, whose goal is to vindicate our intuitions concerning this mental state.

Semantic Dispositionalism in the Philosophy of Language

In this section I turn to the first of the two theses I want to argue for, namely that in his book Kripke discusses two different problems, one in the philosophy of mind and the other in the philosophy of language, and so all he says about semantic dispositionalism, Normativity Argument included, should be seen as twofold in the very same way.[6]

[6] Of course, the problem in the philosophy of language I am about to sketch is interesting, and deserving of discussion, in its own right - independently of whether Kripke really had it in mind or not.

Let us start by coming back to the way I introduced the problem of meaning in the philosophy of mind. Following Kripke, I tried to show that the notion of meaning something by a sign is problematic by calling attention to the fact that it is not clear how to make sense of the difference between meaning addition and meaning quaddition, where quaddition was assumed to be a function which diverges from addition only when at least one of its arguments is authentically huge. Kripke defines quaddition in a slightly different way: he stipulates quaddition to diverge from addition as soon as at least one of its arguments is greater than or equal to 57. However, Kripke also assumes that we have never been presented with "+" problems involving arguments greater than 56, so the difference between his definition and mine is superficial; in both cases, quaddition is defined in such a way that the answers we gave to the "+" problems we have been presented with were consistent with both addition and quaddition. Now let me ask a question: why is this important? Why does it matter that our answers to the "+" problems we have been presented with are compatible with both functions?

The answer to this question is rather obvious: Kripke wants to build a case in which it is clear that the difference between meaning addition and meaning quaddition cannot be made sense of in terms of overt behavior, i.e. in terms of the answers we give to the "+" problems we are actually presented with. But, as clear as it is that this is what he has in mind, a little reflection is more than enough to see that Kripke's worry here does not make much sense. Overt behavior is just not the kind of thing a mental state can be identified with. Saying that my meaning addition by "+" consists in my giving (as opposed to my being disposed to give) certain answers to certain problems is not explaining what that mental state amounts to; it is saying that there is no such thing as meaning something by a sign, and then trying to substitute that concept with something else.

So now the question is: how is it that Kripke did not realize that? The answer is, I think, that while he was working on Wittgenstein on Rules and Private Language Kripke had in mind, besides the problem I described earlier, another one, too. The two problems are related, and most of the time what holds with regard to the first problem holds in the case of the second one, too (and vice versa). Therefore, Kripke does not take the trouble to explicitly distinguish between them. But the two problems are distinct nonetheless, and sometimes what makes sense with regard to one does not make sense with regard to the other. And so not distinguishing between them may lead one to worry about things that need not be worried about. What I described in the previous two paragraphs is just one such case.

But what is this other problem that Kripke had in mind? As I have already hinted, it is a problem in the philosophy of language. More precisely, it is the problem of explaining what determines the reference of a word.[7] What makes it the case that the name "Ludwig Wittgenstein" denotes a certain Austrian philosopher? What makes it the case that the predicate "being a philosopher" refers to the class of individuals which, as a matter of fact, it does refer to? And what makes it the case that "+" refers to the addition function, and not to quaddition?[8]

[7] […] that that formulation is less than optimal and, therefore, in this paper I decided to drop it and substitute it with the one just given.
Kripke's two problems are, of course, related (their relationship will look especially close if one believes that the reference of a word depends on what people usually mean by it). But they are two distinct problems nonetheless. One has to do with the nature of a certain mental state, the other has to do with the relationship between linguistic expressions and entities in the world.

It is because they are distinct problems that, sometimes, what does not make sense in the case of one does make at least some sense in that of the other. In the case of the problem of explaining the nature of the mental state of meaning something by a sign, any reference to overt behavior can be discarded out of hand as clearly irrelevant. But in the case of the problem of explaining what makes it the case that a word refers to what it refers to, overt behavior seems to be at least part of the solution: granted, taken by itself, past usage does not show that "+" does not refer to quaddition; but at least it rules out other functions, which diverge from addition also with regard to pairs of smaller arguments - or at least so it seems.[9][10]

[8] One may wonder how Kripke could fail to clearly distinguish this problem from the one described in section 1. The answer is, I think, that both problems can be rephrased in terms of correctness, and when phrased that way it is indeed quite easy to mistake one for the other. That the concept of reference has a normative dimension (and so the problem of explaining what determines the reference of a word can be rephrased in terms of correctness) is rather obvious: saying that "being a philosopher" refers to a certain class of individuals is saying that that predicate is applied correctly if and only if it is applied to a member of that class. The availability of a formulation in terms of correctness is somewhat less apparent in the case of the problem of the nature of the mental state of meaning something by a sign, for such a formulation involves the semi-technical notion of metalinguistic correctness. That being said, the idea is rather easy to get. If by "+" I have always meant quaddition, there is a sense - what Kripke calls the "metalinguistic" sense - in which for me it is correct to answer "5" if asked about "1,000,001 + 1": "5" is the correct answer in the sense that "+", as I intended to use that symbol in the past, denoted a function which, when applied to the numbers I called "1,000,001" and "1", yields the value 5. And so the problem of explaining what makes it the case that by "+" I mean addition (and not quaddition) can be seen as the problem of explaining what makes it the case that I should answer "1,000,002" (and not "5") if asked for "1,000,001 + 1".

[9] As a matter of fact, in this case appearances are misleading, for reasons I explain in Guardo 2012b and elsewhere. That being said, nothing of importance hinges on this point here.
[10] Some may take the upshot of the foregoing to be not that Kripke was interested in two distinct (and yet related) problems, but that the problem Kripke was really interested in is not the one he seems to be interested in - but, rather, the one in the philosophy of language I have just sketched. I believe that such a conclusion would be too strong. Kripke is quite clearly interested in the nature of the mental state of meaning, too. In fact, one of the things that makes it clear is his use of the Normativity Argument - which is very strong when viewed as an argument in the philosophy of mind but, as I am about to argue, rather weak as an argument in the philosophy of language.

Just as the problem Kripke is interested in is actually two problems, it is important to recognize that there are two semantic dispositionalisms, one in the philosophy of mind and one in the philosophy of language. In the philosophy of mind, semantic dispositionalism is the thesis that what makes it the case that I mean, say, addition by "+" is that I have certain dispositions, and not others - I have addition-tracking, not quaddition-tracking, dispositions. In the philosophy of language, on the other hand, to be a semantic dispositionalist is to have a certain view of what makes it the case that a word refers to what it refers to - "+" denotes the addition function because it is that function which is tracked by the speakers' dispositions concerning the use of that symbol.

And just as there are two semantic dispositionalisms, one can try to put forward a normativity argument both in the philosophy of mind and in the philosophy of language. In section 2, I argued that, in the philosophy of mind, normativity considerations are extremely effective. In the next section, I will try to show that in the philosophy of language the situation is completely different.

The Normativity Argument in the Philosophy of Language

According to the epistemological reading I am assuming here, the Normativity Argument is epistemological in nature. The argument gets called "Normativity Argument" because it makes use of the notion of justification, which is normative, but its focus on our justificatory practices is just a means to call attention to an epistemological point, and in fact the argument can be rephrased without making any mention of justifications - so that "Normativity Argument" is really something of a misnomer.

In the philosophy of mind, focusing on the epistemological core of the argument - setting aside all talk of justifications - gets us something like this: it is a fact that we have direct access to (non-inferential knowledge of) what we mean by our words; we do not have, however, any such access to our linguistic dispositions; therefore, dispositional analyses of meaning cannot account for the epistemology of this mental state, and so they can be discarded out of hand.
To me, this looks like a very strong argument. But can such considerations be generalized to the case of semantic dispositionalism in the philosophy of language? Well, in the philosophy of language, semantic dispositionalism is the view that what makes it the case that a word refers to what it refers to are the speakers' dispositions. Therefore, here, in order to get off the ground, the Normativity Argument would need to call attention to some feature of our epistemic relationship with facts about reference - and, relatedly, of our knowledge of a word's reference - that semantic dispositionalism cannot make sense of. What we need is an asymmetry between our knowledge of a word's reference, our semantic competence, and our knowledge of the speakers' dispositions. Hence, the issue of the effectiveness of "normative" considerations against semantic dispositionalism in the philosophy of language comes down to a very simple question: is such an asymmetry anywhere to be found?

To the extent that I can make sense of the notion of reference, it seems to me that the character of our epistemic relationship to the relevant facts is perfectly consistent with the idea that those facts are facts about the speakers' dispositions. The mental state of meaning a certain thing by a certain word is clearly a conscious state (a state with a phenomenal component), to which we have direct, non-inferential access. Facts about the reference of linguistic expressions, though, are not like that. Granted, that "+" refers to addition is something I am extremely confident about. It may even be said that that is something I am certain of. But I have the very same degree of confidence in the fact that my own and my fellow speakers' dispositions concerning "+" track addition, and not some other quaddition-like function. Therefore, it seems that nothing about the nature of our epistemic access to facts about reference tells against the idea that these facts are really facts concerning how we are disposed to use the words of our language.

One might try to salvage the argument by building on the fact that, in its original version, the Normativity Argument made use of the concept of justification. Of course, we have seen that, in the case of the version of the argument Kripke runs in the philosophy of mind, any mention of justifications can be removed without in any way weakening the argument. But maybe things are different when we turn to the philosophy of language; maybe here the reference to our justificatory practices is essential.

Prima facie, this is an interesting suggestion. When one realizes that the point of the Normativity Argument is epistemological, Kripke's emphasis on the notion of justification starts to look rather strange. But if it were to turn out that in the case of the philosophy of language the argument requires that concept, then the way Kripke builds it would make much more sense. That being said, I do not see how a focus on our justificatory practices could provide the kind of asymmetry we are after. And so my conclusion is that the Normativity Argument is not a serious threat to semantic dispositionalism in the philosophy of language.
Conclusion

Kripke took the Normativity Argument to show not just that semantic dispositionalism is false, but that it is clearly false, that nobody in their right mind could take seriously such a blatantly inadequate account of meaning. The standard interpretation of the Normativity Argument - according to which the point of the argument is that while meaning a certain thing by a certain word entails categorical oughts, having certain dispositions does not - makes Kripke's assessment of the strength of his argument look overly optimistic.[11] After all, that meaning a certain thing by a certain word entails categorical oughts is far from uncontroversial.[12] On the other hand, the epistemological reading I sketched in section 2 makes, I think, perfect sense of Kripke's view of the dialectic - since the argument described in that section is indeed a very strong one. But by vindicating Kripke's assessment of the merits of the Normativity Argument, the epistemological reading raises a worry: if it is true that nobody in their right mind could take seriously such a blatantly inadequate account of meaning as semantic dispositionalism, how is it that among the ranks of semantic dispositionalists we find philosophers such as (to name just a few) Simon Blackburn (1984), John Heil and Charlie Martin (1998), Fred Dretske (1981), and Jerry Fodor (1990)?

The two theses I have argued for in the previous two sections can, I think, help answer such worries. As shown in section 3, the label "semantic dispositionalism" is ambiguous. It may refer to the view in the philosophy of mind which is the primary target of Kripke's normativity considerations, but it may also refer to a thesis in the philosophy of language. And, as I have argued in section 4, when viewed as an argument against the latter thesis the Normativity Argument is quite weak. Hence, it may be that the reason why Blackburn, Dretske, Fodor, and the others found semantic dispositionalism attractive is that what they had in mind was, at least to some extent, not the view in the philosophy of mind, which is indeed blatantly inadequate, but that in the philosophy of language.[13]
Specific Immunotherapy in Food Allergy — Towards a Change in the Management Paradigm

This chapter addresses the epidemiology and natural history of food allergy, the mechanisms of immune tolerance to foods, the forms of desensitization and induction of tolerance to allergens with applications to foods, and affords an update on the experience gained with the induction of tolerance to different foods, the respective protocols and guides, efficacy and safety considerations, and adverse reaction risk factors. An evaluation is also made of the current state of biological therapy utilization in conjunction with specific immunotherapy for food allergens.

Introduction

Food allergy is one of the leading causes of allergic disease and the main cause of anaphylactic reactions and mortality due to allergic problems, producing important economic problems and social restrictions for the affected patients and their families. The increase in the prevalence and persistence of food allergy throughout the world has a significant impact not only upon patient safety but also on quality of life and healthcare expenditure. Indeed, food allergy constitutes a major public health problem.

The attitude or approach to food allergy has always been to avoid the cause and adopt measures against adverse reactions in the event of accidental intake. However, in the last few years a new management strategy has been explored: the active induction of food tolerance. One of the most promising therapies is desensitization to specific food allergens through oral or sublingual immunotherapy, and research in this field is advancing quickly for some established allergens. In this respect, the technique has demonstrated its effectiveness, and its transfer to clinical practice is presently undergoing evaluation.

Cow's milk

Most patients show a favorable course, with disappearance of the allergy in up to 83% of all subjects by 5 years of age. The specific IgE levels are a good predictor of tolerance, though recent publications indicate that longer periods of time are currently needed to acquire natural tolerance, and that tolerance now develops in adolescence and not in the early schooling period as was common in the past [15,16]. Despite measures of caution, accidental intake still occurs, often on a day to day basis in the home [17], with the description of even fatal anaphylactic reactions [18].

Egg

Allergy to chicken egg (usually to egg white) is the second most common form of food allergy in pediatric patients, being observed in 1-3% of all children [19]. The underlying pathogenesis is mainly IgE-mediated. Approximately two-thirds of all patients acquire natural tolerance [13,20,21], though a recent study has evidenced the persistence of egg allergy in 42% of the patients upon reaching adolescence [22]. This change in tendency may contribute to increasing the number of adults with allergy to egg.
In this regard, it has been estimated that 0.2% of all adults are allergic to egg [23], so this figure is likely to increase.

Peanut

Allergy to peanut is one of the most frequent forms of food allergy in western countries, and can give rise to serious IgE-mediated reactions in response to even small intakes or exposure levels. The condition is found in up to 1.8% of all children in the United Kingdom [24] and in 1.3% of the adult population in the United States [23]. When diagnosed by provocation or challenge testing, the prevalence reaches 3% of all children in Australia [7]. The prevalence of peanut allergy appears to be increasing, and most individuals continue to suffer allergy to this food in adult life [25]. Indeed, only 20% of the affected patients overcome peanut allergy [26], and the percentage of tolerant subjects is related to the degree of sensitization [27]. Over 15% of all affected patients suffer accidental exposure on a yearly basis [28,29].

Multiple foods

Allergy to multiple foods is important, since up to 30% of all allergic children suffer allergy to more than one food [6,30] - the magnitude of the condition increasing with the degree of atopy of the patient. These patients have poorer quality of life than those with allergy to a single food [31], and are at an increased risk of suffering nutritional deficiencies [32]. Likewise, patients with allergy to multiple foods have lesser chances of acquiring natural tolerance to the implicated foods [22].

Current treatment of food allergy

The traditionally recommended approach to the management of food allergy consisted of strict avoidance of the causal allergen; early recognition of the allergic reaction; and the availability of adrenalin to deal with serious reactions. However, it is known that strict avoidance is very difficult to achieve, and is limited by difficulty in interpreting food labels [33] and by the existence of hidden allergens in commercial foods [34]. Accidental intake is therefore common, and can be expected to occur in up to 50% of all patients in the course of a two-year period, even in very cautious patients. Undertreatment is moreover a common problem [35].

Management of anaphylaxis

The main risk posed by food allergy is the induction of IgE-mediated systemic reactions. In this regard, food allergy is the leading cause of both anaphylaxis and mortality due to anaphylaxis. Between 40-100% of all deaths attributable to anaphylaxis in patients with food allergy are due to commercial foods or foods prepared outside the home [18,36,37]. Management includes teaching the patients and caregivers to quickly recognize the symptoms of anaphylaxis, promptly self-inject adrenalin, and notify the emergency care services [38]. However, difficulties are found regarding correct use on the part of the caregivers [39], and in assuming the responsibility of care on the part of the patients [40] - particularly in the case of adolescents [41,42]. In such patients, one-fourth of all anaphylactic episodes occur outside the home. It is therefore necessary to instruct the caretakers in school on how to handle anaphylaxis, and to reinforce self-care instructions among these adolescents [43].
Quality of life

The need for a strict avoidance diet, the high probability of accidental exposure, and the risk of anaphylaxis in food allergies alter the life of the patients and their families, and generate anxiety and psychosocial stress, with a negative impact upon quality of life [44-46], to an extent greater than that observed with other chronic disorders of childhood [47,48]. This loss of quality of life affects even the daily relations of the patient, with an increased frequency of bullying in such children [49]. Allergic children are also at an increased risk of suffering abuse.

Economic costs

Food allergy implies an important economic cost [50], reaching an estimated 25 billion USD each year in the United States - the largest part of this sum (approximately 20 billion USD) being borne by the families as direct costs, working hours lost, visits to the emergency service, etc. [51]. The generated costs are not only of a personal nature but moreover also affect the healthcare services, the food industry, the caregivers, and society as a whole [52].

Immunology of food tolerance

The gastrointestinal tract (GIT) is constantly exposed to an enormous number of exogenous antigens, including commensal bacteria and ingested proteins. In this respect, the GIT is the most relevant site of exposure to antigens in the entire body, and therefore of antigen absorption and presentation to the host. An epithelial layer separates the allergens from the lymphocyte population, antigen-presenting cells (APCs) and other immune cells of the lamina propria, which together constitute the so-called mucosa-associated lymphoid tissue (MALT). Within the latter, the dendritic cells (DCs) interact with the food allergens, determining the outcome of the adaptive response (immunity versus tolerance) [53]. In this respect, immune tolerance is defined as suppression of the antigen-specific cellular or humoral immune response.

Following the intake of proteins with the diet, enzyme-mediated digestion reduces their immunogenicity, probably through destruction of the conformational epitopes. However, other foods sharing common characteristics (molecular weight < 70 kDa, linear epitopes, water solubility) are resistant to both physical and chemical degradation, and thus maintain their allergenicity upon reaching the small intestine. Under normal conditions, the intact macromolecules are taken up by a transcellular transport mechanism, and the antigenic material is deposited through the basolateral surface of the epithelial cell layer; as a result, a significant amount of food allergens reach the systemic circulation following a meal. Another antigen uptake mechanism consists of direct antigen presentation to the CD11c+ dendritic cell population. The function of these cells is related to macrophage activity, and there is evidence that the CD11+ macrophage population plays an important role in the T cell-mediated antigen-specific response during the development of immune tolerance to food antigens.

More recent evidence supports the idea that impairment in regulatory T cell (Treg) induction and innate immunity might also contribute to Th2 polarization in early life. Prospective birth cohort studies have shown that IgE production in response to egg, milk, and peanut commonly occurs even in healthy infants.
In non-allergic subjects, this Th2 bias appears to be transient, and IgE levels decrease, possibly through a counterbalancing induction of antigen-specific Th1 responses (i.e., IFN-γ); in contrast, these Th2 responses consolidate and strengthen in allergic children, perhaps through the induction of IL-4 signaling [54].

A full 80% of all plasmatic cells are located in the intestine. The small bowel contains cells generating pIgA (polymeric IgA) (80%), followed in order of prevalence by secretory pIgM (polymeric IgM) (15-20%) and secretory IgG (3-4%). IgA deficiency in children has been reported to be associated with an elevated frequency of food allergy. In this context, it has been postulated that IgA plays a protective role in the context of food allergy [55].

The presence of antigen-specific IgG in the intestinal lumen can exert a significant influence upon immunity to food and flora. IgG-mediated antigen uptake through FcRn in the neonatal intestine is tolerogenic, and suggests that antigen exposure through breast milk would be a helpful preventative strategy, particularly when the mother has existing IgG antibodies to that antigen. Once allergic sensitization has been established, it is not clear whether IgG-facilitated antigen uptake through FcRn would amplify existing proallergic adaptive immune responses or promote active immune tolerance. Studies are needed to address the influence of FcRn on responses to food antigens [55].

Immunoglobulin E can be found in secreted form under the conditions of allergy and helminth infection - this being associated with an epithelial receptor for IgE. IgE-facilitated antigen uptake results in increased delivery of antigen to allergic effector cells, activates proinflammatory pathways in intestinal epithelial cells, and enhances antigen delivery to dendritic cells. IgE-facilitated antigen uptake by B cells can also have an adjuvant-like effect on the resulting adaptive immune response.

This complex interaction among physical factors, antigen characteristics and timing, together with the effects of innate immune stimulation, conditions the development of oral tolerance through a common pathway directly or indirectly influenced by APCs. It has recently been shown that mucosal dendritic cells are probably the key element in determining allergic sensitization versus tolerance in naïve subjects. Multiple tolerance mechanisms probably intervene, and may include anergy or deletion of T cells. There is evidence relating oral tolerance to the capacity of the mucosal dendritic cells to induce forkhead box protein 3 (Foxp3)+ Treg cells in the mesenteric lymph nodes (MLNs). CD103, retinoic acid (RA), indoleamine-2,3-dioxygenase, co-stimulator molecules of the B7 family and TGF-β appear to act by allowing dendritic cells to induce such conversion. In contrast, the dendritic cells of the lamina propria do not express CD103, and are proinflammatory. This suggests that tolerogenic dendritic cells could inhibit site-specific signaling of the intestinal epithelium through interaction with E-cadherin (a CD103 ligand). This is probably the microenvironment provided by the mucosa to allow antigen presentation resulting in either inflammatory response or tolerance within the MLNs. Antigen-presenting cells other than conventional dendritic cells might also participate in oral tolerance induction. Oral tolerance might be operative through multiple mechanisms in multiple tissue compartments.
For example, intestinal macrophages can also efficiently induce Foxp3+ Treg cells in an IL-10-, RA- and TGF-β-dependent fashion. Plasmacytoid dendritic cells, a specialized dendritic cell subset known for their ability to produce vast quantities of type I interferons, can also activate inducible Foxp3+, IL-10-producing Treg cells. Integration of environmental information by dendritic cells results in specific activation and differentiation of T cell subsets, including the Foxp3+ Treg cells, as the primary effectors of oral tolerance.

Repeated exposures to low doses of antigen are thought to be the optimal stimulus for the development of Treg cells, which suppress immune responses through soluble or cell-bound regulatory cytokines such as IL-10 and TGF-β. Natural CD4+CD25+ Treg cells develop in the thymus and express the specific transcription factor Foxp3, which confers regulatory function to these cells to block both Th1 and Th2 responses. Inducible regulatory T cells (iTreg) are CD4+ cells that can differentiate from naïve precursors, acquiring regulatory properties in the periphery after exposure to antigen. In many cases these cells acquire the expression of Foxp3, and they exist in at least two forms distinguished by the anti-inflammatory cytokines produced: IL-10 (Tr1 cells) and TGF-β (Th3 cells). Whereas natural CD4+CD25+ Treg cells are thought to primarily govern peripheral tolerance to self-antigens, iTreg cells are more likely responsible for tolerance to exogenous substances, such as allergens [56].

Mechanistically, functional allergen-specific Treg cells can attenuate allergic responses through:

1. the suppression of mast cells, basophils, and eosinophils;
2. the suppression of inflammatory dendritic cells and induction of tolerogenic dendritic cells;
3. the suppression of allergen-specific Th2 cells, hence contributing to T cell anergy; and
4. the early induction of IgG4 and late reduction of IgE production.

All of these mechanisms can be mediated through the secretion of IL-10 and TGF-β, or through cell contact-dependent suppression. The Treg cells therefore appear to play an important role in tolerance following immunotherapy in food allergy [57,58].

Active therapy against food allergy

Between 15-20% of all patients with allergy to cow's milk and egg will remain allergic, while those who do acquire tolerance will take years to do so. In contrast, most patients with allergy to fish, crustaceans, peanut or nuts will remain allergic to these foods for life [59]. The health risks for such patients, the alterations in their diet, social discrimination, impaired quality of life, and the costs generated by such illnesses have led to re-evaluation of the passive management strategies with a view to establishing active treatment options - replacing the management-through-avoidance paradigm with an active intervention approach based on specific desensitization and tolerance of the causal food. In this context, active intervention has been considered for years with the purpose of solving this health problem, particularly in patients with a high risk of anaphylaxis and in those who will not benefit from natural resolution of the problem. Such intervention involves nonspecific therapeutic measures and, more recently, specific treatments for each type of food. Immunotherapy would be a plausible option in view of its demonstrated efficacy in patients with allergy to aeroallergens and stinging insect venom [60].
For this reason, the use of immunotherapy in application to food allergens has been postulated for over two decades - giving rise to a series of experiments and producing a body of knowledge over the last decade, referred particularly to the oral route, which we will try to explain in this chapter. Considering that non-IgE mediated allergy does not appear amenable to such treatment strategies, we will only deal with IgE-mediated food allergy.

Immunotherapy in application to foods

The site of antigen administration and contact is important for the efficacy and safety of specific food immunotherapy. According to the administration route involved, we can distinguish among four different types of specific food immunotherapy: subcutaneous immunotherapy (SCIT), epicutaneous immunotherapy (EPIT), sublingual immunotherapy (SLIT) and oral immunotherapy or specific oral tolerance induction (SOTI) [61].

Specific Subcutaneous Immunotherapy (SCIT)

There is extensive experience with the use of subcutaneous immunotherapy in application to aeroallergens and insect venom - more than 100 years having gone by since the technique was first developed - though very little experience has been gained to date with its use in application to food allergy. In patients with pollen-fruit syndrome [62], immunotherapy has been found to be effective against aeroallergens that share antigenicity with certain plant foods [63-65], securing desensitization to such allergens and foods. However, the utilization of specific food allergens via the subcutaneous route (e.g., peanut) produced important [66] and serious adverse reactions, with the death of one patient following an error in the composition of the placebo dose, which contained allergen. As a result, and despite evidence of a certain degree of efficacy, these problems caused the early evaluation attempts to be suspended [67]. In effect, since then, this specific immunotherapy administration route for treating food allergy has been discontinued. However, the introduction of recombinant allergens, the elimination of epitopes for IgE with the maintenance of T cell-recognized epitopes [68], immunotherapy with peptides, DNA immunotherapy, and other advances that are currently in the preclinical investigation phase will make it possible to resume studies with this administration route.

Specific Epicutaneous Immunotherapy (EPIT)

Specific epicutaneous immunotherapy (EPIT) is based on the capacity of the Langerhans cells of the epidermal basal layer to migrate and reach the lymph nodes, where they regulate the cells implicated in allergic inflammation [69,70]. A pilot study in patients with allergy to cow's milk [71] demonstrated a modest increase in the amount of milk tolerated, with only local symptoms, none of which proved serious. A recent phase IIa double-blind, placebo-controlled (DBPC) efficacy study in patients with allergy to peanut (ARACHILD) [72] was able to secure a more than 10-fold increase versus baseline in the tolerated levels after 18 months of treatment in 67% of the patients. The study has currently been extended to 36 months, with good safety results. Other studies involving this same administration route for the induction of peanut desensitization are currently also in course.

Specific Sublingual Immunotherapy (SLIT)

Specific sublingual immunotherapy (SLIT), which makes use of the capacity of the Langerhans cells of the oral mucosa to suppress allergic cell response [69,73,74], has been successfully applied against aeroallergens in rhinitis and asthma [75,76].
In the same way as SCIT, the technique has afforded improvement in patients with plant allergy exhibiting cross-allergenicity with certain pollens. In this regard, SLIT has been used against the latter [77] and against latex - avoiding the increase in foods to which reactions occurred [78]. Use has been made of SLIT with specific food allergens such as kiwi [79]. One case report documented persistent tolerance after 5 years [80]. DBPC studies have been made with hazelnut [81], with the maintenance of protection over the long term [82], and with peach [83], in which tolerance could be increased 3- to 9-fold. At present, an observational study is underway to evaluate the efficacy and safety of SLIT with Pru p 3 extract in pediatric patients. A pilot study with cow's milk [84] was able to increase the tolerated amount of milk three-fold in a group of 8 children. A placebo-controlled SLIT study with peanut [85] in turn secured a 10-fold increase in the amount of peanut that could be ingested without symptoms after 44 weeks of therapy in the active treatment group. A more recent placebo-controlled SLIT study with peanut [86] found 70% of the patients to be able to ingest 5 g of peanut or increase the tolerated amount up to 10-fold versus the amounts tolerated at baseline, after 44 weeks of treatment.

Specific Oral Immunotherapy / Specific Oral Tolerance Induction (SOTI)

Specific oral tolerance induction (SOTI) is currently the most widely evaluated approach, having exhibited effectiveness over the short and long term, though with limitations in relation to its safety profile. Tolerance is taken to represent non-reactivity to the allergen even after a period of time without contact with the allergen. In this regard, desensitization constitutes a prior step, but does not guarantee lasting tolerance. SOTI is able to achieve desensitization in a large percentage of patients - this being enough to avoid reactions secondary to accidental ingestion and to allow incorporation of the food into the diet. Such desensitization is possibly the most important objective of the technique, since the number of patients who achieve permanent tolerance is considerably smaller. As a result, it has been proposed that SOTI should actually be referred to as specific oral desensitization induction.

A prospective comparison of SOTI versus SLIT [87] has confirmed greater efficacy if SLIT is followed by a SOTI phase involving high maintenance doses. A retrospective comparison of SLIT and SOTI in application to peanut allergy [88] has shown greater efficacy with the latter technique, though with more adverse effects. In this respect, it seems that SLIT is comparatively safer but less effective than SOTI in application to foods. Since SOTI is the most widely investigated type of specific immunotherapy in food allergy, we will address the technique in a little more depth.

Mechanism of immune tolerance in SOTI

SOTI acts at the intestinal dendritic cell level [56,89], lowering the specific IgE levels and increasing the specific IgG4 titers, with an increase in IL-10, IL-5, IFN-γ, TNF-α and Foxp3 cells. Studies involving T cell microarrays have shown inhibition oriented towards apoptosis at the genetic level [90]. The technique also reduces basophil IgE receptor production [91].

Regimens and phases in SOTI

The technique aims to induce desensitization and subsequent tolerance by administering small amounts of allergens that cause no clinical manifestations or only mild manifestations.
The amounts are gradually increased over time until the ingested allergen level reached is considered to protect against adverse reactions and to secure tolerance after ingestion of the food over the following months. Three phases can be distinguished during this process. On the first day, an initial rush-type rapid desensitization phase is established, followed by an escalation or up-dosing phase involving daily administration of the tolerated dose in the home of the patient, with controlled periodic up-dosing (usually on a weekly basis) until the maintenance or desensitization dose is reached. This represents the start of the maintenance phase, in which the maximum dose reached is ingested either daily or on alternating days over the subsequent months in order to maintain desensitization and protection against accidental exposure, and to secure full tolerance in at least some of the patients. This in turn must be confirmed through provocation testing after an exclusion period or treatment cessation period of one or more months.

The different management protocols use these phases in different ways as regards the doses and times. Some protocols prolong the initial rush phase to reach maintenance dosing within about 5 days [92-94], avoiding the weekly up-dosing phase, which usually covers 2-4 months. This practice typically implies more adverse effects. In contrast, the initial protocols used by Patriarca et al. did not use the rush phase and prolonged the escalation phase over more months, with increments in the home of the patient introduced on a daily basis or every few days in the form of very small amounts, until the full desensitization dose was reached [95]. The mentioned group continues to maintain this protocol in modified form [96,97]. There is also some experience with the use of a SLIT desensitization phase followed by SOTI [87] - this being an option in those patients who fail to tolerate the initial rush phase. In other studies the rush phase is prolonged to two days and the up-dosing or dose escalation phase to 16 weeks [98,99]. The authors use a one-day rush phase and a 10-week dose escalation phase. The most recent protocols typically contemplate all three phases, plus follow-up evaluation after the last phase, which is essential in order to confirm tolerance.

Peanut

In a randomized, placebo-controlled SOTI study, 28 children between 1-16 years of age with peanut allergy were randomized 2:1 to active treatment or placebo. Three patients in the active treatment group abandoned the study due to adverse effects, while the rest reached the 4000 mg dose, and after 12 months were able to tolerate 5000 mg (20 peanuts), versus 280 mg in the control group (p<0.001). Significant reductions were observed in the size of the prick test and in the specific IgE and Th2 cytokine levels - with a significant increase in specific IgG4 titers and Treg cell count [100]. In another study, 29 patients completed the protocol and were able to consume 3.9 g of peanut protein, with a significant decrease in the size of the prick test and in basophil activation after 6 months of maintenance dosing. Specific IgE was seen to decrease, with a significant increase in specific IgG4 between months 12-18 of this treatment phase, together with elevations in the levels of IL-10, IL-5, IFN-γ, TNF-α and Foxp3 T cells, with demonstration of inhibition oriented towards apoptosis at the genetic level [90].
In another non-controlled study, 23 patients with anaphylaxis due to peanut allergy diagnosed by DBPCFC received SOTI in the form of a 7-day rush protocol, until a dose of 0.5 g was reached. After 8 weeks of daily intake and a two-week avoidance phase, DBPCFC was repeated, with tolerance of only 0.15 g of peanut. Twenty-two patients continued with the maintenance phase, and after an average of 7 months, 13 of them (60%) reached the protective dose, with a final tolerance of 0.25-4 g of peanut (initial tolerance being 0.02-1 g). Three of the 22 patients suspended intake during this phase. A significant increase was recorded in specific IgG4, with a decrease in Th2 cytokine levels [101].

Cow's Milk (CM)

A total of 22 children with allergy to cow's milk were randomized 2:1 to SOTI with 500 mg of CM protein (15 ml of CM) daily during four months, or to placebo. At final challenge testing, the active treatment group tolerated 5140 mg, versus 40 mg in the control group. The IgE levels did not vary, though the IgG4 levels increased significantly in the SOTI group [102]. Another study selected 97 children with DBPCFC-diagnosed allergy to CM with serious reactions and very high anti-CM IgE titers. Sixty patients reacted to very low doses. The subjects were randomized 1:1 to SOTI or an exclusion diet. After one year of treatment, 36% of the patients in the SOTI group were fully tolerant (> 150 ml), 54% were able to consume limited amounts of milk (5-150 ml), and 10% were unable to complete the protocol because of persistent respiratory or digestive problems. None of the control subjects passed the final DBPCFC test [103].

A case series has described CM desensitization as a result of a rush-type SOTI protocol in four patients, with long-term desensitization being achieved in all cases [93]. A multicenter study involving 60 pediatric patients with a mean age of two years (range 24-36 months) randomized the subjects 1:1 to an exclusion diet or SOTI (2-day rush phase followed by a 16-week escalation phase until reaching a maintenance dose of 200 ml of CM). After one year, 90% of the patients in the SOTI group were fully tolerant, versus 23% of the patients in the exclusion diet group [98]. Another study randomized 30 patients diagnosed with CM allergy by DBPCFC to SOTI or placebo (soya milk), with an 18-week up-dosing phase and no prior rush period. Thirteen patients were maintained in the SOTI group, of which 10 reached the final dose of 200 ml (77%). None of the control subjects passed the DBPCFC test [104]. Thirteen patients with CM allergy were subjected to SOTI with no initial rush phase and with an 18-week escalation period until a dose of 200 ml of CM was reached. The three controls received soya milk. Tolerance was achieved in 8 of the patients in the SOTI group (7 cases of full tolerance and one of partial tolerance), while the controls maintained DBPCFC positivity [105].

In another study, 28 children between 6-14 years of age with CM allergy (36% of an anaphylactic type) were recruited after oral provocation testing and randomized in a double-blind, placebo-controlled SOTI study. Sixteen out of 18 patients in the active treatment group and 8 out of 10 in the placebo series completed the study. After one year of SOTI, 81% of the children consumed 200 ml of CM or equivalent products. After confirming the absence of tolerance among the controls, the latter were enrolled in a similar protocol and were seen to tolerate 200 ml of CM after 6 months.
After 3.5 years, tolerance was maintained in 79% [106]. Lastly, in another study, 60 children aged between 13 months and 6.5 years were randomized to SOTI or an exclusion diet. After 6 months, 89% of the patients in the active treatment group tolerated 200 ml of CM, versus 60% of the controls (p<0.025). A decrease in the size of the prick test was recorded in the active treatment group, while the size was seen to increase among the controls [107].

Egg

A total of 55 patients with egg allergy were included in a randomized, double-blind, placebo-controlled trial. Forty patients received SOTI. Of these, 55% passed a 5 g oral egg white powder provocation test after 10 months, versus none of the individuals in the control group. In turn, 75% of the subjects in the active treatment group passed a 10 g oral provocation test after 22 months. Desensitization was associated with a decrease in specific IgE, an increase in IgG4 after 10 months, a reduction in basophil activation, and a decrease in the size of the prick test after 22 months [108].

Eighty-four patients with egg allergy who tolerated up to 1 g of raw egg white were randomized to SOTI or an avoidance diet. After 6 months, 69% of the patients in the active treatment group and 51% of the controls passed an oral provocation test, and the mean prick test size and specific IgE titers decreased significantly in the active treatment group. Furthermore, the patients subjected to SOTI who failed to pass the provocation test had comparatively greater tolerance and lesser severity of symptoms [107].

Of 19 patients between 4-14 years of age who started SOTI for egg allergy, 16 achieved full tolerance (85%), being able to consume a 10 g dose of powdered pasteurized egg (equivalent to one egg). In addition, a decrease was recorded in the population of effector-memory CD4+ T cells, with an increase in a subclass of CD4+ T cells with a hypo-proliferative and non-reactive phenotype [109]. These authors also recorded an increase in Treg cell count in those individuals who reached tolerance [110].

In a study comprising 72 patients between 5-15 years of age, the presence of egg allergy was confirmed by open oral challenge testing. Forty subjects were randomized to SOTI with powdered pasteurized egg - tolerance being achieved in 92.5% of them, and in 21.8% of the controls [111].

In a retrospective review of 43 children with egg allergy, 30 were found to be willing to participate in a SOTI study with egg, involving maintenance of the maximum tolerated dose two or three times a week. The 13 patients who declined to participate constituted the control group. Nine of the 30 children in the active treatment group reached tolerance of one egg after one year of SOTI - a figure that increased to 17 out of 30 subjects after two years of treatment. In comparison, none of the controls achieved tolerance. Of the 14 desensitized patients who could be followed up, 11 reached full tolerance [113]. A randomized and controlled study recorded partial tolerance (10-40 ml of raw hen's egg emulsion) in 90% of the patients (n=9) subjected to SOTI with egg during 6 months, versus none of the controls [114].

Another meta-analysis of SOTI with cow's milk selected 16 publications, of which 5 were clinical trials. The studies were generally small and presented methodological inconsistencies, with low quality evidence. Each study used a different SOTI protocol. A total of 196 pediatric patients were studied (106 subjected to SOTI and 90 controls).
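The pooled results below are reported as relative risks; as a reminder of what the bracketed figures mean (this is simply the standard epidemiological definition, not anything specific to the cited meta-analysis):

\[
\mathrm{RR} = \frac{p_{\text{SOTI}}}{p_{\text{control}}},
\]

where \(p_{\text{SOTI}}\) and \(p_{\text{control}}\) are the proportions of patients reaching the outcome in each arm. Since a pooled meta-analytic estimate weights the individual trials, it need not coincide with the naive ratio of the aggregate percentages (here 62%/8% ≈ 7.8, against a pooled estimate of 6.61).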
Sixty-two percent of the patients in the SOTI group and 8% of the controls reached tolerance of about 200 ml of cow's milk [relative risk 6.61 (95% CI: 3.51-12.44)]. In addition, another 25% of the subjects in the SOTI group achieved partial tolerance (10-184 ml), versus none of the controls [relative risk 9.34 (95% CI: 2.72-32.09)]. None of the studies evaluated the patients some time after immunotherapy suspension. Adverse reactions were common, affecting 92% of the patients, though most were mild and of a local nature. One out of every 11 patients receiving SOTI required intramuscular adrenalin. The studies conducted to date have involved small numbers of patients, and the quality of the evidence is generally low. The current data show that SOTI can lead to desensitization in the majority of individuals with cow's milk allergy, though the development of long-term tolerance has not been established. A major drawback of such therapy is the frequency of adverse effects, although most are mild and self-limited. The use of parenteral epinephrine is not infrequent [116].

Regarding SOTI for allergy to cow's milk, achieving desensitization or even tolerance to cow's milk does not imply desensitization to milk from other mammalian species to which the patient may be sensitized [117,118] - a circumstance observed in 25% of the cases in one study [119]. Consequently, the exclusion of milk and milk products from other species must be maintained if exposure testing does not confirm the existence of tolerance to such foods.

Other foods

Individual SOTI studies have yielded positive results in reference to other foods such as tomato [94], celery [120], apple [121] or wheat [122,123].

Multiple foods

A field of great interest refers to the study of simultaneous allergy to multiple foods (up to 5), since one-third of all patients with food allergy are allergic to two or more different foods. In this respect, a trial has been carried out in 40 patients [124], of which 15 were allergic only to peanut, while 25 were allergic to more foods. The patients received up to 4 g of each food during the maintenance phase, and tolerance was seen to increase 10-fold versus the initial DBPCFC dose. The same authors have conducted a study of desensitization to multiple foods with omalizumab therapy, which allowed a shortening of the dose escalation phase [125]. The authors concluded that there is strong evidence that oral immunotherapy is able to induce immune changes and promote desensitization to different foods. However, oral immunotherapy should not be used outside the defined experimental conditions [126].

Safety

The side effects associated with SOTI are generally mild or moderate, with a predominance of oropharyngeal manifestations that are easy to deal with [100-102,127,128]. However, more serious reactions have also been reported, such as generalized urticaria / angioedema, wheezing and dyspnea, laryngeal edema, intense abdominal pain and recurrent vomiting. This latter adverse effect is the most limiting problem, preventing the continuation of SOTI in 10-15% of the patients [129]. There have been reports of eosinophilic esophagitis during the maintenance phase in patients who did not have this problem before SOTI [130]. Although large series report a low incidence (2%), in our experience the problem may be more frequent (10% in a small series of patients).
In this respect, the suspicion of eosinophilic esophagitis should be reinforced in cases with classic symptoms such as retrosternal pain and dysphagia, or with less specific manifestations such as recurrent cough or digestive discomfort. In a study of SOTI with peanut [100], most patients suffered some symptoms. During the first day of the up-dosing phase, two subjects abandoned the study, another two made use of adrenalin, and 47% developed symptoms requiring antihistamines. Symptoms were observed in 1.2% of the 407 doses during the escalation phase. Despite this observation, however, 16 of the 19 patients subjected to SOTI were able to tolerate 4000 mg with only minimal adverse effects. Likewise, in SOTI with cow's milk, 45% of the doses produced symptoms, versus 11% in the placebo group - most manifestations being mild and of an oropharyngeal nature [102]. During the first year of SOTI with egg, 25% of the 11,860 active treatment doses were associated with symptoms, versus 4% of the 4018 placebo doses [127]. The frequency of adverse reactions is 10 times greater if the patient is moreover asthmatic [17].

Triggering factors

Viral infections, menstruation and physical exercise have been associated with reductions in the tolerance threshold among patients who are already receiving SOTI maintenance doses [131]. The development of other acute disease conditions may also require temporary SOTI dose adjustment [101]. In a long-term SOTI follow-up study, 22% of the patients with allergy to cow's milk who had previously completed SOTI and had passed provocation testing with the food reported limitations in milk intake due to symptoms, often associated with physical exercise (25%) and disease processes (6%) [132]. Rush-type SOTI protocols designed to shorten the interval required to reach maintenance therapy have been associated with an increase in the incidence of undesired symptoms and adrenalin use [101,103,133,134].

Prevention

It is advisable to avoid physical exercise in the hours before and after administration of the dose, and to temporarily reduce the amount ingested by 50% in the case of viral disease or respiratory symptoms. It is also advisable to administer the dose with other foods in order to avoid gastrointestinal adverse effects. Antihistamines have been used as premedication, and one study has used antileukotrienes to control the gastrointestinal manifestations [135]. Thus, SOTI appears to be effective in securing desensitization, but it is not without risks. Different meta-analyses indicate that the existing body of information is still insufficient to guarantee the efficacy of the technique, and concern is still expressed about the safety of SOTI. In this respect, further studies are recommended before considering transfer of the technique from the experimental setting to clinical practice [115,116,126,136-138].

Long-term outcome

Does desensitization to a food imply long-term tolerance or only temporary tolerance? Although a considerable number of years have gone by since the first inductions of oral tolerance to food were performed, few controlled studies have examined what happens after several years of SOTI. Such information is crucial in order to define the frequency with which full tolerance is achieved, as well as to identify the underlying patient-related factors involved, and to characterize the different desensitization options.
A communication published in 2005 [139] reported the loss of tolerance in two patients after a two-month exclusion period following tolerance of the allergen maintenance dose during several months (cow's milk allergy with 27 weeks of tolerance of the 100 ml dose in one case, and egg allergy with 39 weeks of tolerance of half an egg in the other). In turn, a third patient who took 52 weeks in reaching the maximum dose again developed symptoms after four weeks of exclusion. In these individuals the specific IgE titers did not exceed class IV. The authors postulated that tolerance is dependent upon a series of variables such as the baseline tolerance level, the duration of SOTI, the elimination diet involved, and the course of the illness at individual level. Patients with a low probability of natural remission of their allergy may require long-term maintenance therapy. There is little information on the interval between the doses in the maintenance phase needed in order to preserve the acquired tolerance, though for safety reasons, daily intake should be recommended. A study published in 2007 [140] reported the follow-up data corresponding to four patients who had undergone desensitization to cow's milk three years earlier. Three of them were found to have no detectable levels of specific IgE against casein, and presented no symptoms during intake -though no exclusion period followed by reintroduction of the allergen had been applied to ensure definitive tolerance. The first long-term follow-up study on patients with cow's milk allergy [141] revealed that 86% of the individuals reached desensitization (18 out of 21 patients), and tolerance persisted in 14 out of 20 individuals (70%) upon evaluation an average of four years and 8 months after the start of desensitization [142]. In addition, none of the patients needed to use adrenalin. Since no control groups were established, these studies could not rule out the possibility that tolerance in some of these individuals may have been attributable to natural mechanisms, and were unable to establish whether tolerance persisted after the cessation of daily allergen intake. In another study [143], 15 patients with successful induction of desensitization to cow's milk were subjected to oral provocation after 13 to 75 weeks with doses of 16 g. This dose was tolerated by 6 of the patients, though here again there was no prior exclusion period. During the follow-up period, adverse reactions were recorded that required adrenalin injection on 6 occasions (0.2% of the doses). In a much larger patient sample [144], 66 subjects were diagnosed with allergy to cow's milk by DBPCFC (including 44 anaphylactic cases). Initial tolerance was achieved in 64 patients (97%) -complete in 51 (> 150 ml) and partial in 13 (5-150 ml) -and was seen to persist after one year of follow-up, with significant reduction of the specific IgE titers and of the size of the prick test. As in the above studies, tolerance was not evaluated after an exclusion period. In another study [145], following SOTI with egg or milk during a mean period of 21 months, tolerance as demonstrated by DBPCFC was recorded in 36% of the patients two months after suspending SOTI. Surprisingly, tolerance in the control group reached 35%, indicating a lack of efficacy of SOTI in achieving tolerance. In another non-controlled study, 7 patients received SOTI with egg [146], and four of them passed DBPCFC testing after 24 months. 
In turn, two of these four individuals passed a second DBPCFC test three months after suspending SOTI. The commented relatively low yet promising success rates in inducing tolerance were improved upon in a follow-up study [112] involving a SOTI dosing regimen in which the maintenance dosage was increased stepwise until the levels of specific IgE against egg were < 2 kU/l. At this point DBPCFC was performed, and those patients who passed the test again underwent DBPCFC one month after suspending SOTI. The 6 patients that passed the first test also passed the second test. Another study [87] first administered sublingual immunotherapy (SLIT) with milk, followed by patient randomization to either continuation with SLIT or conversion to SOTI with two different maintenance doses during 80 weeks. Six weeks after the end of immunotherapy, one of the 10 patients in the SLIT group (maintenance dose 7 mg/day) was found to be tolerant, versus three of the 10 patients administered 1000 mg of milk as maintenance in SOTI, and 5 of the 10 patients administered 2000 mg of milk as maintenance in SOTI. Although this was a non-controlled study with few patients, the results obtained support the idea that higher doses and longer durations of immunotherapy can afford a sustained lack of allergic responses after the end of therapy, or tolerance of the allergen. A placebo-controlled study of SOTI with egg [127], involving 40 children in the active treatment group and 15 in the placebo group, recorded desensitization in 75% of the patients after 22 months, with a 28% tolerance rate after 24 months as established by DBPCFC performed two months after the end of SOTI. None of the controls passed the provocation test after 10 months, though they were not again subjected to oral challenge after 22 or 24 months -except one subject with specific IgE levels of < 2 kU/l, who failed to pass the test. After 30 to 36 months of follow-up, those patients who had acquired tolerance were seen to retain tolerance. This study suggests that approximately one-quarter of all children with egg allergy achieve tolerance after two years of SOTI -though the absence of provocation testing after two years in the control group may complicate interpretation of the data -particularly in view of the high degree of spontaneous tolerance registered among the controls [147]. In another study, after SOTI and 5 years of maintenance therapy with 4000 mg of peanut, 50% of the patients passed oral provocation testing and were able to incorporate peanut to their diet without restrictions [148]. Biological therapies associated to SOTI In the course of the induction of oral tolerance to foods, patients may experience serious adverse effects (e.g., anaphylaxis) or problems of lesser magnitude but which preclude desensitization in 10-20% of the cases. This raises doubts not only about the safety of such techniques but also as regards their efficacy in application to patients with antecedents of food-induced anaphylactic reactions, which are precisely the individuals that could benefit most from desensitization or even tolerance. For these reasons, different authors have recommended the use of a protective "umbrella" during the initial phases, in which IgE-mediated allergic reactions are most frequent, with a view to avoiding at least the most serious incidents. 
Omalizumab The availability of a humanized anti-IgE monoclonal antibody marked as omalizumab (Xolair®, Genentech / Novartis) has allowed its use to prevent adverse effects -particularly anaphylaxis -in those patients who because of their degree of sensitization or seriousness of previous adverse reactions are at particularly high risk. In addition to improving patient safety, such preventive treatment would result in improved efficacy, since it would allow us to reach doses sufficient to ensure complete desensitization and possible subsequent tolerance [149]. On the other hand, under this type of protection against anaphylaxis, we could shorten the escalation or up-dosing period and even reach doses higher than those previously used. Omalizumab is a recombinant anti-IgE monoclonal antibody (anti-IgE mAb) with a molecular weight of 150 kDa; 95% of the antibody is derived from human kappa IgG1, to which certain murine complementary determinant regions are coupled. These in turn bind selectively and with high affinity to the CHε3 domain of the Fc of IgE, preventing binding of this domain to the high-affinity IgE receptors (Fc ε RI) of mast cells and basophils -thus inhibiting the release of mediators by these cells through Fab binding to the antigen. Binding to the low-affinity receptors (Fc ε RII) of dendritic cells, T cells, eosinophils and other cells related to allergic inflammation is also inhibited. The absence of binding to these receptors also down-regulates the expression of IgE receptors (Fc ε RI) on the part of mast cells and basophils, which is dependent upon the levels of IgE. On the other hand, the antigen-presenting cells (APCs), i.e., dendritic cells, also reduce their activity [150], and the formation of Th2 lymphocytes is consequently not stimulated. The basophils also experience a change in activity, paradoxically increasing their sensitivity to the allergen, but maintaining lowered activity in the presence of a specific IgE / total IgE ratio of < 4% [151,152]. The circulating anti-IgE/IgE complexes do not activate complement, and by keeping the antigen-binding fraction free, are able to capture antigens from the bloodstream -preventing them from reaching the specific IgE already bound to the cells. One week after the start of treatment with anti-IgE mAb, basophil FcεRI expression is strongly suppressed, while mast cell FcεRI expression is suppressed after 10 weeks [153]. This rapid basophil suppression, together with the clinical improvement, reflects the importance of these cells [154,155]. After the first hour of treatment with omalizumab, the free IgE titers in blood decrease linearly with respect to the dose, a maximum effect being observed within 6 days, when more than 96% of the IgE levels are cleared from plasma -though total IgE increases because the half-life of the IgE-Anti-IgE complex is longer than that of IgE. The half-life of omalizumab is about 3-4 weeks [156]. Administration regimen and dose The dose is established according to the instructions of the manufacturer in relation to the total IgE titers and patient body weight. In this regard, the minimum dose is 0.016 IU/kg/IgE (IU/ml)/4 weeks, in fractionated subcutaneous doses if needed [157]. It is estimated that 9 weeks are needed to reach the maximum effect, reduction of IgE and a decrease in the expression of its receptors. 
A clinical trial involving asthmatic patients found the clinical response to manifest after 16 weeks of therapy in most patients (158], though a study in children and adolescents found the maximum effect to manifest after four weeks of treatment (159,160]. Differences in criterion and patient population may account for this discrepancy. Most studies establish a minimum treatment period of 8 weeks prior to the start of induction therapy. The posterior coverage period varies, but corresponds at least to the interval required to reach the maximum maintenance dose. Treatment cessation has been abrupt, and the protective activity is known to cease completely within about three half-lives (some 9-12 weeks). Stepwise cessation over time and/or dose could result in a prolongation of the protective action without incurring in major costs increments. Clinical applications Soon after the marketing of omalizumab, the use of these anti-IgE antibodies in food allergy was considered [161]. Data evidencing its benefit referred to food tolerance were obtained in patients who received the drug for asthma control, and who were seen to be able to consume a larger amount of foods to which they were known to be allergic. Indeed, the patients were able to start consuming some foods which they were previously unable to consume even in very small amounts. The usefulness of omalizumab in avoiding IgE-mediated allergic reactions other than allergic asthma, such as for example food allergy, has been evidenced in different studies [162,163]. The drug has been shown to be effective in raising the oral provocation sensitivity threshold among patients with peanut allergy, when used in monotherapy [155,164,165]. The protective activity of omalizumab has been confirmed in immunotherapy for allergic rhinitis [166,167] and asthma [168]. Its utilization in rush immunotherapy [169], involving an increased frequency of systemic adverse reactions, afforded increased protection, including protection against anaphylaxis, when administered during the 9 weeks prior to immunotherapy and then for 12 months concomitant to immunotherapy. In this respect, omalizumab was seen to be superior to the use of antihistamines as premedication. Likewise, increased efficacy of immunotherapy has been documented in patients receiving anti-IgE mAb. Clinical trials with omalizumab and SOTI The above considerations have led to the use of omalizumab pretreatment in food tolerance induction protocols [170]. To date, only non-controlled double-blind pilot studies involving few patients are available, though several randomized, double-blind, placebo-controlled trials are currently underway. The first published study to report a possible role for omalizumab administered together with oral immunotherapy in allergic patients [171] assessed the usefulness of the drug in inducing tolerance to cow's milk in the context of a phase I pilot trial involving a small patient sample. Evaluation of the immunological changes revealed the inhibition of cutaneous mast cells and peripheral blood basophils in a non-specific allergen manner during therapy with omalizumab, and in an antigen-specific manner after completing the milk desensitization protocol [172]. Thirteen patients with peanut allergy confirmed by DBPCFC participated in a pilot trial involving pretreatment with omalizumab during 12 weeks, after which the drug was continued in combination with the immunotherapy up-dosing phase for another 8 weeks [173]. 
This made it possible to increase the initial peanut dose without major side effects, and to shorten the weekly up-dosing phase. With the dose of the first day (992 mg of peanut flour, equivalent to about 2 peanuts), the patients could be protected against anaphylactic reactions caused by accidental ingestion of the allergen. Within 8 weeks, the maintenance dose of 4000 mg was reached in 12 out of 13 patients, with tolerance after 30-32 weeks of 8000 mg as evidenced by DBPCFC -this dose being 160-400 times greater than the dose causing symptoms at first DBPCFC testing. Fifty-four percent of the patients (7 out of 13 individuals) suffered no adverse reactions during the first rush phase, while the rest experienced grade 1 effects requiring antihistamine use in only two patients. During the weekly up-dosing phase, 49 adverse effects were documented, of which 97% corresponded to grade 1 and none to grade 3. During maintenance therapy, without the administration of omalizumab, a total of 17 adverse effects were recorded, two of which corresponded to grade 3. Those patients that experienced adverse effects after suspending omalizumab had higher specific IgE titers both at the end and at the start, as well as a larger prick test size. The prolongation of omalizumab in patients of this kind was thus proposed. Another non-controlled phase I study used omalizumab during 16 weeks, including 8 weeks as pretreatment, in the induction of tolerance to different foods in combination with oral immunotherapy [174]. This management strategy allowed rapid desensitization using higher starting doses than those used in another trial carried out by the same authors [175], involving up to 5 foods at once, and with no grade 2 (moderate) or grade 3 (severe) symptoms during the up-dosing phase. Adrenalin proved necessary in only one case, during the maintenance phase (representing 0.01% of the administered maintenance doses). Side effects of omalizumab These antibodies have been reported to cause side effects [156,157] -the most important being local inflammatory reactions. Anaphylaxis has been reported in 0.2% of the patients. It has been suggested that such treatment should be administered in an appropriate healthcare setting, with an adequate period of observation after administration (2 hours on the first occasion and half an hour with the subsequent doses), with the availability of preloaded adrenalin [176,177]. A recent study has observed no increased risk of tumors associated to long-term treatment [178]. In contrast, the risk of parasitic infestations appears to increase; treatment in high-prevalence areas therefore should be restricted. Another very important aspect to be taken into account is the cost-effectiveness ratio, with a view to ascertaining whether the treatment is acceptable from the healthcare and insurance perspectives. No such data referred to the specific therapeutic indication of food allergy have yet been obtained, however. 
Further clinical data are currently needed, involving double-blind, placebo-controlled trials, as well as cost-effectiveness analyses, in order to establish the recommendations for use in concrete patient groups as treatment in combination with SOTI, such as for example: • Patients at particularly high risk due to increased sensitivity (usually associated to increased clinical reactivity) and/or who have suffered serious reactions to the food allergen • Patients unable to reach levels considered necessary to ensure desensitization • Protocols involving a rush phase and rapid up-dosing • Patients undergoing desensitization to several foods at the same time Interferon-γ and SOTI Few data are available on the usefulness of interferon-γ in combination with SOTI, though the preliminary results are encouraging [179]. Conclusion Immunoglobulin E-mediated food allergy in high risk patients or in individuals with a poor prognosis in terms of tolerance may benefit from new immunotherapeutic techniques such as SOTI. The advantages of SOTI are a great decrease in the risk of serious allergic reactions in patients with particularly severe food allergy, and the possibility of introducing such foods in the patient diet -with the resulting improvement in quality of life. Further studies are needed to better characterize those patients most amenable to effective SOTI, establish the required duration of therapy, define the immunological markers for assessing the course of treatment, the role of associated biological therapies and draft safe and effective consensus-based protocols and guides, before transferring desensitization to the general clinical practice setting. There is evidence that this new approach is changing the management paradigm in food allergy, and in our opinion, like other authors [180], possibly it´s time for the practice of SOTI in medical centers with medical staff trained and under secure supervision of his risks. Author details José Manuel Lucas * , Ana Moreno-Salvador and Luis García-Marcos *Address all correspondence to: josem.lucas@carm.es Pediatric Clinical Immunology and Allergy Unit, "Virgen de la Arrixaca" University Children's Hospital, University of Murcia, Murcia, Spain
2018-03-02T18:35:43.079Z
2015-04-22T00:00:00.000
{ "year": 2015, "sha1": "0c676806c29534591c5b51091fa54394bb724217", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/47608", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "637abf15336723ff16abdd5e6009c8b06a103550", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
244922135
pes2o/s2orc
v3-fos-license
Application Value of Flexible Endoscopic Examination of Swallowing in Acute Stoke Patients With Dysphagia Background: The aim was to study the application value of flexible endoscopic examination of swallowing (FEES) for the aspiration screening, the diagnosis of dysphagia and evaluation of the therapeutic effect in acute stoke patients with dysphagia. Methods: A total of 525 patients with acute stoke who were hospitalized from October 2015 to January 2021 in the Rehabilitation Medicine Department of our hospital underwent FEES for analyzing the characteristic performance. Twenty-one cases of them were examined by video fluoroscopic swallow study and compared with the results of FEES for evaluating the reliability of the FEES, the reliability of diagnosis of dysphagia, and the consistency of the 2 methods. The effect of rehabilitation was evaluated by comparing the FEES test results before and after treatment. Results: In 525 patients, the FEES revealed 378 cases of aspiration (139 cases were silent aspiration), showing a higher detection rate than water swallow test. Patients with potential cricopharyngeus achalasia got the same results through both of examinations. FEES can provide more positive indicators, guide clinical rehabilitation treatment and objectively assess the effect of rehabilitation. Conclusions: Acute stoke patients with dysphagia have characteristic pharyngeal and laryngeal performance. FEES is simple to operate and has high application value in the diagnosis and treatment of dysphagia. S troke, as a major threat of the health of Chinese people, is a disease characterized by high morbidity, high disability, and high mortality. The main reason for the high mortality rate of stroke patients lies in the nervous system and internal medical complications accompanying stroke. Dysphagia is one of the most common complications of stroke patients, which increases the risk of pulmonary infection and hinders food intake, and it plays an important role in the outcome of stroke. Since the symptoms of patients after stroke are different and complex, how to diagnose the dysphagia accurately and conveniently is very important. Up to now, video fluoroscopic swallow study (VFSS) is the gold standard of swallowing examination, which has been used in clinic for a long time. However, because of the need to transport patients to a special examination site, the exposure to radiation during the examination and the inaccurate evaluation of small retention of laryngopharynx and intralaryngeal leakage caused by partial overlap of axial projection of laryngopharynx and laryngeal cavity, it is not easy to be accepted by patients and be promoted by doctors. Developed in recent years, flexible endoscopic examination of swallowing (FEES) is able to improve the detection rate of dysphygia and to directly observe aspiration, which makes up for the deficiency of VFSS, and therefore the 2 methods have a significant complementary relationship. In view of this, FEES were performed for stroke patients hospitalized in the Rehabilitation Department of our hospital from October 2015 to January 2021. The clinical data and diagnosis experience are summarized as follows: Patients Between October 2015 and January 2021, 525 acute stroke patients (327 men, 198 women) with dysphagia were screened through the repetitive saliva swallowing test and the water swallow test after hospitalized in the Rehabilitation Department. And FEES was performed within 10 to 40 days after the onset of stroke. 
(Eligibility criteria included a confirmed diagnosis of acute stroke, dysphagia after screening and informed consent. Patients with critical condition, vital organ failure or cognitive impairment were excluded.) Meanwhile, 21 cases with suspected dysfuncion of cricopharyngeal muscle received VFSS. Examination There is no need to fast for solids and liquids before the examination. When the patient was in a sitting or semisitting position, the HD electronic nasopharyngoscope was routinely introduced through the nasal cavity and fixed after entering the nasopharynx, oropharynx, laryngopharynx, and larynx. The patient was instructed to orally take in 3 kinds of food in the form of dilute liquid, dilute thick and paste (thickening agent was prepared in a certain proportion), mostly starting from the paste and from small to large amounts (1, 3, 5 mL). Besides, the other 2 contrast agents were decided whether to use according to the patient's performance during examination. FEES Before eating: endoscope can visually observe the function of the patient's pharyngeal and laryngeal structure during breathing, breath-holding, coughing, pronunciation and swallowing. Suction the secretions and turn the head of the HD video rhinolaryngoscope. Ask the patient to carry on eupnea and pronounce the "E" sound. Carefully observe the laryngeal performance, saliva retention, and the presence of saliva spillage into larynx. Then ask the patient to swallow in order to observe the speed of swallow initiation, cough and nasopharyngeal closure. Use the lens to gently touch the tongue base and posterior pharyngeal wall to observe whether the patient has cough, nausea and other reflexes and to assess whether there is sensory weakness or loss. Oral-preparatory stage: whether food bolus slips into the throat prematurely during chewing, that is, preswallowing spillage. Oral-propulsive stage: whether food bolus can be squeezed into the pharyngeal cavity smoothly and whether the swallow initiated timely and rapidly. Pharyngeal stage: with the lens located in the nasopharynx above the level of soft palate, the clinician can observe whether the nasopharynx is effectively and completely closed off by the soft palate elevation. Esophageal stage: after the pharyngeal stage, food bolus is propelled into the esophagus and the pharynx and larynx are reopened for imaging. This allows the clinician to observe if epiglottis has difficulty in reflection or reduction, the amount and location of food residues in laryngopharynx (especially in vallecula epiglottica and pyriform sinus) and the presence of staining as well as its location in the larynx cavity. Only supraglottic staining is called leakage and subglottic staining is called aspiration. Besides, it is called overt aspiration if accompanied with cough reflex, and if not, it is called a silent aspiration. If laryngopharynx residue is significant, the clinician can ask patients to swallow repeatedly to observe the removal of food bolus in the pharynx and the overflow of food from pyriform sinus into larynx, that is, postswallowing spillage. If there is a large amount of food remains in pyriform sinus after swallowing, repeated swallowing is ineffective and even need to spit it out through the mouth, cricopharyngeal achalasia or esophageal obstruction cannot be excluded. 
By observing the leakage, retention, aspiration, and pharyngeal clearance of oropharyngeal secretions or food of different consistencies in the process of swallowing, including oral-preparatory stage, oral-propulsive stage, and esophageal stage (the pharyngeal stage cannot be directly observed because the endoscope fails to perform image when the pharyngeal cavity is filled with food), FEES can assess and predict the swallowing function of each stage including pharyngeal stage, therefore it has high accuracy and clinical guiding significance. Preparation of contrast agents: by using 60% barium sulfate suspension and food thickening agent, food of 3 different consistencies, including dilute liquid, dilute thick and paste, were prepared according to a certain proportion. Examination: patients took a sitting position and kept their head naturally straight. Then they swallowed three kinds of barium sulfate suspension with different consistencies in turn, starting from paste and from small to large amounts (1, 3, 5 mL). The other 2 contrast agents were decided whether to use according to the patient's performance during examination. 1 Clinical manifestations such as aspiration, contrast agent residues and leakage were observed during swallowing in the anteropausal and lateral positions. Once the above clinical manifestations occurred during the examination, the clinician would stop the examination in time and help the patient remove the contrast agent. The whole procedure was videotaped. The preparation of contrast agent and specific steps referred to the methods in the relevant literature, 2 and the degree of aspiration and dysphagia was estimated jointly by a trained rehabilitation physician and a radiologist. CRITERIA FOR RESULTS Observing infraglottic staining by FEES is confirmed as aspiration, which can be classified into 2 categories. Overt aspiration refers to the one accompanied with cough reflex, and silent aspiration means an aspiration without cough. Meanwhile, aspiration can be identified by entry of barium inferior below vocal cords observed by VFSS. The absence of cough or other clinical symptoms 1 minutes after aspiration is considered as silent aspiration, otherwise it is considered as overt aspiration. The degree of dysphagia can be assessed respectively by analyzing the video of FEES and VFSS according to the dysphagia severity scale of VFSS (Table 1). The swallowing function is assessed on the basis of the score: 10 is basically normal, 9-7 is mildly abnormal, 6-2 is severe, and 2-0 is extremely severe. In this study, a score <10 was considered positive for dysphagia and a score = 10 was considered negative for dysphagia (Table 1). RESULTS The presence and proportion of positive laryngeal signs of 525 patients who underwent FEES are described in Table 2. A total of 378 cases of aspiration were detected by FEES (139 cases were silent aspiration), while 267 cases were suspected aspiration in water swallow test. Therefore, FEES is more sensitive to aspiration, especially to silent aspiration. FEES and VFSS have their own advantages and limitations ( Table 3). The results for grading the severity of dysphagia are consistent, but VFSS is able to assess the esophageal stage under direct vision and to determine the degree of cyclopharyngeal muscle opening and esophageal peristaltic velocity. FEES were rechecked 1 month after rehabilitation treatment and the result showed that all patients were improved to varying degrees compared with when they were just admitted ( Table 4). 
FEES can provide more positive indicators and sensitively detect the efficacy of rehabilitation treatment. DISCUSSION The incidence of dysphagia is high after acute stroke, with about 51% to 73% of stroke patients reported to develop dysphagia. 4 Poole et al 5 counted the incidence of dysphagia in 128 patients with first stroke and showed that the detection rates were 51% in clinical evaluation and 64% in flexible endoscopic examination. Therefore, cooperating with flexible endoscopic examination is beneficial to improve the detection rate of dysphagia. If dysphagia after stroke occurs in oropharyngeal stage, it will not only cause malnutrition, but also cause aspiration pneumonia in severe cases, leading to hospitalization and even death of patients. 6 So it is vital to understand the risk of dysphagia. 7 Stroke patients should be routinely screened for dysphagia before eating and drinking. 8 In clinical practice, medical staff have been constantly pursuing simple and feasible screening methods. 9 FEES is able to confirm retention, residue, leakage, aspiration, and other abnormal signs in the process of swallowing under direct vision, and to precisely assess the degree and position of dysphagia. 10 St John and Berger 11 pointed out that the positive prediction rate of aspiration risk was 76% and the negative prediction rate was 100% in the endoscopic assessment of dysphagia in pharyngeal stage. Many scholars 10 claim that FEES fails to assess oral and esophageal stage of swallowing. However, we found that the situation of these 2 stages can be inferred by spillage, residual quantity and its site, though the food transportation cannot be observed under direct vision. And the results of this method were consistent with that of VFSS. Besides, FEES has significant advantages in terms of pharyngeal sensory deficits, 12 space-occupying lesion in pharynx and larynx, vocal cord paralysis, clonus of lateral pharyngeal wall, minor leakage, and postswallowing spillage, etc. There were consistencies between the results of FEES and VFSS (eg, oral disorder with inability of squeezing food into pharynx, delayed swallow initiation, swallow initiation disorder which requires the assistance of external manipulation, retention, aspiration, etc.). These indicators serve as the main basis for assessment, therefore FEES and VFSS both can evaluate the degree and position of dysphagia precisely. Twenty-one patients with potential esophageal dysphagia screened by FEES were also confirmed by VFSS, with 12 cases of cricopharyngeal achalasia and 9 cases of weakened esophageal peristalsis. The results were highly consistent, but VFSS had irreplaceable advantages in identifying the position of esophageal dysphagia. VFSS, as the gold standard for the assessment of dysphagia, has been widely recognized for its accuracy, whereas FEES can also draw almost the same results in a simpler and more feasible way. Besides, FEES is of great significance for the assessment, rehabilitation and food selection of dysphagia because of its higher sensitivity to mild and potential dysphagia and its ability to provide more comprehensive information, therefore it is expected to become an important method for objective assessment of swallowing function. The FEES reviewed 1 month after the patient's rehabilitation showed that it could detect subtle changes in swallowing process sensitively. 
For example, pharyngeal and laryngeal sensation was restored at different degrees, vocal cord paralysis was improved in a few patients and swallow initiation also showed an improvement of varying degrees. However, postswallowing spillage and penetration were slightly deteriorated with the improvement of swallow initiation, which was considered to be associated with the increased amount of food swallowed, the enhanced force of swallowing and the incomplete closure of the laryngeal cavity. Mild aspiration and no aspiration can easily and accurately be distinguished by FEES. In conclusion, FEES has consistency with VFSS. FEES can be used as the main clinical assessment method for dysphagia because of its advantages of convenient operation, noninjury, and low price. Combined with VFSS, it can comprehensively assess dysphagia, provide reference for rehabilitation treatment and evaluate its therapeutic effect. FEES is expected to be the first choice of examination for dysphagia in clinical practice, while its reliability needs to be further compared with VFSS. Currently, FEES can be repeatedly performed in the process of rehabilitation treatment according to the patient's condition in order to ensure the pertinence and effectiveness of treatment as well as provide convenient and feasible objective assessment for clinical rehabilitation.
2021-12-08T06:17:04.104Z
2021-12-06T00:00:00.000
{ "year": 2021, "sha1": "e036852eac6c2e32630b69e11bb54f0f2eda6751", "oa_license": "CCBYNCND", "oa_url": "https://journals.lww.com/theneurologist/Abstract/9000/Application_Value_of_Flexible_Endoscopic.99960.aspx", "oa_status": "HYBRID", "pdf_src": "WoltersKluwer", "pdf_hash": "16322518e7722045a580fa6fc4ecc7806e609b3a", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214435272
pes2o/s2orc
v3-fos-license
Quantitative analyses reveal extracellular dynamics of Wnt ligands in Xenopus embryos The mechanism of intercellular transport of Wnt ligands is still a matter of debate. To better understand this issue, we examined the distribution and dynamics of Wnt8 in Xenopus embryos. While Venus-tagged Wnt8 was found on the surfaces of cells close to Wnt-producing cells, we also detected its dispersal over distances of 15 cell diameters. A combination of fluorescence correlation spectroscopy and quantitative imaging suggested that only a small proportion of Wnt8 ligands diffuses freely, whereas most Wnt8 molecules are bound to cell surfaces. Fluorescence decay after photoconversion showed that Wnt8 ligands bound on cell surfaces decrease exponentially, suggesting a dynamic exchange of bound forms of Wnt ligands. Mathematical modeling based on this exchange recapitulates a graded distribution of bound, but not free, Wnt ligands. Based on these results, we propose that Wnt distribution in tissues is controlled by a dynamic exchange of its abundant bound and rare free populations. Introduction The Wnt family of secreted signaling proteins has diverse roles in animal development, stem cell systems, and carcinogenesis (Clevers et al., 2014;Loh et al., 2016;Nusse and Clevers, 2017). It has been generally accepted that in the extracellular space, morphogenic Wnt ligands form a concentration gradient by dispersal (Clevers et al., 2014;Kiecker and Niehrs, 2001;Müller et al., 2013;Smith, 2009;Strigini and Cohen, 2000;Tabata and Takei, 2004;Yan and Lin, 2009;Zecca et al., 1996;Zhu and Scott, 2004). In contrast to this classical view, evidence also suggests dispersal-independent functions of Wnt ligands. For instance, a membrane-tethered form of Wingless (Wg) can recapitulate an almost normal pattern of Drosophila wings, suggesting that dispersal of Wg is dispensable for patterning (Alexandre et al., 2014). This dispersal-independent patterning can be explained by gradual attenuation of Wg expression in distally localized cells in which Wg was formerly expressed. However, it remains unclear to what extent dispersal-dependent and/or -independent mechanisms contribute to the graded distribution of Wnt proteins in tissue patterning. Visualization of Wnt ligands is essential to understand their distributions. In the wing disc of Drosophila, Wg proteins are widely distributed from wing margin cells, where Wg is expressed (Strigini and Cohen, 2000;Zecca et al., 1996). Furthermore, long-range dispersal of Wg was evidenced by an experiment in which Wg was captured by distally expressed Frizzled2, a Wg receptor (Chaudhary et al., 2019). Similarly, endogenous Wnt ligands tagged with fluorescent proteins showed long-range distributions in C. elegans (Pani and Goldstein, 2018). In addition to these observations in invertebrates, we found that endogenous Wnt8 ligands disperse far from their source cells in Xenopus embryos (Mii et al., 2017). On the other hand, mouse Wnt3 accumulates within a few cell diameters of its source cells in the microenvironment of the intestine (Farin et al., 2016). These studies show that Wnt ligands apparently disperse in tissues and embryos, although the dispersal range varies. Importantly, in many of these studies, Wnt ligands accumulate locally on cell surfaces, showing punctate distributional patterns (Pani and Goldstein, 2018;Strigini and Cohen, 2000;Zecca et al., 1996). 
Furthermore, we demonstrated that Wnt8 and Frzb, a secreted Wnt inhibitor, accumulate separately and locally on cell surfaces in Xenopus embryos (Mii et al., 2017). However, these punctate accumulations on cell surfaces, largely ignored in the literature in the context of Wnt gradient formation, raise the question of whether such accumulations contribute to formation of concentration gradients in tissues and embryos. Studies in Drosophila wing disc have shown that cell surface scaffolds, such as heparan sulfate (HS) proteoglycans (HSPGs), are required for both distribution and delivery of morphogens, including Wg, Hedgehog (Hh), and Decapentaplegic (Dpp) (Franch-Marro et al., 2005;Lin, 2004;Yan and Lin, 2009). From these studies, the 'restricted diffusion' model, in which morphogens are transferred extracellularly by interacting with cell surface scaffolds, has been proposed (Yan and Lin, 2009). In this model, the movement of each morphogen molecule is constrained in a 'bucket brigade' fashion by interactions with cell surface scaffolds. As a result of continuous interactions, morphogen molecules are slowly transferred (Han et al., 2005;Yan and Lin, 2009 #152;Kerszberg and Wolpert, 1998;Takei et al., 2004). However, it seems difficult to explain local accumulations of Wnt proteins by the restricted diffusion mechanism, because passive diffusion alone should result in smoothly decreasing gradients. On the other hand, we recently showed that HSPGs on cell surfaces are discretely distributed in a punctate manner, which varies with heparan sulfate (HS) modification, forming two different types of HS clusters, N-sulfo-rich and N-acetyl-rich forms (Mii et al., 2017). Notably, Wnt8 and Frzb, a secreted Frizzled-related protein (sFRP), accumulate separately on Nsulfo-rich and N-acetyl-rich HS clusters, respectively. Frzb expands the distribution and signaling range of Wnt8 by forming heterocomplexes (Mii and Taira, 2009), and Wnt8/Frzb complexes are colocalized with N-acetyl-rich HS clusters (Mii et al., 2017). N-sulfo-rich clusters are frequently internalized together with Wnt8, whereas N-acetyl-rich HS clusters tend to remain on the cell surface. This difference in stability on the cell surface may account for the short-range distribution of Wnt8 and the long-range distribution of Frzb (Mii and Taira, 2009;Mii et al., 2017) and suggests that the distribution of HS clusters should be considered in order to understand extracellular dynamics of Wnt ligands (Mii and Takada, 2020). To explain the dynamics of Wnt ligands in tissues, quantitative analyses of Wnt ligands are required. Dynamics of secreted proteins have been investigated using fluorescence recovery after photobleaching (FRAP) (Sprague and McNally, 2005;Sprague et al., 2004) and fluorescence correlation spectroscopy (FCS), although optimal ranges for diffusion coefficients differ (Hess et al., 2002;Kicheva et al., 2012;Müller et al., 2013;Fradin, 2017). For example, FRAP measurements have shown that Dpp and Wg diffuse slowly in the Drosophila wing disc with diffusion coefficients ranging from 0.05 to 0.10 mm 2 /s, suggestive of the restricted diffusion model (Kicheva et al., 2007). In contrast, FCS measurements of FGF8 in zebrafish embryos showed fast, virtually free diffusion, with a diffusion coefficient of~50 mm 2 /s (Yu et al., 2009). Furthermore, in contrast to the FRAP results, free diffusion of Dpp measured in the Drosophila wing disc using FCS yielded a diffusion coefficient of~20 mm 2 /s (Zhou et al., 2012). 
FCS is based on fixed-point scanning within a confocal volume (typically sub-femtoliter) for several seconds, while FRAP evaluates considerably larger regions of photobleaching/photoconversion, containing tens or hundreds of cells (Rogers and Schier, 2011) and spanning long time windows (typically several hours). Under these experimental conditions for FRAP, it is proposed that diffusion of secreted proteins is affected by zigzag paths of the narrow intercellular space between polygonal epithelial cells, instead of an open, unobstructed space (hindered diffusion model) (Müller et al., 2013), and/or by endocytosis, which reduces the concentration of the diffusing species in the extracellular space. Thus, we need exercise caution when comparing data derived from FRAP and from FCS analyses. In this study, we examined extracellular dynamics of Wnt8 and Frzb, both of which are involved in anteroposterior patterning of vertebrate embryos (Clevers and Nusse, 2012;Kiecker and Niehrs, 2001;MacDonald et al., 2009;Mii et al., 2017). First, we visualized their localization in Xenopus embryos by fusing them with fluorescent proteins and we examined their dispersion by capturing them in distant cells. We also examined their dispersal dynamics using FCS and fluorescence decay after photoconversion (FDAP) measurements in embryonic tissue. In particular, we refined FDAPbased analysis by focusing on a limited area at the cell boundary, which enabled us to quantify dynamics comparable to those measured by FCS. Based on these results and our previous findings, we propose a basic mathematical model to explain distribution and dispersion of secreted proteins. Results Extracellular distributions of secreted proteins depend on interactions with cell-surface molecules As we have previously shown (Mii et al., 2017), Wnt8 and Frzb fused with monomeric Venus (mV) were visualized along cell boundaries when expressed in Xenopus embryos ( Figure 1A). We note that biological activities of these proteins were not severely impaired by the fusion of mV and that the reduced activity of mV-tagged Wnt8 compared to untagged Wnt8 could possibly be due, at least in part, to differences in translation (Figure 1-figure supplement 1). In contrast, we found that only the secreted form of mV (sec-mV), which was not expected to bind specifically to the cell surface, was hardly visible along the cell boundary under the same conditions ( Figure 1A, right). Since Wnt8 and Frzb colocalize with heparan sulfate clusters on cell surfaces, we speculated that binding to cell surface proteins, like heparan sulfate proteoglycans (HSPGs), affects the distribution of Wnt8 and Frzb. To examine this possibility, we added heparin-binding (HB) peptides, consisting of 16 (ARKKAAKA) 2 (HB2) or 32 amino acids (ARKKAAKA) 4 (HB4) (Verrecchio et al., 2000; Figure 1C) to sec-mV. Addition of HB peptides significantly increased the intensity of mVenus fluorescence in the intercellular region compared to that of sec-mV. This suggests that the intercellular distribution of secreted proteins depends on interactions with docking molecules on cell surfaces. To directly examine this idea, we constructed a reconstitution system, consisting of HA-epitopetagged secreted mVenus (sec-mV-2HA) and a membrane-tethered anti-HA antibody ('tethered-anti-HA Ab') ( Figure 2A, see Figure 2-figure supplement 1 for cDNA cloning and validation of anti-HA antibody). 
This artificial protein and tethered-anti-HA Ab were expressed in separated areas in the animal cap region of Xenopus gastrulae. As with sec-mV, sec-mV-2HA was hardly visible in the intercellular space, even close to the source cells ( Figure 2B). In contrast, sec-mV-2HA was observed around tethered-anti-HA Ab-expressing cells that were traced with memRFP, even though these cells were distantly located from the source cells ( Figure 2B). Thus, interaction with cell surface proteins can affect distributions of secreted proteins. This result also indicates that diffusing proteins are not readily visible using standard confocal microscopy, unless they are trapped by cell surface proteins. In fact, quantitative analysis of artificial secreted proteins revealed a slight, but significant increase of photon counts in the intercellular region by injection of mRNA for sec-mV, compared to uninjected embryos, indicating that sec-mV actually exists in the intercellular region ( Figure 1D Populations of secreted Wnt8 and Frzb proteins disperse long distance We next examined dispersal of molecules of mV-Wnt8 and mV-Frzb. Both mV-Wnt8 and mV-Frzb accumulated locally along the cell boundary at the subapical level ( Figure 1A and B), consistent with previous observations (Mii et al., 2017), which indicated that populations of Wnt8 and Frzb in the intercellular space were bound to the cell surface at HS clusters. On the other hand, given that some molecules of mV-Wnt8 or mV-Frzb may drift away from the cell surface, these proteins would be almost undetectable with standard confocal microscopy, as exemplified by sec-mV ( Figure 1A) and tethered-anti-HA Ab ( Figure 2B). To examine such mobile proteins, we tried to capture them using 'morphotrap' located distantly from the source cells ( Figure 2C). Morphotrap is a membrane-tethered form of anti-GFP nanobody, originally devised to block dispersal of Dpp-GFP from source cells Figure 1. Extracellular distributions of Wnt8, Frzb, and artificial secreted proteins. All images presented were acquired using live-imaging with the photon counting method, which enables saturation-free imaging even with samples having a wide dynamic range. (A) Distribution of secreted proteins in the superficial layer of a living Xenopus gastrula (st. 10.5-11.5). Observed focal planes were at the subapical level, as illustrated. mRNAs for indicated mVenus (mV) fusion proteins were microinjected into a single ventral blastomere of four-or eight-cell stage embryos to observe regions adjacent to the source cells (indicated with asterisks). All images were acquired in the same condition with photon counting detection. Look-up tables (LUT) show the range of the photon counts in the images. (B) Intensity plots for mV-Wnt8 and mV-Frzb in the intercellular space. Plots along the arrows in enlarged pictures in (A) are shown. (C) Distribution of artificial secreted proteins in Xenopus embryos. The data of sec-mV is the same as in (A). sec-mV was not apparent in the intercellular space, whereas sec-mV-HB2 and sec-mV-HB4 were distributed in the intercellular space (arrowheads). SP, signal peptide; HB, heparin binding peptide. (D) Quantification of fluorescent intensities in the intercellular space. Photon counts per pixel are presented. All samples Figure 1 continued on next page (Harmansa et al., 2015). We supposed that morphotraps could be utilized to detect or visualize diffusible proteins, similar to tethered-anti-HA Ab. 
As expected, sec-mV accumulated on the surface of morphotrap-expressing cells remote from source cells ( Figure 2C). Similarly, mV-Wnt8 and mV-Frzb were trapped ( Figure 2C), evidencing the long-distance dispersal (over 15 cells/200 mm) of some of secreted mV-Wnt8 and mV-Frzb molecules. These proteins are not likely to be transferred by cellmovement-based mechanisms, including distant migration of source and morphotrap-expressing cells, because cells in the animal cap region form an epithelial sheet and are tightly packed. In addition to this dispersing population, mV-Wnt8 and mV-Frzb were also detectable in gradients from producing cells to morphotrap-expressing cells, unlike the case of sec-mV ( Figure 2C and D). These results suggest that populations of mV-tagged Wnt8 and Frzb do not associate tightly with cell surfaces, thereby potentially dispersing far from source cells. FCS analyses combined with quantitative imaging reveal cell-surfacebound and diffusing Wnt8 and Frzb proteins in the extracellular space We next attempted to quantify the populations of Wnt8 or Frzb proteins associated with cell surfaces and diffusing in the extracellular space. For this purpose, we employed fluorescence correlation spectroscopy (FCS). FCS analyzes fluctuation of fluorescence by Brownian motion of fluorescent molecules in a sub-femtoliter confocal detection volume ( Figure 3, A and B). By autocorrelation analysis ( Figure 3C), FCS can measure diffusion coefficients (D) of mobile molecules and the number of particles in the detection volume, but inference of diffusion coefficients depends on mobile molecules (Hess et al., 2002;Fradin, 2017). FCS analyses were performed by injecting the same doses of mV-Wnt8 and sec-mV that were used in the experiments shown in Figure 1 (250 pg mRNA/embryo) to consider the relationship between photon counting from live-imaging and NoP from FCS. Furthermore, to measure the dynamics of mV-Wnt8 and mV-Frzb at a concentration equivalent to the endogenous concentration, a decreased amount of RNA was also injected (20 pg mRNA/embryo, see Figure 1-figure supplement 1A). To analyze the data obtained by FCS measurements, we compared the suitability of one-component and two-component diffusion models using the Akaike information criterion (AIC) (Tsutsumi et al., 2016). AIC supported fitting with the two-component model, comprising fast and slow diffusing components (Figure 3-figure supplement 1A). Consistent with predictions, the result indicated that the number of particles (NoP) of mV-Wnt8 (250 pg/embryo) was significantly higher than that of mV-Wnt8 (20 pg/embryo, endogenous-equivalent level; Figure 3D Figure 3D), because theoretical, as well as reported D values of freely diffusing proteins of similar size, range from 10 to 100 mm 2 /s (Pack et al., 2006;Yu et al., 2009;Zhou et al., 2012). Importantly, even at the endogenous-equivalent level, mV-Wnt8 and mV-Frzb show freely diffusing populations (D fast >10 mm 2 /s, Figure 3-figure supplement 1B). We also note that the diffusion coefficient of the fast component of mV-Wnt8 (20 pg/ embryo, endogenous-equivalent level) was significantly lower than that of mV-Wnt8 (250 pg/ embryo), suggesting stronger constraints with the endogenous-equivalent expression level. Thus, we conclude that within the small volume of FCS measurements, a population of mV-Wnt8 and mV-Frzb molecules diffuses freely under physiological conditions. As shown in Figure 1D, photon counts of mV-Wnt8 were much higher than those of sec-mV. 
However, under the same conditions as in Figure 1 (250 pg mRNA/embryo), NoP of mV-Wnt8 was similar to that of sec-mV (Figure 3-figure supplement 1B) or even smaller in another set of measurements (Figure 3-figure supplement 2). Thus, molecules detected in FCS appear not to contribute to the photon counts in the confocal imaging under these conditions. We speculate that FCS show statistically significant differences (p<2e-16, pairwise comparisons using the Wilcoxon rank sum test adjusted for multiple comparison with Holm's method). Scale bars, 20 mm. Amounts of injected mRNAs (ng/embryo): mV-wnt8, mV-frzb, sp-mV, sp-mV-hb2, or sp-mV-hb4, 0.25. The online version of this article includes the following figure supplement(s) for figure 1: Otherwise, HS-bound, immobile molecules cause strong photobleaching, which results in large drift of the fluorescence intensity. In general, such a data is not suitable for analysis. Interestingly, slow components were observed not only in mV-Wnt8, but also in sec-mV ( Figure 3D). To characterize these slow components, we examined the effects of HS-chain digestion with FCS. These analyses were performed with embryos injected with 250 pg/embryo RNA for mV-Wnt8 or sec-mV, because at the endogenous-equivalent level, measured values showed a large variance, possibly reflecting heterogeneity of the extracellular space, and also signal detection was difficult for sec-mV. For this purpose, we made a membrane-tethered form of Heparinase III (HepIII-HA-GPI, also known as heparitinase I) (Hashimoto et al., 2014). HepIII-HA-GPI enables us to digest HS chains in a region of interest (Figure 3-figure supplement 3), allowing us to examine the effects of HS-digestion in the same embryos. For mV-Wnt8, NoP and the fraction of fast components, F fast was significantly increased by HepIII, suggesting release of mV-Wnt8 from HS chains ( Figure 3E and F). Thus, we suggest that HS chains contribute to the slow components of mV-Wnt8. For sec-mV, although NoP was not significantly changed by HepIII, F fast was slightly, but significantly increased by HepIII ( Figure 3G and H). Furthermore, fluorescence cross-correlation spectroscopy (FCCS) analysis indicated that sec-mV did not interact with the cell membrane (Figure 3-figure supplement 1C and D). Thus, HS-chains showed some contribution to the slow components of sec-mV even without interaction. We speculate that such a slow population of sec-mV could be explained by hindered diffusion, in which torturous diffusion results from HS chains and other ECMs, because HS chains are highly hydrophilic and well hydrated ( FDAP analyses suggest exchange of cell-surface-bound and unbound states of Wnt8 and Frzb proteins Although FCS analysis is suitable for measuring diffusing molecules, it cannot directly analyze molecules with extremely low mobility (Hess et al., 2002). To directly analyze dynamics of such molecules, we next employed fluorescence decay after photostimulation/photoconversion (FDAP) assays (Matsuda et al., 2008;Müller et al., 2012) in the intercellular space of Xenopus embryos. Since FRAP/FDAP measurements usually examine considerably larger regions (typically containing tens or hundreds of cells) than with FCS (Rogers and Schier, 2011), direct comparisons of dynamics between FRAP/FDAP and FCS may need careful consideration. 
Therefore, we restricted the area of photoconversion to a diameter of 1.66 mm and reduced the measurement time (16 s), allowing us to obtain dynamic data in the intercellular region under conditions comparable to those for FCS ( Figure 4A). We refer to this FDAP mode as 'cell-boundary FDAP.' In this analysis, we fused a photoconvertible fluorescent protein, mKikGR (mK) (Habuchi et al., 2008) to Wnt8 and Frzb (mK-Wnt8 and mK-Frzb). These fusion proteins showed distributions in embryos similar to mV-tagged proteins ( Figure 4A), and retained biological activities (Figure 4-figure supplement 1). Importantly, observed distribution patterns of mK-Wnt8 and mK-Frzb were stable for up to tens of minutes ( Figure 4A). Therefore, we assumed a steady state during the FDAP analysis (16 s). After photoconversion, red fluorescent intensity of mK-tagged proteins was measured in the same rectangular area as photoconversion ( Figure 4B). Because puncta of Wnt8 are often internalized with HS clusters (Mii et al., 2017), we excluded data in which vesicular incorporation was observed during measurement of mK-Wnt8. As an immobilized control, mK-Frzb in formaldehyde- traced with memRFP (magenta). (C) Morphotrap at a distant region from the source. The superficial layer of a Xenopus gastrula (st. 11.5) was imaged as a z-stack and its maximum intensity projection (MIP) was presented for the fluorescent images. The intercellular mVenus signal of an artificial ligand, sec-mV (green), was not detected in the vicinity of source cells (green) (left panel), but was detected around the morphotrap-expressing cells that can be traced by mCherry fluorescence (middle panels). Also, mV-Wnt8 and mV-Frzb were trapped and accumulated on distant morphotrap-expressing cells, suggesting the existence of diffusing molecules in the distant region. Source regions are indicated with cyan lines according to memBFP (tracer for mV-tagged proteins, not shown). (D) Distribution of mVenus and morphotrap. Fluorescent intensity of mVenus and mCherry (for morphotrap) was plotted from the left to the right. Scale bars, 100 mm. Amounts of injected mRNAs (ng/embryo) sp-mV-2ha, 1.0; memRFP, 0.15; ig gamma2b-gpi, 1.1; ig kappa, 0.63 (B); sec-mV, mV-wnt8, or mV-frzb (high dose), 0.25; mV-frzb (low dose), 0.063; morphotrap, 1.0; memBFP, 0.1 (C). The online version of this article includes the following figure supplement(s) for figure 2: )2 ) Given that the punctate distribution of mK-Wnt8 and -Frzb results from their binding to HS clusters (Mii et al., 2017), we considered whether a simple dissociation model (Equation 1) is suitable for curve-fitting of FDAP data. Indeed, bleaching-corrected FDAP curves of mK-Wnt8 and mK-Frzb were well fitted to this model ( Figure 4D; residuals were mostly within 5% and all within 10%) with the indicated parameters ( Figure 4E, the off-rate constant k off , and the rate of the constantly bound component C; for individual data plot, see Figure 4-figure supplement 2D). As a result, both mK-Wnt8 and mK-Frzb show large C values, indicating that the majority of these proteins can be considered immobile on the timescale of the measurement ( Figure 4E). In addition, k off of mK-Wnt8 was significantly lower than that of mK-Frzb (Figure 4-figure supplement 2D), suggesting relatively rapid dissociation of mK-Frzb from the binding site. 
This difference appears to be consistent with the FDAP spatial intensity profiles, in which photoconverted mK-Frzb, but not mK-Wnt8, accumulated in adjacent areas (Figure 4-figure supplement 2C; see also Videos 1, 2 and 3). Thus, we conclude that most mK-Wnt8 and mK-Frzb molecules are bound but can be exchanged with unbound molecules, and that the dissociation rates of mK-Wnt8 and mK-Frzb differ significantly.

Mathematically modeling diffusion and distribution of secreted proteins

Based on our quantitative imaging (Figure 1), tethered-anti-HA Ab and morphotrap experiments (Figure 2), FCS (Figure 3) and FDAP (Figure 4), we conclude that most Wnt8 and Frzb molecules are bound to cell surfaces, while small numbers of freely diffusing molecules exist in the extracellular space. Furthermore, we have already shown that Wnt8 and Frzb utilize different types of HS clusters, N-sulfo-rich and N-acetyl-rich, respectively, as cell-surface scaffolds (Mii et al., 2017). Thus, we examined whether free diffusion and binding to HS clusters can explain the extracellular distribution or gradient formation of secreted proteins, using mathematical modeling. Here, we consider two states of ligands: free and bound. The free state corresponds to the fast diffusing component in FCS, and we consider the bound component to be immobile. This model includes five dynamic processes: (i) ligand production, (ii) diffusion of free molecules in the intercellular space, (iii) binding of ligands to HS clusters on cell surfaces, (iv) release of bound molecules from HS clusters, and (v) internalization of bound molecules into cells. In one-dimensional space, the model is written as:

∂u/∂t = D ∂²u/∂x² - a(x)u + bv + g(x),   (3)

∂v/∂t = a(x)u - bv - cv,   (4)

where u and v represent the concentration of free molecules and the number of bound molecules, respectively, of a secreted protein. The symbols x and t are position and time, respectively. The symbols a(x), b, c, and g(x) represent binding, release, internalization, and production rates, respectively (Figure 5A); a(x) is equivalent to the amount of HS in HS clusters (for details, see Materials and methods). D (= 20 µm²/s) represents the diffusion coefficient of the free component in the extracellular space, which corresponds to the fast diffusing component measured by FCS (Figure 3D). Under a wide range of appropriate parameter values, the distributions of u and v converged to steady states within a few minutes. Compared to the fast diffusing component (Figure 5B), the contribution of the slow component (D = 0.50 µm²/s) to the distribution range is much smaller (Figure 5-figure supplement 1A). Hence, we mainly consider the fast component observed in FCS (Figure 3D) as the diffusing population in the model. The free component, u, quickly decreases with distance from the source, displaying a shallow continuous distribution pattern due to diffusion. In contrast, the bound component, v, shows a discrete distribution following a(x), and the level of v is much higher than that of u at any position, reflecting our conclusion that the majority of Wnt or sFRP molecules in the extracellular space are bound. Given that activation of Wnt signaling requires internalization of the ligands (Kikuchi et al., 2009; Yamamoto et al., 2006), internalization of the bound component, corresponding to the term cv in Equation 4, implies that the distribution of the bound component in this model could be equivalent to the 'actual' gradient of Wnt signaling, even though it is not diffusing.
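As a concrete illustration, the following condensed C program (the paper's released source codes are also in C) integrates the two-state model with the forward-difference scheme and the Figure 5B parameter values given in Materials and methods (D = 20 µm²/s, a_n,max = 10, b = c = 0.1, g_max = 0.2, R = L/1000, p1 = 2, p2 = 0.2, Δx = 0.1, Δt = 0.0001). It is a re-implementation sketch under the equations as reconstructed above, not the authors' Source code, and it is unoptimized (integrating to t = 100 s over a 1000 µm domain takes a while).

/* two_state_model.c - forward-difference integration of
 * du/dt = D d2u/dx2 - a(x)u + bv + g(x),  dv/dt = a(x)u - (b + c)v,
 * with no-flux (Neumann) boundaries. Compile: cc two_state_model.c */
#include <stdio.h>

#define LEN 1000.0  /* domain length L (um) */
#define DX  0.1
#define DT  0.0001  /* note: D*DT/DX^2 = 0.2, within the stability limit */
#define N   10001   /* grid points: L/DX + 1 */

/* binding rate a(x): periodic docking sites (HS clusters),
 * a(x) = a_max for p1*n <= x <= p1*n + p2 (n = 1, 2, ...), 0 elsewhere */
static double a_of_x(double x)
{
    const double p1 = 2.0, p2 = 0.2, amax = 10.0;
    if (x < p1) return 0.0;                      /* sites start at n = 1 */
    double r = x - p1 * (double)((long)(x / p1));
    return (r <= p2) ? amax : 0.0;
}

/* production g(x) = g_max for 0 <= x <= R, 0 elsewhere (R = L/1000) */
static double g_of_x(double x)
{
    return (x <= LEN / 1000.0) ? 0.2 : 0.0;
}

static double u[N], v[N], un[N];

int main(void)
{
    const double D = 20.0, b = 0.1, c = 0.1, t_end = 100.0;
    long steps = (long)(t_end / DT);

    for (long s = 0; s < steps; s++) {
        for (int i = 0; i < N; i++) {
            int im = (i == 0) ? 1 : i - 1;       /* reflect at boundaries */
            int ip = (i == N - 1) ? N - 2 : i + 1;
            double x = i * DX;
            double lap = (u[ip] - 2.0 * u[i] + u[im]) / (DX * DX);
            un[i] = u[i] + DT * (D * lap - a_of_x(x) * u[i] + b * v[i] + g_of_x(x));
            v[i] += DT * (a_of_x(x) * u[i] - (b + c) * v[i]);
        }
        for (int i = 0; i < N; i++) u[i] = un[i];
    }
    for (int i = 0; i < N; i += 100)             /* coarse profile output */
        printf("%g\t%g\t%g\n", i * DX, u[i], v[i]);
    return 0;
}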
Consistent with this, we previously demonstrated that some portion of the Wnt8 ligands accumulated on N-sulfo-rich HS clusters initiates canonical Wnt signaling by forming the signaling complex, the 'signalosome' (Mii et al., 2017). We have also shown that N-sulfo-rich, but not N-acetyl-rich, HS clusters are frequently endocytosed (Mii et al., 2017). In this model, the different internalization rates of N-acetyl-rich and N-sulfo-rich HS clusters can be reflected by varying the internalization rate of the docking sites, c. A smaller value of c results in a longer-range distribution (compare Figure 5B and C), explaining why Frzb shows a long-range distribution by binding to N-acetyl-rich HS clusters (Mii et al., 2017). We can evaluate the distribution by the decay length, λ, which represents the distribution range when the steady-state gradient is written as v(x) = v_0 exp(-x/λ) (Equation 5) (Kicheva et al., 2012; Kicheva et al., 2007). We calculated λ by curve-fitting the peak values of v to Equation 5. The value of λ with c = 0.1 or 0.01 is 6.346 or 10.79 µm, respectively (Figure 5B, C, and Figure 5-figure supplement 1B, C for normalized plots), showing that the internalization rates of HS clusters can affect distribution ranges, as observed between Wnt and sFRPs (Mii and Taira, 2009). In addition, we examined the contribution of dissociation from the bound to the diffusing state, suggested by our cell-boundary FDAP (Figure 4). Without dissociation, a shorter-range distribution of the bound component was obtained (λ = 4.504 µm, Figure 5-figure supplement 1D) than with dissociation (λ = 6.346 µm, same data as in Figure 5B). Furthermore, in Xenopus embryos, Wnt8 in the intercellular space exhibited local accumulations (Figure 1B). In our model, when the binding rate a_n,max (Equation 7 in the Materials and methods) fluctuates randomly (i.e. the amount of HS at position x varies), the bound ligand component also fluctuates (Figure 5D, blue), reproducing the local accumulation of Wnt8 and Frzb in Xenopus embryos. Even under these conditions, the free component shows a continuously decreasing gradient (Figure 5D, red), which probably corresponds to the FCS-measured, diffusing component of the FGF8 gradient in zebrafish embryos (Yu et al., 2009; measuring concentrations by FCS over a wide field is technically difficult in the larger Xenopus embryos). Thus, our mathematical model can generalize protein distributions in the extracellular space.
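For the decay lengths quoted above, the following small C sketch shows how λ can be extracted by a log-linear least-squares fit of peak values to the exponential steady-state form assumed here for Equation 5, v(x) = v_0·exp(-x/λ). The input data are synthetic placeholders, not the paper's simulation output.

/* decay_length.c - estimate lambda from v(x) = v0 * exp(-x/lambda)
 * via linear regression on log(v); input data are synthetic. */
#include <math.h>
#include <stdio.h>

static double decay_length(const double *x, const double *v, int n)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        double y = log(v[i]);            /* log-linearize the exponential */
        sx += x[i]; sy += y; sxx += x[i] * x[i]; sxy += x[i] * y;
    }
    double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return -1.0 / slope;                 /* slope = -1/lambda */
}

int main(void)
{
    double x[6], v[6];
    for (int i = 0; i < 6; i++) {        /* synthetic peaks, lambda = 6.35 um */
        x[i] = 2.0 * i;
        v[i] = exp(-x[i] / 6.35);
    }
    printf("lambda = %.3f um\n", decay_length(x, v, 6));
    return 0;
}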
[Video 1: Photoconversion of mKikGR-Wnt8 in a cell-boundary region of a Xenopus embryo. https://elifesciences.org/articles/55108#video1]

[Video 2: Photoconversion of mKikGR-Wnt8 in a cell-boundary region of a Xenopus embryo (another example). https://elifesciences.org/articles/55108#video2]

Discussion

As Wnt proteins are among the major secreted signaling molecules, the mechanisms of Wnt dispersal are crucial for understanding embryonic patterning and various other systems involving Wnt signaling (Routledge and Scholpp, 2019). Among the many Wnt proteins, Wg distribution in the Drosophila wing disc has long been investigated as a morphogen gradient (Strigini and Cohen, 2000; Zecca et al., 1996). Various genetic studies show that the extracellular distribution of Wg largely depends on HSPGs, such as the glypicans Dally and Dally-like (Baeg et al., 2004; Franch-Marro et al., 2005; Han et al., 2005). Furthermore, FRAP-based analysis suggests that the effective diffusion coefficient of Wg (0.05 µm²/s) is much smaller than that of free diffusion (>10 µm²/s) (Kicheva et al., 2007). However, such dynamics of secreted signaling proteins still remain a matter of debate (Rogers and Schier, 2011).

On the other hand, we recently found that HS chains on the cell surface are organized into clusters with varying degrees of N-sulfo modification in Xenopus embryos and HeLa cells. Furthermore, we demonstrated that endogenous Wnt8 protein visualized by immunostaining shows a punctate distribution specifically associated with N-sulfo-rich HS clusters (Mii et al., 2017). Similar punctate distributions have also been observed for Wg in Drosophila (Strigini and Cohen, 2000; van den Heuvel et al., 1989), but the significance of these distributions has not yet been explained. Therefore, to gain insight into the mechanism of Wnt distribution, we examined Wnt8 protein dynamics. Based on quantitative live-imaging techniques, we propose that Wnt8 molecules distributed among cells are mostly cell-surface-bound, while a small portion of them are diffusing. Similarly, in C. elegans, Wnt/EGL-20 puncta mostly overlap with Frizzled, and a small population of mobile/diffusing molecules has also been suggested (Pani and Goldstein, 2018). In Xenopus embryos, Frizzled may also contribute to binding Wnt ligands, because some Wnt8 puncta overlapped with Frizzled8 (Mii et al., 2017). Furthermore, a small population of diffusing Dpp has been shown in the Drosophila wing disc (Zhou et al., 2012). Importantly, it has been suggested that these populations disperse over long distances, similar to our observation of mV-Wnt8 trapped using the morphotrap (Figure 2C, D), generalizing the existence of long-dispersing populations in various model systems.

It is plausible that cell-surface-bound Wnt8 is mostly associated with HS clusters (Mii et al., 2017). The function of HSPGs in Wnt dispersal has been examined by genetic studies in Drosophila. These studies show that HSPGs are required for accumulation and transfer of Wnt ligands. Based on these results, it has been proposed that Wnt disperses by restricted diffusion, in which HSPGs transfer Wnt ligands in a bucket-brigade manner (Yan and Lin, 2009). In our FDAP assay, most photoconverted mK-Wnt8 does not diffuse laterally, even when other puncta of Wnt8 exist near the site of photoconversion (Figure 4-figure supplement 2C, left panel; Video 1). We further examined this observation with modeling (Figure 5-figure supplement 1E). Unlike the experiment, the model shows lateral dispersal of photoconverted molecules into neighboring regions over time. For this difference, we mainly consider two possibilities: (i) our imaging system may not be sufficiently sensitive to detect such a small increase, or the increase of the ligands may be obliterated by photobleaching; (ii) if the binding dynamics of mK-Wnt8 are slower than in our model, ligands may diffuse away before re-binding in neighboring regions. On the other hand, mK-Frzb showed some lateral dispersal, similar to the model.

[Video 3: Photoconversion of mKikGR-Frzb in a cell-boundary region of a Xenopus embryo. Photoconversion of mKikGR fusion proteins was performed at a cell-boundary region in the animal cap of a Xenopus gastrula (st. 10.5-11.5). mKikGR-Wnt8 (Videos 1 and 2) or mKikGR-Frzb (Video 3) was photoconverted in the region indicated with the blue box after 100 scanned frames (about 4 s), and another 400 frames were scanned for measurement. The width of the region for photoconversion and intensity measurement was 20 pixels (1.66 µm). The play speed is x1. https://elifesciences.org/articles/55108#video3]
As previously discussed, these behaviors in FRAP experiments can be classified into regimes, including 'reaction dominant' and 'effective diffusion', according to the balance among the on-rate, the off-rate, and the diffusion coefficient (Sprague and McNally, 2005; Sprague et al., 2004). Restricted diffusion can be understood as a kind of effective diffusion in which the dynamics of ligand binding to and dissociation from HSPGs are comparable to those of free diffusion. Although we did not derive binding constants, at least superficially, mK-Frzb showed effective diffusion-like behavior, whereas mK-Wnt8 showed reaction dominant-like behavior, in which free diffusion is much faster than binding/dissociation. In order to compare our data with those previously reported (Kicheva et al., 2012), we also performed curve-fitting with an effective/apparent diffusion model (Figure 4-figure supplement 3, Equation 2). As a result, the apparent diffusion coefficient D_a (µm²/s) was calculated as 0.042 and 0.059 for mKikGR-Wnt8 and mKikGR-Frzb, respectively. These values are very close to a previously reported FRAP value for Wg (0.05 µm²/s) (Kicheva et al., 2007). Thus, such small values of D_a relative to free diffusion could be interpreted as the result of interaction with cell surfaces, regardless of whether the protein of interest actually shows lateral diffusion in a bucket-brigade fashion.

We found that sec-mV is almost invisible with standard confocal microscopy (Figure 1B). Furthermore, binding to cell-surface molecules such as HSPGs or a membrane-tethered antibody was sufficient to make the distribution of artificial secreted proteins visible (Figures 1C and 2B). These findings are similar to recent demonstrations that secreted GFP can act as a synthetic morphogen with specific scaffold molecules in the Drosophila wing disc (Stapornwongkul et al., 2020) and in cultured cells (Toda et al., 2020). On the other hand, secreted GFP appears visible in some tissues, such as deep cells in early zebrafish embryos (Yu et al., 2009) and the developing zebrafish brain (Veerapathiran et al., 2020). In the zebrafish brain, secreted EGFP did not show slow components (Veerapathiran et al., 2020), which differs from our observation in the Xenopus animal cap region (Figure 3D). Together, considering the detection of diffusing molecules (Appendix), we speculate that these differences may reflect the narrowness of the extracellular space in these tissues.

Our cell-boundary FDAP suggests that cell-surface-bound and diffusing populations are probably exchangeable. Although this result can be explained by dissociation of molecules from the bound state as described above, it also seems possible that endocytosis reduces the number of photoconverted molecules (Figure 4C). However, we consider this less likely. Endocytosis of Wnt8 is possibly mediated by caveolin (Mii et al., 2017; Yamamoto et al., 2006), and we have already shown that internalized Wnt8 is detected as puncta in the cell (Mii et al., 2017). However, in the FDAP analyses in this study, we excluded observations with internalization of Wnt puncta from the curve-fitting analysis. In our mathematical model, when dissociation from the cell surface does not occur (b = 0 in Equations 3 and 4, Figure 5A), the range of the gradient (decay length, λ) was shortened from 6.35 to 4.50 µm (Figure 5-figure supplement 1D). Thus, at least in the cases we analyzed, dissociation from the bound state seems to contribute to the long-range distribution and rapid formation of the gradient.
A goal of this study is to link quantitative measurements of local protein dynamics to larger spatiotemporal patterns of extracellular protein dispersal in embryos. We hypothesized that the local dynamics of diffusion and interaction with HS chains measured by FCS and FDAP could be extrapolated to explain mechanisms of gradient formation across many cells. We mainly consider protein dispersal within a single plane, and this is exemplified in the animal cap region, since mV-Wnt8 and mV-Frzb accumulated on the proximal side (relative to the source) of morphotrap-expressing cells (Figure 2C). But when we consider dispersal of secreted proteins in embryos, other routes can be involved. For example, the BMP antagonist Chordin exhibits dispersal within the Brachet cleft, a fibronectin-rich ECM (Plouhinec et al., 2013). In addition, several other mechanisms, such as cell lineage-based dilution (Farin et al., 2016) and cytonemes/signaling filopodia (Roy et al., 2011; Stanganello et al., 2015), may contribute to the dispersal of a morphogen. We emphasize that immobilization of morphogen molecules is a prerequisite for cytoneme/filopodium-mediated transfer of signaling.

[Figure 5 legend (excerpt): This value represents a wider range than that in (A). See also Source code 2. (D) Local accumulation similar to the intercellular distribution of mV-Wnt8 and -Frzb; a_n,max is given randomly for each n as an absolute value drawn from the normal distribution. See also Source code 3. (E) Scaffolds distant from the source region; a_n,max depends on space: 10.0 for 50 ≤ x ≤ 60, otherwise 0.0. This situation is similar to tethered-anti-HA Ab (Figure 2B). See also Source code 4. (F) Ligand accumulation in front of the HS-absent region; a_n,max depends on space: 10.0 for 0 ≤ x ≤ 10 and 0 for 10 < x ≤ 1000. Values of v in (B) (v_B) are also shown with green dashed lines for comparison. Note that ligand accumulation occurs in front of the HS-absent region (10 < x). See also Source code 5. The online version of this article includes figure supplement(s) for figure 5.]

Gradient formation over long ranges has not been examined experimentally in this study. However, we attempted to understand the outcome of diffusion and binding, the basic properties of morphogens. Thus, we propose a mathematical model consisting of free and bound components of Wnt based on the observed local dynamics (Figure 5A). This model can be widely applied to secreted proteins that bind to cell surfaces, including sFRPs and other peptide growth factors. Notably, in our mathematical model, the distributions of both free and bound components converged to steady states within a few minutes, showing rather fast dynamics in the context of embryonic patterning. This characteristic could address perceived weaknesses of diffusion-based models (Müller et al., 2013), especially dilemmas related to the speed and stability of gradient formation. From this point of view, the combination of an abundant cell-surface-bound population and a minimal diffusing population would be beneficial for signaling stability and speed of pattern formation, respectively. Like tethered-anti-HA Ab (Figure 2B), atypical distributions of FGF (Shimokawa et al., 2011) and Nodal (Marjoram and Wright, 2011), in which ligands accumulate in locations distant from their sources, have been reported, although a theoretical explanation of these atypical distributions has proven elusive.
In our model, atypical distributions can be reproduced if specific scaffolds for ligands (ligand-binding proteins) are anchored on the surfaces of cells (Figure 5E). Furthermore, our model explains the puzzling localization of ligands in tissues. In mosaic analyses of the wing discs of Drosophila mutants, Hh and Dpp ligands accumulate at the edges of clones defective in HS synthesis (Yan and Lin, 2009). The distribution patterns of these ligands are explained by our model, which accounts for accumulation of ligand in front of regions lacking HS (Figure 5F). Thus, our model provides a basic framework for understanding the extracellular behavior of secreted proteins.

Materials and methods

[Key resources table not reproduced here.]

Xenopus embryo manipulation and microinjection

Unfertilized eggs of Xenopus laevis were obtained by injection of gonadotropin (ASKA Pharmaceutical). The eggs were artificially fertilized with testis homogenates and dejellied using 4% L-cysteine (adjusted to pH 7.8 with NaOH). Embryos were incubated in 1/10x Steinberg's solution at 14-17 °C and staged according to Nieuwkoop and Faber, 1967. Synthesized mRNAs were microinjected into early (2-16 cell) embryos. Amounts of injected mRNAs are described in the figure legends.

Fluorescent image acquisition

Image acquisition was performed using confocal microscopes (TCS SP8 system with an HC PL APO 10x/NA 0.40 dry objective or an HC PL APO 40x/NA 1.10 W CORR water immersion objective, Leica; or an LSM710 system with a C-Apochromat 40x/1.2 W Corr M27 water immersion objective, Zeiss). Photon counting images were acquired with a HyD detector (Leica). Detailed conditions for imaging are available upon request. mV was constructed by introducing an A206K mutation to prevent protein aggregation (Zacharias et al., 2002). For FDAP and FCS measurements, gastrula embryos were embedded on 35 mm glass-based dishes (Iwaki) in 1.5% LMP agarose gel (#16520-050; Invitrogen) made with 1/10x Steinberg's solution. For other types of live imaging, embryos were mounted in a silicone chamber made in-house, with holes 1.8 mm in diameter. Fluorescence intensity was measured using Fiji/ImageJ (NIH) or Zen 2009 (Zeiss).

Cell lines

Hybridomas (derived from mouse; 12CA5, anti-HA; 9E10, anti-Myc) were used to obtain total RNA for subsequent cloning of immunoglobulin genes. These hybridomas have been neither authenticated nor tested for mycoplasma, because no assays were performed with the hybridomas themselves. Instead, we confirmed the generation of functional anti-HA or anti-Myc IgG from the cloned genes by co-IP assay.

Immunostaining of Xenopus embryos

Immunostaining of Xenopus embryos was carried out according to a previous report (Mii et al., 2017). Briefly, embryos were fixed with MEMFA (0.1 M MOPS pH 7.4, 4 mM EGTA, 2 mM MgSO4, 3.7% formaldehyde) for 2 hr at room temperature. Fixed embryos were dehydrated with EtOH (EtOH treatment improves staining with anti-Wnt8 and anti-HS antibodies). After rehydration, embryos were washed with TBT (1x TBS, 0.2% BSA, 0.1% Triton X-100) and blocked with TBTS (TBT supplemented with 10% heat-treated [70 °C, 40 min] fetal bovine serum). The following procedures are similar for primary and secondary antibodies. Antibody was diluted with TBTS and centrifuged for 15 min at 15,000 rpm before use. Embryos were incubated with the supernatant of the antibody solution overnight at 4 °C. Then embryos were washed five times with TBT.
cDNA cloning of IgG from cultured hybridomas

Cultured hybridoma cells were harvested by centrifugation, and total RNAs were prepared using ISOGEN (Nippon Gene) according to the manufacturer's protocol. First-strand cDNA pools were synthesized using SuperScript II reverse transcriptase (Invitrogen) and random hexamer oligo DNA. These cDNA pools were used as templates for PCR to isolate cDNAs for the heavy and light chains of anti-HA and anti-Myc IgGs. Procedures for PCR cloning were as follows. IgG cDNAs for the 3' regions of the CDSs were obtained by PCR with degenerate primers (5' g-F and 5' k-F) and primers corresponding to the constant regions of Ig genes (g2b-const-R, g1-const-R, 3' k-R) (Wang et al., 2000). To obtain the complete CDSs, 5' RACE was carried out to obtain the first codons of the Ig genes, using a modified protocol in which inosines are introduced into the G-stretch of the HSPst-G10 anchor (personal communication from Dr. Min K. Park). cDNAs were synthesized with gene-specific primers (HA-heavy-R1, Myc-heavy-R1, HA-light-R1, and Myc-light-R1), tailed with poly(C) by terminal deoxynucleotidyl transferase, and subsequently double-stranded cDNAs were synthesized with the HSPst-G10 anchor. The 5' ends of the cDNAs were amplified by PCR between the HSPst adaptor and gene-specific primers (HA-heavy-R2, Myc-heavy-R2, HA-light-R2, and Myc-light-R2) using the double-stranded cDNAs as templates. Full-length CDSs were amplified using primers designed for both ends of the CDSs (HH-Bam-F, MH-Bam-F, HLbam-F, and ML-Bgl-F for the 5' ends; 3' g2b-R, 3' g1-R, and 3' k-R for the 3' ends) and the first cDNA pools. See Supplementary file 1 for all primers used for PCR cloning. Full-length cDNAs were cloned into the pCSf107mT vector (Mii and Taira, 2009). Sequence data for anti-HA IgG genes have been deposited in GenBank/DDBJ under accession codes LC522514 and LC522515.

FDAP measurements

For expression in the animal cap region of Xenopus embryos, four-cell-stage Xenopus laevis embryos were microinjected with mRNAs for mK-Wnt8 and mK-Frzb (4.0 ng/embryo) into a ventral blastomere. Injected embryos were incubated at 14 °C until the gastrula stage (st. 10.25-11.5) for subsequent confocal analysis. FDAP measurements were performed using the LSM710 system (Zeiss) with a C-Apochromat 40x, NA 1.2 water immersion objective. Time-lapse image acquisition was carried out for 20 s at 25 frames/s; after 4 s (100 frames) from the start, intercellular mK-fusion proteins were photoconverted in a small rectangular region (1.66 x 2.49 µm) with 405 nm laser irradiation. After photoconversion, images were acquired for 16 s (400 frames).
Red fluorescence intensities within the rectangular region where photoconversion was performed were analysed by curve-fitting to Equation 1 (Figure 4D).

FCS measurements

FCS measurements were carried out using a ConfoCor2 system (objective: C-Apochromat 40x, NA 1.2 water immersion) (Zeiss; Figure 3-figure supplement 2 only) according to a previous report (Pack et al., 2006), or a TCS SP8 equipped with FCS (objective: HC PL APO 63x/1.20 W motCORR CS2) (Leica). mRNAs for mV-Wnt8 and sec-mV were microinjected into four- or eight-cell-stage Xenopus embryos. Injected embryos were measured at the gastrula stage (st. 10.5-11.5). Rhodamine 6G (Sigma-Aldrich) was used to calibrate the detection volume, with a reported value of its diffusion coefficient (280 µm²/s) (Pack et al., 2006). PyCorrFit software (Müller et al., 2014) was used for curve-fitting analyses of FCS data from the Leica system. Models considering three-dimensional free diffusion with a Gaussian laser profile, including a triplet component ('T + 3D', a one-component model, or 'T + 3D + 3D', a two-component model), were used for fitting. The Akaike information criterion (AIC) was used to compare fits of the one-component and two-component models, according to a previous report (Tsutsumi et al., 2016).

Plasmid construction

pCSf107mT (Mii and Taira, 2009) was used to make most plasmid constructs for mRNA synthesis. pCSf107SPw-mT and pCSf107SPf-mT, which carry the original signal peptides of Wnt8 and Frzb, respectively, were constructed. The coding sequence (CDS) for mVenus (mV) or mKikGR (mK) was inserted into the BamHI site of pCSf107SPw-mT or pCSf107SPf-mT to construct pCSf107SPw-mV-mT, pCSf107SPf-mV-mT, pCSf107SPw-mK-mT, and pCSf107SPf-mK-mT. Constructs for SP-mV, SP-mV-HB, and SP-mV-2HA were made with pCSf107SPf-mT. pCS2+HA-IgH-TM-2FT (the heavy chain of anti-HA IgG with the transmembrane domain of a membrane-bound form of IgG heavy chain) was made by inserting the full-length CDS of the heavy chain of anti-HA IgG without the stop codon (using the EcoRI and BglII sites) and a partial CDS fragment corresponding to the IgG transmembrane domain (using the BglII and XbaI sites) into the EcoRI/XbaI sites of pCS2+mcs-2FT-T. To construct pCSf107-SP-HepIII-HA-GPI, the HepIII CDS was inserted into pCSf107-SP-mcs-4xHA-GPI.

Luciferase reporter assays

Luciferase reporter assays were carried out as previously described (Mii and Taira, 2009). Multiple comparisons were carried out with the pairwise Wilcoxon rank sum test (two-sided), in which significance levels (p-values) were adjusted by the Holm method, using R.

Mathematical modeling

Two ligand components were considered: free and bound. This model includes five dynamic processes: (i) production of ligands, (ii) diffusion of the free component in the intercellular space, (iii) binding of ligands to dotted structures ('docking sites'), such as HS clusters, on the surface of cells, (iv) release of the bound component from docking sites, and (v) internalization of the bound component into cells. The model in one-dimensional space is written as ∂u/∂t = D ∂²u/∂x² - a(x)u + bv + g(x) and ∂v/∂t = a(x)u - bv - cv, where u and v represent the amounts of the free and bound components of morphogen molecules, respectively. The symbols a(x), b, c, and g(x) represent binding, release, internalization, and production rates, respectively. D represents the diffusion coefficient of the free component in the extracellular space.
The ligand is assumed to be produced in a limited region, using the following function:

g(x) = g_max for 0 ≤ x ≤ R; 0 elsewhere.

We assumed that the binding rate a(x) depends on the position x, following the heterogeneous distribution of HS clusters on the cell surface. The following function was used for a(x):

a(x) = a_n,max for p1·n ≤ x ≤ p1·n + p2 (n = 1, 2, ...); 0 elsewhere,

where p1 and p2 are the interval and width, respectively, of the docking sites. We used the no-flux (Neumann) boundary condition at x = 0 and x = L. We calculated the model by numerical simulation. The initial distributions of u and v were set to 0 throughout the entire space. In the one-dimensional space, distributions of the free (u) and bound (v) components of secreted proteins were obtained by computer simulation, with a spatial length L = 1000 (µm). Distributions of u (red) and v (blue) are presented at time t = 100 (s), by which they had almost reached steady states. We used the forward difference method with spatial step Δx = 0.1 and temporal step Δt = 0.0001 in the numerical calculations. In Figure 5B, the parameter values are: D = 20.0, a_n,max = 10.0, b = 0.1, c = 0.1, g_max = 0.2, R = L/1000, p1 = 2, and p2 = 0.2. In the other panels, distributions under specific conditions are shown (see figure legends).

Source code 7: source code for Figure 5-figure supplement 1E. All source code files are written in C. An executable file 'a.out' will be generated by compilation. Executing 'a.out' records the amounts of u (free) and v (bound) at each position in the field in a data file 'dists_uv.dat'.

Supplementary file 1: primers used for molecular cloning of IgG cDNAs from hybridomas. See the section 'cDNA cloning of IgG from cultured hybridomas' in Materials and methods for details.

Transparent reporting form

Data availability

Sequence data for anti-HA IgG genes have been deposited in GenBank/DDBJ under accession codes LC522514 and LC522515. The following datasets were generated: Author (
LEGISLATIVE CHANGES AND PRACTICAL ASPECTS REGARDING DAY LABORERS IN ROMANIA

In this article I discuss one of the atypical forms of work, that of day laborers, the main purpose being to identify the changes made to the text of the normative act between 2011 and 2019. In order to better monitor compliance with the day laborers' law, I also briefly present the results of the inspections carried out by the Territorial Labour Inspectorate of Suceava on this subject for the year 2018, so as to identify those legal provisions that are frequently violated, intentionally or negligently, by the beneficiaries of the day laborers' activity.

1. The day workers' activity - a necessary measure to make work relationships more flexible and combat undeclared work in Romania

In order to cover the legislative vacuum in the field of occasional work and to reduce work without legal forms, Law no. 52/2011 regarding the exercise of occasional activities carried out by day laborers was adopted [5]. The day laborers' law is the most flexible employment tool for both beneficiaries and day laborers, as it eliminates bureaucratic administrative registration procedures and facilitates the start and cessation of employment relationships. The aforementioned law aims to make the employment procedures for day laborers more flexible and simpler for certain exclusively unqualified activities, in areas limited by law, but also to reduce cases of illegal work [1]. As the Labour Inspection statistics show, this normative act has also reduced the number of cases of unqualified work performed without legal forms and without supervision; in practice, undeclared work has often manifested itself through the illegal use of day labor. Day work therefore represents a derogation from the provisions of Law no. 53/2003 - the Labour Code: the forms of employment in this case are simplified, with no need for an employment contract or formalized registration procedures at the territorial labour inspectorates [6].

The author I.T. Stefanescu questions the existing opinions in the specialized literature on the legal source of the relationship between the day worker and the beneficiary - a civil contract or an individual employment contract of a particular character. He concludes that the activity carried out by the day worker is not based on a civil contract, but points out some shortcomings that the law still has, such as: the lack of a concrete legal qualification of the day-work contract, of rules regarding dismissal, of the applicability in this area of the collective labor contract to the extent that one exists, and of regulations on labor jurisdiction [2][3][4]. All these constitute, in the author's view, landmarks for future changes required to achieve adequate regulation in this area. The labor relations of day laborers are thus atypical, because their source is not an individual labor contract as such, but the agreement of the parties, derived from Law no. 52/2011.

The law in its initial form raised a number of implementation problems concerning the health and safety of day laborers, the protection of minors who carry out day work, and the areas in which unskilled work can be carried out on an occasional basis. As a result, through Law no.
277/2013, the scope of beneficiaries was extended so that day labor could be used not only by legal persons but also by authorized natural persons, entrepreneurs, and natural persons who own an individual or family enterprise. In view of the legal framework in force, which restricts the employment of personnel in public institutions, and in order to prevent circumvention of the law in this respect, public institutions were exempted from applying these provisions [7]. Another novelty is that the law created the possibility for children between the ages of 15 and 16 to perform unqualified activities on an occasional basis, but only with the consent of their parents or legal representatives and only for activities appropriate to their physical development and demonstrated skills, without violating their right to physical, mental, spiritual, moral and social development or their right to education, and without endangering their health. The agreement of the parents or legal representatives must be expressed in authentic form, specifying the activities to be carried out by the minor(s), and must be recorded by the beneficiary in the register; this amendment leads to a better individualization of the legislator's requirements regarding the possibility for children to carry out occasional unqualified activities.

The law also created the possibility of paying the day worker by electronic means, as well as of making the payment at the end of the week, but only with the agreement expressed in writing by both the day worker and the beneficiary; the obligations in the field of occupational safety and health are incumbent on the beneficiary as well as on the individual who carries out the unqualified activities. The minimum value of the negotiated gross hourly remuneration cannot fall below the hourly value of the minimum gross basic wage per country guaranteed in payment, which, according to Government Decision no. 23/2013, was 4.44 lei/hour until June 2013 and 4.74 lei/hour starting with July 1, 2013. This amendment aimed to protect a minimum level of daily income, which could not be lower than the hourly value of the minimum gross basic wage in the country guaranteed in payment. The sanctioning regime for acts committed by the beneficiary was supplemented with a prohibition on the beneficiary hiring day workers to carry out activity for the benefit of a third party, aiming to prevent abuses that could occur in the use of day labor and its diversion from the purpose intended by the law, namely that the day worker performs unqualified activities of an occasional character for the beneficiary. For occasional work, the law also introduced new contravention sanctions for beneficiaries who use day workers in activities involving unskilled work other than those expressly provided by the law.
Amendments made by GEO no. 114/2018

a) The communal management services managed directly by the local councils, such as greenhouses, green spaces and zoos, and the units subordinated to the Ministry of Youth and Sport - the county departments for sports and youth, respectively the Directorate for Sport and Youth of the Municipality of Bucharest, which organize national recreational camps, recreational camps, social camps, themed camps for children, students and young people, as well as camps for people with disabilities and camps for the Olympics - will no longer be able to use day workers for their day-to-day activity. In the same situation are the institutes, centers and stations of agricultural research and development subordinated to the Academy of Agricultural and Forestry Sciences "Gheorghe Ionescu-Șișești", which can no longer act as beneficiaries of occasional unskilled work in the fields of agriculture and forestry.

b) Units subordinated to the Ministry of Youth and Sport, for the following areas: hotels and other accommodation facilities; accommodation facilities for holidays and short periods - children's camps (organized by the Ministry of Youth and Sports, directly or through units subordinated to it); accommodation facilities for holidays and short periods - cottages; activities of sports grounds and sports clubs.

c) The Academy of Agricultural and Forestry Sciences "Gheorghe Ionescu-Șișești" and the institutes, research centers and development stations under its subordination, the State Institute for Testing and Registration of Varieties under the subordination of the Ministry of Agriculture and Rural Development, as well as the county offices of pedological and agrochemical studies, for the following fields: agriculture, hunting and related services; forestry, except for exploitation; fishing and aquaculture.

d) Public institutions authorized to carry out archaeological research, according to the existing norms in the field, respectively universities, research institutes and institutions with a museum profile, for archaeological excavations.

Another significant change made by GEO no. 114/2018 was to restrict the areas in which unskilled work of an occasional character may be rendered from 23 to only 3 areas [9]. The 3 remaining areas were: a) agriculture, hunting and related services; b) forestry, except for logging; c) fishing and aquaculture. Regarding the duration of a day worker's activity, the initial form of the law stipulated that no day worker could perform activities for the same beneficiary for a period longer than 90 days accumulated over a calendar year. Through GEO no. 114/2018, a person cannot perform day-work activities for more than 120 days during a calendar year, regardless of the number of beneficiaries, except for activities in the field of animal breeding in an extensive system through the seasonal grazing of sheep, cattle and horses, seasonal activities within the botanical gardens subordinated to accredited universities, and activities in the field of viticulture, for which the period can be 180 days during a calendar year. Also through GEO no. 114/2018, the beneficiary cannot use the same individual for more than 25 consecutive calendar days in day-work activities. If the activity requires a period longer than 25 days, the day worker can be employed on the basis of a fixed-term employment contract. A new sanction was also introduced.
Thus, the day worker who fails to comply with the provisions regarding the duration of day-work activity is sanctioned with a fine from 500 lei to 2,000 lei.

Amendments made by GEO no. 26/2019

Starting with May 1, 2019, the categories of activities that can be performed by day workers were extended to 12 areas, thus reintroducing 9 fields removed in 2018 [10]. The reasons invoked by the Government in the Explanatory Note to GEO 26/2019 are the acute shortage of personnel faced by employers, the specificity of certain activities, which involve the use of unskilled work of a seasonal, occasional and relatively short-term character, and the fact that the day worker is deprived of the rights conferred by the compulsory public system of social insurance [11]. In addition to the 3 areas established by GEO 114/2018, the new areas in which day workers can be employed are:
- activities for organizing exhibitions, fairs and congresses;
- advertising;
- activities of artistic interpretation - performances, support activities for artistic interpretation - performances, and management activities of theaters;
- breeding of semi-domestic animals and other animals;
- catering activities for events;
- landscape maintenance activities - planting, care and maintenance of parks and gardens, with the exception of private housing;
- restaurants;
- bars and other beverage-serving activities;
- activities of zoos, botanical gardens and nature reserves.

The Government's Explanatory Note invokes statistics on day work in these areas from the Labour Inspection Activity Report. For example: activities of artistic interpretation - performances, support activities for artistic interpretation - performances, and management activities of theaters - 83,455 day workers - 1,208,692 days; advertising - 75,578 day workers - 1,563,198 days; catering activities for events - 22,572 day workers - 182,374 days; restaurants - 15,889 day workers - 149,218 days; organizing exhibitions, fairs and congresses - 7,881 day workers - 51,125 days; bars and other beverage-serving activities - 2,723 day workers - 21,653 days; breeding of semi-domestic animals and other animals - 1,748 day workers - 33,062 days [11].

By GEO 26/2019 it was established that "no day worker can perform activities for the same beneficiary for a period longer than 90 days accumulated during a calendar year, except for day workers who carry out activities in the field of agriculture, the raising of animals in an extensive system through the seasonal grazing of sheep, cattle and horses, seasonal activities within the botanical gardens subordinated to accredited universities, as well as in the field of viticulture; in their case, the period may not exceed 180 days accumulated over a calendar year." This returned the rule to its initial 90-day form. Another novelty brought by GEO 26/2019 is the compulsory contribution to the social insurance pension system, set at 25% of the daily income; the calculation, payment and declaration of the social insurance contribution due to the state social insurance budget for income from day-work activity are the responsibility of the beneficiary. Day workers are not insured in the health system or in the system for accidents at work and occupational diseases [10].
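The arithmetic of the new contribution is simple; the following minimal C sketch assumes, as the text above suggests, that the 25% pension (CAS) contribution is withheld from the day worker's gross daily income and remitted by the beneficiary. The hourly rate used is a placeholder, not a statutory figure.

/* cas_contribution.c - illustrative 25% pension (CAS) arithmetic
 * under GEO no. 26/2019; wage figures are placeholders. */
#include <stdio.h>

int main(void)
{
    double hours = 8.0;              /* hours worked in a day (example) */
    double hourly_gross = 12.0;      /* lei/hour, placeholder rate */
    double gross = hours * hourly_gross;
    double cas = 0.25 * gross;       /* 25% contribution, GEO 26/2019 */
    double net = gross - cas;        /* assumed withheld from the income */

    printf("gross: %.2f lei, CAS (25%%): %.2f lei, net: %.2f lei\n",
           gross, cas, net);
    return 0;
}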
Results of the control activity carried out by the Territorial Labour Inspectorate of Suceava at the units that employ day workers (for the year 2018)

According to data provided by the Territorial Labour Inspectorate of Suceava on November 16, 2018 [13], in September 2018 the institution carried out a national campaign regarding the observance by beneficiaries of the provisions of Law no. 52/2011, regarding the exercise of occasional activities carried out by day laborers, and of the legal provisions regarding the prevention and combating of the exploitation of young people through work in these areas. The objective was to identify the beneficiaries who use day workers and to take the necessary measures against non-compliance with the legislation, as well as to increase the awareness of beneficiaries and day workers regarding the necessity of applying and observing the legal provisions, in the field of work relationships and occupational safety and health, established by Law no. 52/2011. The purpose of the campaign was also to determine the beneficiaries to comply with the obligation to set up and complete the register of day workers and to send an extract of it to the Territorial Labour Inspectorate in whose jurisdiction the company has its headquarters, within the term provided by law. The labour inspectors checked 28 beneficiaries and 157 of their day workers. Following these controls, 7 sanctions were applied. Among the deficiencies found in the field of labour relationships, the institution mentioned the following:
 day workers were not registered in the register;
 the gross hourly wage was lower than the hourly value of the minimum gross wage in the country;
 no extract from the Register of Day Workers was sent to the Territorial Labour Inspectorate of Suceava by the 5th of each month.

Conclusions and proposals

Day work is an exception from the provisions of Law no. 53/2003 - the Labour Code: the forms of employment in this case are simplified, and there is no need to conclude an employment contract, only to record the worker in the Register of Day Workers. This type of work was a response both to the urgent needs of beneficiaries of unskilled labor operating in the branches of the national economy listed by the day laborers' law, and to the financial needs of people without a particular qualification. During the application of the law, from 2011 until now, the legislation has been continuously modified regarding the quality of beneficiary: while in 2011 only legal entities could have the status of beneficiary of day workers, in 2013 the scope was extended to authorized natural persons and to natural persons who own an individual or family enterprise. The duration for which a day worker can carry out such work for the same beneficiary or for different beneficiaries during a calendar year has also been successively modified, so that, if such work is required for a longer period of time, a fixed-term employment contract is the solution. Significant changes also took place, as we have seen, regarding the fields of activity, as well as the obligation, from May 2019, to pay the 25% contribution to the social insurance budget calculated on the daily or weekly income of the day worker.
As certain deficiencies were found through controls in the field of labour relationships - such as day workers not recorded in the register, gross hourly pay below the hourly value of the gross minimum wage in the country, and failure to submit an extract of the Register of Day Workers kept by the beneficiary in written format to the labour inspectorates by the 5th of each month - compliance with the provisions of GEO 26/2019 regarding the establishment of the Electronic Register of Day Workers, starting with December 20, 2019, is urgently required. In practice, this register would be similar to REVISAL (The
Investigation on Thermal Degradation Process of Polymer Solar Cells Based on Blend of PBDTTT-C and PC70BM

The effects of thermal treatment on the photovoltaic performance of conventional and inverted polymer solar cells (PSCs) based on the blend of poly[(4,8-bis-(2-ethylhexyloxy)-benzo[1,2-b;4,5-b]dithiophene)-2,6-diyl-alt-(4-(2-ethylhexanoyl)thieno[3,4-b]thiophene)-2,6-diyl] (PBDTTT-C) and [6,6]-phenyl C70-butyric acid methyl ester (PC70BM) are investigated. Transient photoconductivity, absorption spectra, and transmission electron microscopy (TEM) images have been employed to study the thermal degradation of the inverted PSCs. The degradation is attributed to inefficient charge generation and an imbalance in charge-carrier transport, which is closely associated with the morphological evolution of the active layer with prolonged heating time.

Introduction

Bulk heterojunction (BHJ) PSCs have received much attention as candidates for sustainable solar energy converters because of their unique advantages of low cost, light weight, large area, and simple solution-based fabrication with mechanical flexibility. The power conversion efficiency (PCE) has been extensively enhanced by using low-bandgap materials, optimizing device structures, and controlling the morphology of the active layers [1][2][3][4]. Although the reported PCEs nearly meet the initial requirements for commercial applications, the poor stability of PSCs is still a major barrier to commercialization [5][6][7][8]. Improvements are necessary to enhance the stability of these devices. Related studies have suggested decay mechanisms such as electrode degradation in the presence of oxygen and moisture [9], metal-ion diffusion into the active layers from the electrodes [10], and photoactive-material degradation with oxygen and moisture upon illumination and heating of the device [11][12][13][14]. Several groups have investigated the thermal degradation of PSCs by simulating the actual working environment under sunlight. For instance, Wong et al. reported that the microscopic and nanoscopic crystallization of [6,6]-phenyl C60-butyric acid methyl ester (PC60BM) influenced the morphological stability and performance of polymer-fullerene solar cells under thermal stress [15]. However, the mechanisms of PSC degradation under thermal stress, as well as the sustained morphological changes and charge-transport kinetics, remain unclear. In this study, inverted and conventional PSCs based on the blend of PBDTTT-C and PC70BM were fabricated. The thermal stability and optoelectronic behavior of these devices were investigated via morphological analyses and transient photocurrent measurements.

Conventional PSCs were fabricated with the structure glass/ITO/PEDOT:PSS/PBDTTT-C:PC70BM/Ca/Al (Figure 1(c)). The precleaned ITO substrates were treated with UV-ozone for 20 min, then modified by spin-coating PEDOT:PSS to a thickness of around 40 nm, and baked at 150 °C for 15 minutes under ambient conditions. The substrates with PEDOT:PSS were then transferred to a glove box to fabricate the photoactive layer under experimental conditions similar to those of the inverted devices. Finally, 20 nm of Ca and 100 nm of Al were thermally deposited on the active layer as cathodes.

Device Characterizations.
The current density-voltage (J-V) measurements of the devices were performed with a computer-controlled Keithley 6430 Source Measure Unit. Photovoltaic characterization was conducted under simulated AM1.5G irradiation (100 mW/cm²) of a solar simulator XES-301S+EL-100 (SAN-EI ELECTRIC). The thermal-stability tests were carried out on a hotplate (IKA RCT basic magnetic stirrer) used as the heating source, controlling the temperature with an accuracy of ±1 °C. All electrical measurements were performed in a nitrogen-filled glove box. A UV/visible spectrophotometer (Shimadzu UV-3101) was used to characterize the absorption properties of the PBDTTT-C:PC70BM layer. TEM images were obtained using a Tecnai G2 F30 transmission electron microscope operated at 300 kV. PBDTTT-C and PC70BM spin-coated on glass substrates were collected for differential scanning calorimetry (DSC) measurements, which were performed on a NETZSCH DSC 200F3 instrument. N2 (flux about 60 mL/min) was used as the purge gas. Approximately 5 mg of each sample was sealed in perforated aluminum crucibles. The scan rate during DSC measurements was set to 5 K/min.

To measure transient photoconductivity, the samples were excited by an optical parametric oscillator (OPO) pumped by a Q-switched neodymium-doped yttrium aluminum garnet (Nd3+:YAG) laser. The OPO delivered pulses with a duration of about 5 ns, a wavelength of 460 nm, and an energy of 3 nJ per pulse at a repetition rate of 10 Hz. The carriers in the devices were generated by a short laser pulse; the transient photocurrent was then obtained by measuring the voltage drop across a 5 Ω resistor load connected in series with the solar cell, and the current traces were recorded by a Tektronix TDS 540D oscilloscope.

Results and Discussion

High temperatures can accelerate PSC degradation [16]. The Arrhenius model describes this process [17], in which the degradation constant is defined as follows:

k = A exp(-E_a / (k_B T)),   (1)

where E_a is the activation energy, k_B is the Boltzmann constant, T is the temperature (K), and A is a constant dependent on the degradation mechanisms and the experimental conditions. Typically, the degradation process is closely associated with the intrinsic properties of the materials, such as the trapping behavior of electrical carriers and structural and chemical changes of the polymers. The activation energy is a common parameter that activated processes have to overcome. Hence, the kinetic processes are strongly temperature dependent.
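To make the temperature dependence concrete, the following small C sketch evaluates the Arrhenius acceleration factor implied by Equation (1); the activation energy used is an illustrative placeholder, since the paper does not report a fitted E_a for this blend.

/* arrhenius_ratio.c - acceleration factor k(T2)/k(T1) from Equation (1):
 * k(T) = A * exp(-Ea / (kB * T)); the prefactor A cancels in the ratio.
 * Ea below is illustrative only. Compile: cc arrhenius_ratio.c -lm */
#include <math.h>
#include <stdio.h>

#define KB 8.617333e-5 /* Boltzmann constant, eV/K */

static double arrhenius_ratio(double Ea, double T1, double T2)
{
    /* how much faster degradation runs at T2 than at T1 */
    return exp(Ea / KB * (1.0 / T1 - 1.0 / T2));
}

int main(void)
{
    double Ea = 0.6; /* eV, placeholder activation energy */
    printf("70 C vs 50 C: x%.1f\n", arrhenius_ratio(Ea, 323.15, 343.15));
    printf("90 C vs 50 C: x%.1f\n", arrhenius_ratio(Ea, 323.15, 363.15));
    return 0;
}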
Hou et al. reported that the glass transition temperature of PBDTTT-C is about 400 °C to 410 °C, which is well above the operating temperature of the device [18]. In this study, the thermal behavior of PBDTTT-C, PC70BM, and their blends was investigated by DSC to determine the temperatures of phase transitions. The traces of heat flow during the annealing of PBDTTT-C, PC70BM, and their blends are illustrated in Figure 2. For pure PC70BM, an exothermic peak (197 °C) and a melting peak (204 °C) are observed in Figure 2(a). In Figure 2(c), a new distinguishable peak located at 64.3 °C is defined as the eutectic temperature of the blend of PBDTTT-C and PC70BM. This new peak corresponds to the melting temperature of bimolecular crystals formed by fullerene intercalation between the polymer side chains [19,20]. Well-ordered bimolecular crystals can cause strong charge-transfer states due to the strong coupling of the dissimilar nearest-neighbor molecules [20,21]. The reduction of bimolecular crystals would therefore affect the electronic properties of the PSCs, such as exciton dissociation, recombination, and charge transport.

Considering our DSC results and the actual working temperature of solar cells in outdoor environments (in the range of 30 °C to 80 °C), temperatures ranging from 50 °C to 90 °C were selected to investigate PSC degradation. All measurements were performed in a glove box to avoid oxygen- and water-induced degradation. Figure 3 reveals that the performance of the devices exhibits burn-in losses under thermal stress at different temperatures for 1 h. The open-circuit voltage (Voc) is stable at 0.72 V, and the short-circuit current density (Jsc) acts as the major degradation factor. The heated devices display a slight decline in Jsc compared with the control device when the temperature is below 70 °C. However, Jsc shows a steep decrease at 80 °C and 90 °C. The decrease of Jsc indicates a reduced interfacial area between the donor polymer and the acceptor fullerene after annealing [22]. Integrating the results of the DSC measurements, we selected 70 °C for the subsequent thermal-stability tests.

The J-V characteristics of the PSCs are shown in Figure 4, and the performance of the devices is summarized in Table 1. The conventional and inverted devices exhibit significantly reduced performance after heat treatment at 70 °C for 18 h. For the conventional devices, the efficiency decreases from 6.62% to 5.76%, Jsc decreases from 15.29 mA/cm² to 14.76 mA/cm², and the fill factor (FF) drops from 60.56% to 58.28%. For the inverted devices, Jsc decreases from 15.28 mA/cm² to 13.86 mA/cm², and FF increases to a maximum value of 62.64% after 2 h of heat treatment and then slightly decreases to 62.08%. The variations in these parameters result in PCE degradation from 6.74% to 6.11%. To further compare the photovoltaic performance of the conventional and inverted PSCs, the normalized PCE, FF, Voc, and Jsc are shown in Figure 5.
Jsc of the conventional devices degrades by 11%, slightly more than that of the inverted devices (9%). Although Voc and FF of both PSCs fluctuate during heating, the inverted PSCs exhibit a more stable FF value. The PCE of the inverted PSCs decreases by 10%, whereas that of the conventional devices drops to 85% of the original PCE during the 18 h thermal treatment. The better thermal stability of the inverted PSCs may be attributed to the naturally self-encapsulated structure comprising a favorable vertical phase separation in the active layer [23,24], on which the metal oxide (MoO3) and the stable metal (Ag) are deposited as the buffer layer and top electrode, respectively [25,26]. Considering that the inverted PSCs have better thermal stability than the conventional PSCs, we choose the inverted PSCs as the study object in the following discussion.

In general, Jsc is determined by the generation, transport, and extraction of charge carriers. The reduction in this parameter is larger than that of the other parameters during the thermal treatment. The extraction of photogenerated charges is low because of the reduced absorption with prolonged heating time [28]. To further study the photogenerated charges of the PSCs after thermal treatment, the corresponding absorption spectra were measured (Figure 7). The absorption intensity of the inverted PSCs decreases with increasing heating time because of the unstable morphological evolution (Figure 7(a)). This evolution leads to excessive phase separation in the active layer. The absorption maximum exhibits a redshift with increasing heating time, which indicates that a higher crystalline order and larger domain sizes are obtained during the thermal process. The charge-carrier transport becomes unbalanced because of the uneven crystalline state (Figure 6) [29,30]. TEM images of the blend films obtained before and after thermal treatment are given in Figure 8. The as-cast blend film exhibits a well-developed phase separation, with PBDTTT-C and PC70BM uniformly distributed across the network of the blend film and donor-acceptor domains of appropriate sizes (Figure 8(a)). The outlines of the phase separation between PBDTTT-C and PC70BM are very clear, as shown in Figure 8(a). Interpenetrating networks with appropriate phase sizes can provide a favorable interface for exciton dissociation and pathways for charge-carrier transport without too many chances of recombination during the transit processes [16,22]. The distribution of the donor-acceptor phases becomes more compact as the heating time is increased from 1 h to 5 h (Figures 8(b) to 8(e)). This means that the PC70BM clusters have grown to large sizes by incorporating molecules drawn from the distributed PC70BM network. Subsequently, the outlines of the domains become ambiguous. With growing numbers and sizes, the clusters touch each other and deplete the polymer matrix due to lack of space. The outlines of the phase separation between PC70BM and PBDTTT-C appear blurry. The TEM image of the blend film after 18 h of thermal treatment reveals phase separation with large domain sizes, which changes the absorption, exciton dissociation, and carrier transport.
Conclusion
In summary, the effects of heating temperature and heating time on the thermal stability of conventional and inverted PSCs based on PBDTTT-C:PC70BM in a N2 atmosphere are analyzed. The results reveal that the inverted PSCs exhibit better thermal stability than the conventional devices. The absorption of the blend film decreases with increasing heating time because of the formation of higher crystalline order and large domain sizes. Transient photoconductivity results indicate that inefficient charge generation and non-equilibrium charge-carrier transport may cause the thermal degradation of PSCs. These results provide guidance for enhancing the thermal stability of PSCs.

Figure 1: (a) Molecular structures of PBDTTT-C and PC70BM. (b) The structure of the inverted polymer solar cell. (c) The structure of the conventional polymer solar cell.
Figure 3: J-V curves of the inverted solar cells treated at different temperatures.
Figure 4: J-V curves of the (a) conventional and (b) inverted solar cells treated at 70 °C for different times.
Figure 5: Normalized PCE, FF, Voc, and Jsc of conventional (red dots) and inverted (black squares) solar cells at 70 °C for different times. All curves are normalized to the initial value.
Figure 6: (a) Decay of the transient photocurrent density, and normalized decay (inset), of the control device and the devices thermally treated at 70 °C for different times. (b) Total swept-out charge density as a function of decay time for the solar cells.
Table 1: Device parameters of the conventional and inverted devices kept at 70 °C for different times.
Low-Complexity Wideband Interference Mitigation for UWB ToA Estimation

Reliable time of arrival (ToA) estimation in dense multipath (DM) environments is a difficult task, especially when strong interference is present. The increasing number of services sharing the same spectrum comes with a demand for interference mitigation techniques. Multiple receiver elements, even in low-energy devices, allow for interference mitigation by processing coherent signals, but computational complexity has to be kept at a minimum. We propose a low-complexity, linearly constrained minimum variance (LCMV) interference mitigation approach in combination with a detection-based ToA estimator. The performance of the method within a realistic multipath and interference environment is evaluated based on measurements and simulations. A statistical analysis of the ToA estimation error is provided in terms of the mean absolute error (MAE), and the results are compared to those of a band-stop filter-based interference blocking approach. While the focus is on receivers with only two elements, an extension to multiple elements is discussed as well. Results show that the influence of strong interference can be drastically reduced, even when the interference bandwidth exceeds 60% of the signal bandwidth. Moreover, the algorithm is robust to uncertainties in the angle of arrival (AoA) of the desired signal. Based on these results, the proposed mitigation method is well suited when the interference bandwidth is large and when computational power is a critical resource.

Introduction
The location-awareness of electronic devices has become an integral element for a variety of applications. Secure access-granting is one example where reliable, accurate range estimation is of major importance. Although the superior time resolution of ultra-wideband (UWB) signals allows for high-accuracy time of arrival (ToA) estimation in line of sight (LoS) scenarios, reliable estimation is still a demanding task in harsh environments including dense multipath (DM) and interference. While ToA estimation in DM scenarios has been the subject of extensive research, e.g., [1-4], discussion of the problem of interference mitigation within the UWB ToA estimation framework is rather sparse in the existing literature. With a rising number of services sharing the same spectrum, this becomes an increasingly important issue. Frequency bands at 5.945-6.425 GHz in Europe [5] and at 5.925-7.125 GHz in the USA [6] are now available for unlicensed transmission, and they overlap to a large extent with channels 5 and 6 of the high-repetition pulse physical layer (HRP PHY) of the IEEE 802.15.4 standard [7], centered at 6.490 GHz and 6.989 GHz. Wi-Fi will arguably be the most prominent service within these frequency bands in the near future. With a maximum channel bandwidth of 160 MHz in Wi-Fi 6E and 320 MHz in Wi-Fi 7 [8], a single Wi-Fi channel occupies up to 64% of the UWB signal bandwidth of 500 MHz. Low-energy devices, such as a key fob in an access-granting system, are strictly limited in their power consumption to ensure a long life-cycle. State-of-the-art devices with one receiver use band-stop filters (BSF) to mitigate the influence of interference signals. However, a large bandwidth of the interference signal strongly limits the performance of BSFs. More sophisticated signal processing techniques are enabled by multiple coherent receivers, which have recently become available even in low-energy UWB devices.
In principle, optimal estimators, such as the maximum likelihood (ML) estimator, can be implemented, incorporating statistical models of the DM and the interference. For low-energy devices, however, the computational complexity of these estimators is generally prohibitive. Furthermore, the second-order statistics of the DM and the interference signal are often unknown in practice. We propose a combined interference mitigation and ToA estimation method of low complexity. The ToA estimation is performed by an LoS detection algorithm, which performs well under good signal-to-interference-plus-noise ratio (SINR) conditions, even in non-line of sight (NLoS) scenarios. In order to obtain the required SINR, we place a linearly constrained minimum variance (LCMV) processor before the ToA estimator. It is shown that a low-dimensional implementation of the LCMV processor, combined with the detection-based ToA estimation, obtains good results in the presence of wideband interference and is indeed of low complexity. In the presentation of the results, we will focus on the case of a 2 × 1 linear array, but the discussion will be extended to the higher-dimensional case.

Related Work and Contribution
The impact of interference on UWB systems has been the subject of research, but the vast majority of studies deal with UWB communication systems under the assumption of narrowband interference. Performance analyses of UWB communication systems in the presence of narrowband interference for different receiver types are given in [9-12], and narrowband interference mitigation schemes are presented in [13-16]. The performance of UWB ToA estimation has been analyzed in the presence of narrowband and multi-user interference [17,18] and wideband interference [19]. The mitigation of wideband interference in UWB ToA systems has not been investigated thoroughly. This might be explained by the fact that its practical relevance is only recently emerging due to the availability of the 6 GHz band for unlicensed transmission. A general approach to wideband interference mitigation can be found in the wideband beamforming literature, e.g., [20], including the LCMV processor. The general solution to the LCMV problem was derived in [21]. In [21,22], algorithms for the adaptive calculation of the optimal solution were developed, where pre-steering of the receiver array was assumed. In [23], this was generalized to the case of an arbitrary angle of arrival (AoA) without pre-steering, and the most efficient low-rank approximation for this case was found. The processor obtained for a simple set of point constraints, as in [21-23], is generally quite sensitive to deviations of the AoA and other parameters. In [24-26], the original approach was modified in various ways in order to improve the robustness with regard to parameter deviations. In this work, we assume no pre-steering of the receiver array [23], and we implement the robust method based on the probabilistic approach in [25]. Both choices are motivated by the requirement of low complexity. A good overview of additional work is given in [20]. In the literature referred to above, the analysis of the LCMV processor is limited to a few (mostly two) interference sources with independent signals and significantly separated locations. In this work, the interference signal is assumed to originate from a single source and to arrive at the UWB receiver after propagation over a DM.
This means that the interference signal impinges from many different directions, including both spatially correlated and uncorrelated components. The interference mitigation capability of the LCMV processor in such a scenario has not been investigated yet. Furthermore, the performance analysis in the LCMV literature is typically carried out for a large number of antenna elements and a large filter length. In this work, we show that a low-dimensional implementation of the LCMV processor is able to significantly reduce the performance degradation of the ToA estimation due to wideband interference. Although the LCMV approach has been used in the context of ToA estimation and localization [27-29], it was only used for the purpose of self-interference mitigation. Research on the mitigation of wideband interference based on LCMV processing in UWB ToA estimation is not known to the authors. Given the discussion above, the key contributions of this paper are briefly listed as follows:
• Adaptation of the presented signal model to the LCMV processor
• Presentation of a low-dimensional implementation of the LCMV processor in combination with a detection-based ToA estimator
• Virtual array measurements of a UWB indoor channel
• Statistical evaluation of the ToA estimation error based on measurements and simulations

Paper Outline
In Section 2, we discuss the signal model of multiple receivers in the presence of DM and interference. The combined interference mitigation and ToA estimation method is presented in Section 3. The indoor channel measurement campaign is described in Section 4, and the results are presented in Section 5.

Receive Signal
A UWB pulse s(t) is transmitted from a single transmit antenna to K receive antennas. Considering propagation over a multipath channel and perturbation by additive interference and noise, the equivalent complex-valued baseband signal at the kth receive antenna after the matched filter is

r^(k)(t) = g^(k)(t) + (g ∗ ν^(k))(t) + p^(k)(t),    (1)

where g(t) is the autocorrelation of the pulse, ν^(k)(t) is the dense multipath component (DMC), ∗ denotes the convolution operator, and p^(k)(t) is the perturbation signal. The deterministic signal part g^(k)(t) received via the LoS is

g^(k)(t) = α^(k) e^{jϕ_0} g(t − τ^(k)),    (2)

where α^(k) and τ^(k) are the attenuation and delay introduced by the distance between the transmitter and the kth receive antenna, and ϕ_0 ∼ U(0, 2π) is a uniformly distributed carrier phase offset, which is equal for all receive elements due to a shared phase reference. The DMC ν^(k)(t) is a complex random process with no further specification, and the perturbation signal p^(k)(t) includes interference and noise and is specified in Section 2.2. Let q^(0) be a reference point in the vicinity of the receive antenna locations q^(k) (q^(0) is typically chosen to be the geometrical center of the receiver array) such that the plane-wave (far-field) approximation holds across the array. This means that the deterministic signal part g^(k)(t), given by Equation (2), is approximately

g^(k)(t) ≈ F^{−1}{ G^(0)(ω) e^{jΔϕ^(k)(ω, θ)} },    (4)

where F^{−1} is the inverse Fourier transform, and G^(0)(ω) is the Fourier transform of g^(0)(t). The phase difference Δϕ^(k) between q^(0) and q^(k) for a signal with an AoA of θ ∈ [0, 2π) is a function of q^(0), q^(k), θ, and ω (the extension to the three-dimensional case is straightforward by adding an elevation angle to the model). The dependency on q^(0) and q^(k) in Equation (4) is not stated explicitly. For a given θ, the desired signal part g^(k)(t) at the kth receive element is therefore expressed solely through g^(0)(t). Note that Equation (4) corresponds to a shift of g^(0)(t) in the time domain.
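To make the phase term in Equation (4) concrete, the following NumPy sketch shifts a reference baseband pulse to a second array element by applying Δϕ^(2)(ω, θ) in the frequency domain. This is our illustration, not the authors' code: the half-wavelength element spacing, the carrier frequency, the sampling rate, and the sign conventions are all assumptions made for the example.

```python
import numpy as np

c0 = 299_792_458.0   # speed of light [m/s]
fc = 6.5e9           # carrier frequency [Hz] (assumed for illustration)
fs = 1e9             # complex baseband sampling rate [Hz] (assumed)
d_el = c0 / fc / 2   # element spacing: half a carrier wavelength

def shift_to_element(g0, theta):
    """Apply the inter-element phase difference of a 2-element ULA to
    the reference signal g0 (an Eq. (4)-style frequency-domain shift)."""
    N = len(g0)
    tau = d_el * np.sin(theta) / c0                   # inter-element delay
    omega = 2 * np.pi * np.fft.fftfreq(N, d=1 / fs)   # baseband frequencies
    dphi = -(2 * np.pi * fc + omega) * tau            # carrier + envelope term
    return np.fft.ifft(np.fft.fft(g0) * np.exp(1j * dphi))

# example: a short pulse envelope, shifted for an AoA of 45 degrees
t = np.arange(256) / fs
g0 = np.sinc((t - 64 / fs) * 0.5e9).astype(complex)
g1 = shift_to_element(g0, theta=np.pi / 4)
```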
The frequency-domain representation will be used in the method described in Section 3 and is therefore chosen here. It is further noted that this is different from the narrowband assumption, due to the large bandwidth of the transmit signal s(t).

Interference
The perturbation signal in Equation (1) is given by

p^(k)(t) = u^(k)(t) + z^(k)(t),    (5)

where u^(k) and z^(k) denote the interference and noise, respectively. While the zero-mean, white Gaussian noise is i.i.d. for all k, the interference signal is correlated across the receive elements, which is exploited by the method described in Section 3 to reduce the interference power. The interference signal u_I(t) is transmitted by a single source and received by the kth element via the interference channel h_I^(k)(t). The received interference signal at antenna k is thus

u^(k)(t) = (h_I^(k) ∗ u_I)(t),    (6)

where u_I(t) is a wide-sense stationary (WSS) signal, effectively bandlimited to B_I. The interference channel h_I^(k)(t) includes the direct path and multipath components and is described by a random process. It is unknown to the receiver, and no knowledge about its statistical properties is assumed. In a beamforming framework, this can be interpreted as follows. Due to h_I^(k)(t), the signal u_I(t) impinges upon the receiver array from multiple angles with different delays, resulting in u^(k)(t), but the number of signal components and the angles at which they arrive at the receiver are unknown.

Time-Discretization
The receive signal is sampled at frequency f_s = 1/T_s, where T_s is the sampling time. In the following, boldface vector notation will be used to represent the sampled version of a signal, e.g., s = [s_0, s_1, ..., s_{N−1}], with s_n = s(nT_s), N being the number of samples, and T = N·T_s the observation duration. The sampled version of the baseband signal is then

r^(k) = g^(k) + v^(k) + p^(k),    (7)

where g^(k) is the sampled version of the delayed pulse given in Equation (2), and v^(k) is the sampled version of the pulse convolved with the DMC in Equation (1). The perturbation vector is given by

p^(k) = u^(k) + z^(k),    (8)

where u^(k) and z^(k) are the sampled versions of the received interference and noise.

LCMV Processing and ToA Estimation
This section describes the combined approach to interference mitigation and ToA estimation. Figure 1 shows the overall structure of the receiver processing. The K receive signals r^(k), given by Equation (7), are combined by the LCMV processing in order to obtain a single signal r with an increased SINR. Based on this result, the ToA τ^(0), which corresponds to the distance between the transmitter location and the reference point q^(0), is estimated by a detection-based estimator.

Figure 1. Block diagram of the receiver processing and ToA estimation. Each of the K receive antennas is followed by an MF. The K sampled receive signals r^(k) after the MF are combined by the LCMV processor into a single vector r. Based on the resulting vector r, the ToA is estimated.

Problem Formulation
We start with a brief overview of the LCMV processor [21,22] and emphasize the motivation for choosing this approach within the present framework. Figure 2 shows the filter-and-sum structure of the LCMV processor. The vectors r^(k) are the K sampled receive signals after each matched filter (MF) (cf. Figure 1). They are the input for K finite impulse response filters of length L, and the resulting output vectors are combined to obtain a single vector r. The KL complex-valued filter coefficients w_l^(k) shall be chosen such that the following two goals are achieved: (i) The resulting interference power is minimized.
(ii) The envelope of the desired signal is preserved.
The motivation for using the LCMV processing in combination with the detection-based ToA estimation is the realization of goals (i) and (ii) at low complexity. It has been shown in [19] that the estimation error due to multipath propagation produced by the detection-based ToA estimator tends to be small under good SINR conditions, which shall be ensured by the realization of (i). The local error in the vicinity of the LoS due to pulse distortion effects is minimized by the realization of (ii). A formal description of this problem follows. Consider the input matrix

X_m ∈ C^{L×K}, with entries [X_m]_{l,k} = r^(k)_{m+l}, l = 0, ..., L−1, k = 1, ..., K,    (9)

with delay index m ∈ {0, 1, ..., N − L}, in structural analogy to Figure 2, and the stacked input vector

x_m = vec(X_m) ∈ C^{KL},    (10)

where the vectorization operator vec(·) stacks the columns of a matrix on top of each other. If the weighting matrix W, with [W]_{l,k} = w_l^(k), and the stacked weighting vector w = vec(W) are constructed in the same fashion, then the mth element of the output vector r is given by the inner product of the weighting and input vectors,

r_m = w^H x_m.    (13)

Figure 2. Filter-and-sum structure of the LCMV processor (revised from [23]). Each of the K receive vectors r^(k) is passed through a finite impulse response filter of length L with filter coefficients w_l^(k) and a tap delay of T_s. The resulting vectors are summed up to obtain the single receive vector r.

By inserting Equation (7) into Equation (9) and Equation (9) into Equation (10), the stacked input vector x_m can be rewritten as

x_m = g_m + v_m + p_m,    (14)

where g_m, v_m, and p_m are the stacked pulse vector, stacked DMC vector, and stacked perturbation vector. Equation (13) is then rewritten as

r_m = w^H g_m + w^H v_m + w^H p_m.    (15)

Under the assumption of statistical independence and the zero-mean property of all three components, the covariance matrix of x_m is

R_{x_m x_m} = R_{g_m g_m} + R_{v_m v_m} + R_{p_m p_m},    (16)

where R_{g_m g_m} = E[g_m g_m^H], R_{v_m v_m} = E[v_m v_m^H], and R_{p_m p_m} = E[p_m p_m^H] are the pulse covariance matrix, DMC covariance matrix, and perturbation covariance matrix, respectively. Jointly realizing goals (i) and (ii) can now be formulated as a constrained minimization problem:

minimize_w  w^H R_{p_m p_m} w    (20a)
subject to  C^H w = f,    (20b)

where C ∈ C^{KL×J} is the constraint matrix, and f ∈ C^J is the response vector. That is, the mean power w^H R_{p_m p_m} w of the perturbation contribution to the mth output sample is minimized with regard to w, while w has to fulfill the constraint of Equation (20b). Equations (20a) and (20b) form a quadratic program with the well-known solution found by the method of Lagrange multipliers [21],

w_opt = R_{p_m p_m}^{−1} C (C^H R_{p_m p_m}^{−1} C)^{−1} f.    (21)

Note that w_opt does not depend on the delay index m. It will be shown in Section 3.3 that, for a stationary perturbation signal, the covariance matrix R_{x_m x_m} and, as a consequence, the optimal weighting vector w_opt do not depend on m. For any given AoA of the desired signal component, the constraint matrix C and the response vector f can be pre-calculated. The design of these constraints is discussed in Section 3.2. The perturbation covariance matrix R_{p_m p_m} is unknown a priori and has to be estimated, as described in Section 3.3.

Constraint Equation
We will now discuss the design of the constraints and the choice of optimal parameters with regard to the ToA estimation. The constraint equation is responsible for the realization of goal (ii), which is an undistorted envelope of the desired signal g^(k). In the frequency domain, this is achieved by a constant gain and a linear phase spectrum.
Pre-steering of the receiver array (the virtual look-direction of the array could be steered to a desired AoA by placing wideband filters in front of the LCMV processor), as assumed in [21], is not employed because it would come at the cost of computational expense. Alternatively, the steering can be incorporated into the constraint matrix C [20]. If a signal with an AoA of θ and bandwidth B excites the receiver structure given in Figure 2, C is constructed according to

C = [c_1, c_2, ..., c_J],    (22)

where the jth column is

c_j = vec(A_j), with [A_j]_{l,k} = e^{j(Δϕ^(k)(ω_j, θ) + ω_j l T_s)}, l = 0, ..., L−1, and Δϕ^(1) = 0,    (23)

and the frequency points ω_j are evenly spaced within [−πB, πB]. The phase difference Δϕ^(k)(ω_j, θ) is determined by the array structure and the AoA and is the same as in Equation (4). Equation (23) shows that each column of C corresponds to the excitation of the combined array and filter-and-sum structure by a tone of frequency ω_j with an AoA of θ. The unit-gain and linear-phase requirement for the desired signal results in a response vector f with elements

f_j = e^{−jω_j τ_f},    (24)

where τ_f is the resulting filter delay. The dimensions of C, given by KL and J, determine the degrees of freedom available to control the filter response. For J < KL and C of full rank, the degrees of freedom are equal to KL − J. If K and L are fixed, the problem is to find the best value for J. If the chosen J is too small, then not enough constraints are imposed on the response in the AoA direction, and the desired signal is distorted. This results in an increased error in the ToA estimation in the vicinity of the LoS. If the chosen J is too large, the degrees of freedom are too low for sufficient interference suppression. This results in an increase in errors introduced by interference and multipath components. Furthermore, the conditioning of C depends on θ, and thus, C is not of full rank in general. Consequently, the effective degrees of freedom and their relation to the number of constraints J are dependent on θ as well. This has to be considered in the search for the optimal values of these parameters. An optimality criterion in terms of ToA estimation is, for example, the mean absolute error (MAE) or the variance of the estimation error in the vicinity of the LoS. As no analytical solution to such a criterion is apparent, it has to be found empirically. Results are presented in Section 5.

Estimation of the Perturbation Covariance Matrix
It was established in Section 2.2 that the transmitted interference signal u_I(t) is WSS. If the transmission duration of u_I(t) is at least as long as the observation duration T and the interference channel h_I^(k) is assumed to be static within T, then the stacked perturbation vector p_m is WSS as well. This means that the perturbation covariance matrix R_{p_m p_m} is independent of the delay index m, i.e.,

R_{p_m p_m} = R_{pp} for all m.    (25)

However, R_{p_m p_m} is unknown and has to be estimated. For an observation time T larger than the maximal delay of the DMC term v_m, this can be achieved by the sample method

R̂_{p_m p_m} = (1/M_R) Σ_{m=M_I}^{M_I+M_R−1} x_m x_m^H,    (26)

where M_I is an offset index and M_R is the number of sample vectors used for the estimation. The offset index M_I is chosen large enough so that no multipath components, but only interference and noise, are assumed to be present in x_m (cf. Figure 3). Finally, R_{p_m p_m} in Equation (21) is replaced by R̂_{p_m p_m}.

Figure 3. An example of the estimated channel impulse response in an NLOS scenario is shown (a) before and (b) after the LCMV processing.
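The building blocks above fit in a few lines of code. The following NumPy sketch is our illustration, not the authors' implementation; the function names and indexing conventions are assumptions. It constructs C and f in the spirit of Equations (22)-(24), estimates the perturbation covariance by the sample method of Equation (26), and evaluates the closed-form solution of Equation (21); the last helper anticipates the spatial flattening of Section 3.4.

```python
import numpy as np

def constraint_matrix(dphi, omegas, L, Ts, tau_f):
    """Point constraints, Eqs. (22)-(24): column j is the response of the
    stacked filter-and-sum structure to a tone omega_j with AoA theta;
    f enforces unit gain and linear phase. dphi is a list of K callables
    with dphi[k](w) the phase of element k relative to the reference
    (dphi[0](w) == 0)."""
    K = len(dphi)
    C = np.empty((K * L, len(omegas)), dtype=complex)
    for j, w in enumerate(omegas):
        C[:, j] = [np.exp(1j * (dphi[k](w) + w * l * Ts))
                   for k in range(K) for l in range(L)]
    f = np.exp(-1j * np.asarray(omegas) * tau_f)  # linear-phase response
    return C, f

def sample_covariance(X):
    """Sample estimate of the perturbation covariance, Eq. (26); X holds
    M_R stacked vectors x_m (as columns) taken from a region of the
    observation that contains only interference and noise (m >= M_I)."""
    return X @ X.conj().T / X.shape[1]

def lcmv_weights(R_hat, C, f):
    """Closed-form LCMV solution, Eq. (21):
    w_opt = R^-1 C (C^H R^-1 C)^-1 f."""
    RiC = np.linalg.solve(R_hat, C)                 # R^-1 C
    return RiC @ np.linalg.solve(C.conj().T @ RiC, f)

def flattened_C(C_of_theta, thetas, p_theta):
    """Discretized spatial flattening, Eq. (27): average the constraint
    matrix over a pdf of the AoA (see Section 3.4)."""
    return sum(p * C_of_theta(th) for th, p in zip(thetas, p_theta))
```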
It is emphasized that all quantities involved in the computation of the optimal filter coefficients w_opt in Equation (21), with the exception of R̂_{p_m p_m}, can be calculated offline for all possible θ because they do not depend on the data vector x_m. This is an important fact if computational power is a critical resource.

Spatial Flattening
When the constraint matrix C is calculated with regard to a certain AoA θ, the resulting filter is, in general, quite sensitive to deviations in θ. If θ has to be estimated and is subject to errors, this can result in substantial performance degradation. One approach to mitigate this problem is to include additional derivative constraints in the constraint Equation (20b) [24]. In the case of a linear receiver array, J is increased by a factor of two for first-order derivative constraints, and by a factor of three for second-order derivative constraints. Even though C can be calculated offline, it is also involved in the computation of the optimal weight coefficients in Equation (21). Thus, an increased dimensionality of C results in an increase in the calculations to be carried out adaptively. For this reason, a probabilistic spatial flattening approach, proposed in [25], is chosen. Let us consider θ to be a random parameter with a probability density function (pdf) p(θ). Then, the mean constraint matrix C̄ is given by

C̄ = ∫ C(θ) p(θ) dθ,    (27)

and the constraint matrix C in Equation (21) is replaced by Equation (27). It is evident that the dimensionality of the constraint matrix, and as a result, the computational complexity, does not increase. In a practical implementation, θ takes on discrete values, and the integral is replaced by a sum. Note that a spatial flattening method comes at the cost of reduced array gain. If p(θ) is rather flat, then the LCMV processing becomes more robust to deviations in θ, but the gain in the direction of the true θ is reduced. For a narrow p(θ), this relation is inverted.

ToA Estimation
We will now discuss the estimation of the ToA τ^(0). As shown in Figure 1, the input for the ToA estimation is the signal r obtained by the LCMV processor. As for the interference mitigation, low computational complexity is crucial for the ToA estimation. For this purpose, a detection-based estimator is implemented, which searches for the first peak of r exceeding a certain threshold γ. This is written as

m̂^(0) = min{ m : |r_m| ≥ γ and |r_m| is a local maximum },    (28)

where r_m is given in Equation (13), and m̂^(0) is the estimated discrete-time index corresponding to the ToA τ^(0). The threshold γ is chosen with regard to the mean power σ_p² of the residual perturbation vector p, whose elements are given by the last term in Equation (15). This is an unknown parameter and is estimated by

σ̂_p² = (1/M_R) Σ_{m=M_I}^{M_I+M_R−1} |r_m|²,    (30)

where, again, M_I is chosen large enough that only interference and noise are assumed to be present in r_m, as indicated in Figure 3. The threshold γ is then defined via the threshold-to-interference-plus-noise ratio

TINR = 10 log_10( γ² / σ̂_p² ).    (31)

After the index m̂^(0) is found, peak interpolation is performed by a parabolic function, and the ToA is estimated by

τ̂^(0) = (m̂^(0) + δ) T_s,    (32)

where δ is the offset of the vertex of the parabola fitted through |r_m| at m̂^(0) − 1, m̂^(0), and m̂^(0) + 1. It has been shown in [19] that such an estimator performs well in multipath environments when operated at a sufficiently high SINR. A compact sketch of this detector is given below.
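The following Python sketch implements this logic. It is our illustration, and the mapping from the TINR to the amplitude threshold γ follows our reading of the definitions above rather than a confirmed detail of the original implementation.

```python
import numpy as np

def toa_estimate(r, Ts, M_I, M_R, tinr_db=10.0):
    """Detection-based ToA estimator in the spirit of Eqs. (28)-(32):
    first local peak of |r| above a TINR-derived threshold, refined by
    parabolic interpolation."""
    a = np.abs(r)
    sigma2 = np.mean(a[M_I:M_I + M_R] ** 2)          # Eq. (30)
    gamma = np.sqrt(sigma2 * 10 ** (tinr_db / 10))   # amplitude threshold
    for m in range(1, len(a) - 1):
        if a[m] >= gamma and a[m] >= a[m - 1] and a[m] > a[m + 1]:
            # parabolic vertex offset from the three samples around m
            denom = a[m - 1] - 2 * a[m] + a[m + 1]
            delta = 0.5 * (a[m - 1] - a[m + 1]) / denom if denom else 0.0
            return (m + delta) * Ts                  # Eq. (32)
    return None  # no sample exceeded the threshold: no valid estimate
```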
Figure 3 schematically represents the combination of the LCMV processing and the ToA estimation. In Figure 3a, an example of a channel estimate at a single receive element is shown; the estimation error is given by

ε_τ = τ̂^(0) − τ^(0).    (33)

If the SINR is low, then ε_τ is outlier-driven (this has also been shown for the ML estimator in [30]). This means that the error is introduced by detecting an interference peak or a multipath component instead of the peak at the true ToA τ^(0). Errors of this type are also called global errors. The goal of the LCMV processing is to obtain a combined channel estimate with an SINR such that ε_τ is not governed by global errors. This is shown in Figure 3b. Furthermore, local errors, introduced by distortion of the pulse at τ^(0), are minimized by imposing unit-gain and linear-phase constraints on the filtering of the desired signal. It is shown in Section 5 that there is a trade-off between these two types of errors in parameterizing the LCMV processing.

Measurement Setup
In order to evaluate the method described in Section 3 in a realistic environment, channel measurements were performed in a seminar room at TU Wien, shown in Figure 4. The room measures 7.8 m × 7.8 m × 3.8 m, and the interior includes tables, chairs, measurement equipment, three large wooden cabinets with glass fronts, a shelf, and a metallic whiteboard, all placed at the sides of the room to create enough space for the antenna placement. The right antenna in Figure 4 is mounted on a static tripod at a height of 1.55 m and is connected to the vector network analyzer (VNA) via a coaxial cable. The left antenna is mounted at the same height on a two-dimensional, horizontal positioner for fine antenna placement with sub-millimeter accuracy, and the positioner is placed on a wheeled table for coarse placement in the room. The antenna is connected to a coaxial cable that winds down along the two axes using a cable carrier to guarantee a defined bending of the cables. From there, through a fixed connector, another coaxial cable is connected to the second port of the VNA. While the latter has to be slightly moved for each table position, the cable along the axes experiences a different bending for each position of the axes. Although measurements showed good phase stability of both cables, it should be mentioned that this might have a small impact on the phase accuracy of the measurements, as the calibration between the two antenna ports is carried out for a single position of the table and axes. Aside from this fact, the measurements are phase coherent, which is a crucial requirement for the investigated method. The same antenna, shown in Figure 5, is used at the transmitter and receiver sides. It is a redesign of the conical monopole antenna (CMP) introduced in [31]. The choice of the CMP is motivated by the requirement for a constant gain within the frequency span of interest and by the radial symmetry of the antenna. Furthermore, the measurement results are comparable to previous measurements with the purpose of ToA estimation, as in, e.g., [32]. With calibration between the two antenna ports, the CMPs are considered part of the channel. A total of 8000 measurements were performed in a rectangular area of 3.4 m × 2.7 m. Per table position, the axes' positions are on a 10 × 10 grid with equidistant spacing of half a wavelength at the center frequency f_c. It is noted that the characterization of the channel in the form of a power delay profile (PDP) does not require 8000 measurements. However, another objective is to form virtual antenna arrays for the statistical evaluation of the method described in Section 3. In order to form arrays of variable dimension and orientation and still have enough data points to obtain reliable statistics, a large number of measurement locations was chosen.
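For reference, the half-wavelength grid spacing mentioned above follows directly from the carrier wavelength. A one-line check, with an assumed center frequency since f_c is not specified numerically in this section:

```python
# Half-wavelength grid spacing at the carrier; f_c = 6.5 GHz is assumed
# purely for illustration.
c0, fc = 299_792_458.0, 6.5e9
print(c0 / fc / 2)  # ~0.023 m, i.e., about 2.3 cm between grid points
```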
Virtual antenna arrays are only formed within the locations of one table position, to guarantee high-accuracy spacing between the elements. Especially problematic for ToA estimation within multipath environments are NLOS scenarios, where the path between transmitter and receiver is obstructed. To create such conditions, an absorber mounted on a tripod was placed between the antennas, as shown in Figure 6. The measurements of the interference channel were performed in the same fashion, but with the static antenna, which in this case represents the interference source location, placed in a different area of the room. The path between the transmitter and receiver of the interference channel was left unobstructed (LoS).

Results
In this section, we evaluate the method proposed in Section 3 within the multipath environment described in Section 4. The performance of the method in terms of the ToA estimation error depends on the (statistical) properties of the propagation channel. A discussion of the channel measurements is therefore given in Section 5.1.

Channel
If the channel cannot be measured at all possible transmitter and receiver locations in the environment, then the statistics of the multipath depend on the choice of these locations. Figure 7 shows the floor plan of the seminar room (cf. Section 4). The antenna locations of the UWB transmitter Tx_S and of the interference source Tx_I are fixed for all results provided in this section. The green rectangle represents the area where the receiver was placed at 8000 evenly distributed locations. In ToA estimation, the channel is typically characterized by its PDP because it provides information about the mean power of multipath components at a certain delay. Figure 8 shows the PDP for the transmitter and receiver locations described above. It is obtained by averaging the squared absolute value of all measurements aligned at the first path. The UWB channel was measured under NLoS conditions (cf. Figure 6), and the interference channel under LoS conditions. It can be seen in Figure 8a that there is noticeable energy in the channel up to a delay of 200 ns before the curve flattens out. Figure 8b shows the same plot in greater detail for small delay values. Here it is evident that, in the UWB channel, there is a gap after the first path before the PDP reaches a plateau introduced by the first strong multipath components. This gap corresponds to a distance of about 2 m between the first path and the first multipath components and is a result of the rather central placement of the transmitter and receiver within the room. As such a gap is typically not present in practical scenarios, the measured channel is modified in post-processing. For every channel measurement, the first strong multipath component is identified, and all successive samples are shifted 1.8 m towards the first path by an overlap-and-add method. The PDP of the resulting channel is also shown in Figure 8b. Note that the equal height of the first peak and the plateau in the NLoS case might be misleading at first sight. Due to the alignment with regard to the first path, the power at this path is always summed up coherently, while the multipath components arrive at different delays and are averaged to a lower level. The first path is generally much weaker (about 10 dB on average) than the first multipath components. It is further apparent that the first peak in the interference channel has about twice the width of the peak in the UWB channel.
This is caused by the placement of the interference transmitter Tx_I right next to a cabinet with a glass front. Almost all of the energy transmitted in the direction of the cabinet is reflected back and interferes constructively or destructively (depending on the receiver location) with the signal transmitted in the direction of the receiver, which results in a widened receive pulse.

Performance of the ToA Estimation
In order to evaluate the proposed method, propagation of the UWB signal and the interference signal over their respective channels is performed in a simulation using the measurement results described above. This enables the investigation with regard to a variety of parameters presented below. The pulse s(t) is a raised-cosine pulse (in compliance with [7], Section 16), with a roll-off factor of β = 0.5 and a bandwidth of B = 500 MHz. The interference signal is a Gaussian random process, effectively bandlimited to B_I at a center frequency 50 MHz below the UWB center frequency f_c. The virtual arrays are formed at 4000 positions within the measurement area in Figure 7. We evaluate the estimation performance in terms of the distance error

ε_d = c_0 (τ̂^(0) − τ^(0)),    (34)

where τ̂^(0) is given in Equation (32), and c_0 is the speed of light. The SINR is defined as the ratio of the peak power of the pulse g^(0) at the index m^(0) corresponding to the ToA to the mean power of the perturbation vector p^(0),

SINR = |g^(0)_{m^(0)}|² / σ_p²,    (35)

and the interference power is 30 dB above the noise power. Note that here, the mean is not with regard to multiple realizations of the random vector p^(0), but with respect to the samples within one realization.

LCMV Parameterization
Let us first consider the lowest-dimensional case of a 1 × 2 receiver array, i.e., K = 2, and a filter length of L = 4. First, we evaluate the performance of the proposed method with regard to the number of point constraints J. Figure 9 shows the empirical cumulative distribution function (ECDF) of the distance error. The ECDF is normalized to the total number of observations. It does not, therefore, attain a probability of one if there are observations for which no valid estimate is obtained. The three curves show the results for different numbers of point constraints J. As the best choice of J is usually in the vicinity of KL/2, it is represented by the centered number of point constraints

J̃ = J − KL/2.    (36)

It is evident in Figure 9a that most of the errors are centered around zero. These errors are introduced by correctly detecting the first path. The errors above 0.2 m arise from erroneously detecting multipath components. If the chosen J̃ is too small (J̃ = −1), then the desired signal is distorted, and the local errors in the vicinity of the LoS increase. If the chosen J̃ is too large (J̃ = 4), then the desired signal form is well preserved, but the interference mitigation capability decreases, and the estimation error becomes increasingly dominated by multipath errors. In order to quantify the performance of the ToA estimation, we define the mean absolute error (MAE) as

MAE = (1/N_pos) Σ_{i=1}^{N_pos} |ε_{d,i}|,    (37)

where ε_{d,i} is the distance error at the ith position, and N_pos is the total number of positions. Figure 10 shows the MAE over a range of values of J̃ for different numbers of antenna elements K, where the array configuration is a uniform linear array (ULA). The U shape of the curves follows the same explanation as for Figure 9: if J̃ is too small, then the desired pulse is distorted, and the MAE is increased due to local errors.
The MAE for the two smallest possible values of J̃ is equal because one constraint leads to a similar signal distortion as two constraints. If J̃ is too large, then the degrees of freedom are too low for sufficient interference suppression, and the MAE is dominated by multipath errors.

Figure 9. ECDF of (a) the ToA estimation error ε_d and (b) the absolute value of ε_d for various numbers of point constraints J̃. The receiver array is a 1 × 2 ULA with broadside in the x-direction (cf. Figure 7), and the filter length is L = 4. The interference bandwidth is B_I = 160 MHz, the SINR is 9 dB, the TINR is set to 10 dB, and the AoA θ is assumed to be known. The legend is valid for both figures.

Figure 10. MAE over the number of constraints J̃ for (a) SINR = 9 dB and (b) SINR = 0 dB. The receiver array is a ULA with broadside in the y-direction (cf. Figure 7), and the filter length is L = 4. The interference bandwidth is B_I = 160 MHz, and the AoA θ is assumed to be known.

For SINR = 9 dB, the optimal values for J̃ are larger than for SINR = 0 dB. This is explained as follows. When the SINR is large, the LoS is detected correctly without a large amount of interference suppression. In this case, the MAE is dominated by local errors, which are small for a large number of constraints J̃. When the SINR is low, a high amount of interference suppression is needed in order to minimize multipath errors. This is achieved with more degrees of freedom, i.e., when J̃ is low. The MAE on the right side of the minimum is much larger than on the left side because multipath errors are typically much larger than local errors (cf. Figure 3). In addition, for any fixed J̃, using more antenna elements K decreases the MAE, as expected, due to more spatial information being available. In Figure 11, the MAE is evaluated with regard to the number of antenna elements K and the filter length L. It is evident that no performance improvement can be gained by investing in a larger filter length. In order to minimize the computational complexity, the lowest value of L is most favorable. While, for SINR = 9 dB, all curves approach the same MAE, additional antenna elements improve the performance significantly at SINR = 0 dB. It is seen, in particular, that for K = 4 and K = 5, the MAE is at almost the same value for an SINR difference of 9 dB. Note, however, that a larger K results in a higher computational complexity. The number of constraints J̃ in Figure 11 is chosen to be optimal for each K and for both values of the SINR. This optimum corresponds to the location of the minima in Figure 10, but the SINR is unknown a priori. While the interference-plus-noise power is given by the diagonal elements of R̂_{p_m p_m} in Equation (26), the signal power depends on the ToA of the first path τ^(0), which is the quantity to be estimated. In a scenario where multiple successive measurements are performed, one approach is to start with a small value for J̃ in order to increase the probability of correctly detecting the first path. Once the ToA τ^(0) has been estimated, the power of the first path can be estimated. This value can then be used to determine J̃ for the next measurement. In practice, this is a reasonable approach if the duration between two successive measurements is not too large, i.e., when the first-path power is not likely to vary strongly between the measurements.

Figure 11. MAE over the filter length L for (a) SINR = 9 dB and (b) SINR = 0 dB.
The receiver array is a ULA with broadside in the y-direction (cf. Figure 7). The number of constraints J̃ is chosen to be the optimal value for each L. The interference bandwidth is B_I = 160 MHz, and the AoA θ is assumed to be known.

AoA and SINR Uncertainty
The AoA θ of the desired signal enters the model via the constraint matrix C. If θ is unknown a priori, then it has to be estimated, and the estimation is subject to errors. When C is calculated for a value of θ that deviates from the true value, the desired signal is distorted, which results in larger local errors. A method to mitigate this effect without increasing the computational complexity is described in Section 3.4. In order to evaluate the sensitivity of the proposed method to uncertainties in θ, we assume that θ follows a wrapped normal distribution with mean µ_θ, which is the true value at each location, and standard deviation σ_θ in radians. Figure 12 shows the MAE over the SINR for σ_θ = 0 (known θ) and σ_θ = 0.3 rad. For K = 2 and K = 3, the curves almost coincide because, for a low number of antenna elements, the array response does not change rapidly in the spatial domain. For K = 4 and K = 5, the array response is more sensitive in the spatial domain and, consequently, the constraints are not strictly satisfied for an AoA that deviates from the true value. This results in a slightly larger MAE. The difference, however, is in the sub-10 cm regime, which shows that the method is robust to a certain amount of deviation in θ. Comparing Figures 12a and 12b, it is seen that this behavior is independent of the interference bandwidth B_I.

Figure 12. MAE over SINR for interference bandwidths (a) B_I = 160 MHz and (b) B_I = 320 MHz. The curves show the mean value of the results for the cases in which the receiver array is a ULA with broadside in the x-direction and in the y-direction (cf. Figure 7). Solid lines show the results for a known AoA θ, and dashed lines for σ_θ = 0.3 rad. The number of constraints J̃ is optimal for each K and SINR.

It is shown in Section 5.2.1 that the optimal value of J̃ depends on the SINR, and that the SINR is unknown in general. Figure 13 compares the MAE for a fixed value of J̃ and an adaptively chosen optimal value of J̃. The optimal value is determined by the minimum of the curves in Figure 10 for each value of K and SINR. In Figure 13a,b, the chosen fixed J̃ is rather high. This choice favors an undistorted signal over high interference suppression. For high SINR values, the error is therefore not much larger than in the optimal case. In the low-SINR regime, the poor interference suppression capability results in higher multipath errors, and the MAE is significantly larger than in the optimal case. In Figure 13c,d, the chosen fixed J̃ is rather low, and the behavior described above is reversed. The error floor in this case is at roughly 0.5 m due to the pulse distortion. At low SINR values, the fixed value of J̃ coincides with the optimal value and, consequently, the MAE also coincides with the optimum. Note that, for a fixed value of J̃, no knowledge about the SINR is required. If multiple successive measurements are available, a rough estimate of the SINR can be obtained as described in Section 5.2.2, and J̃ can be determined adaptively.

Comparison to Interference Blocking
Finally, we compare the proposed method to an approach where the interference mitigation is performed by band-stop filtering (BSF) the received signal. Figure 14 compares the results obtained with the two methods for different interference bandwidths B_I.
The SINR is assumed to be unknown, and thus, the number of constraints J̃ is fixed, as explained in Section 5.2.2. At B_I = 160 MHz, the proposed method performs significantly better in the low-SINR regime. With two antenna elements, the MAE is improved by more than 1 m compared to the BSF approach. The BSF introduces a deterministic smaller peak prior to the peak that corresponds to the LoS. For high SINR values, the detection threshold decreases in relation to the power of the first peak (cf. Figure 3), and the smaller peak prior to the LoS is detected erroneously. This explains the rise of the MAE at SINR = 15 dB for the BSF. This effect could be equalized by adapting the detection threshold with regard to the SINR, but it was assumed before that the SINR is unknown. If the detection threshold were chosen to be larger, this effect would be shifted to higher SINR values. However, the MAE in the low-SINR regime would increase as well. At B_I = 320 MHz, the proposed method performs significantly better over the entire SINR range. The large error floor of the BSF is explained by the severe bandwidth reduction of the desired signal. The interference band with B_I = 320 MHz corresponds to 64% of the signal bandwidth B. Consequently, a large fraction of the desired signal is suppressed by the BSF. This results in a merging of several multipath components and, thus, in a shift of the first peak towards higher delay values.

Figure 14. MAE over SINR for interference bandwidths (a) B_I = 160 MHz and (b) B_I = 320 MHz. The curves show the mean value of the results for the cases in which the receiver array is a ULA with broadside in the x-direction and in the y-direction (cf. Figure 7). The number of constraints is fixed (cf. Figure 13), and the standard deviation of θ is σ_θ = 0.3 rad.

Complexity
The complexity of the proposed method is determined by the estimation of R_{p_m p_m} in Equation (26), the inversion of the matrices R_{p_m p_m} and C^H R_{p_m p_m}^{−1} C, and the matrix multiplications in Equation (21). In the case of K = 2, L = 4, and J = 3 (J̃ = −1), the perturbation covariance matrix R_{p_m p_m} is 8 × 8-dimensional, the constraint matrix C is 8 × 3-dimensional, and C^H R_{p_m p_m}^{−1} C is 3 × 3-dimensional. The number of arithmetic operations required for the inversion of an n × n-dimensional matrix is approximately 2n³/3 (the exact number depends on the algorithm used). The inversions of R_{p_m p_m} and C^H R_{p_m p_m}^{−1} C therefore require approximately 340 and 18 operations, respectively. For the estimation of R_{p_m p_m}, we used 10 sample vectors, which results in 8 × 8 × 10 = 640 operations. The multiplication of a k × l-dimensional matrix by an l × m-dimensional matrix requires (2l − 1)km operations. All of the matrix multiplications in Equation (21) sum up to 975 operations. This results in a total number of approximately 2000 operations for the calculation of the optimal filter coefficients in Equation (21). A low-dimensional implementation of the proposed method is, therefore, indeed of low complexity. A short tally of these counts is sketched below.
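The following Python snippet reproduces the counting rules stated above for the K = 2, L = 4, J = 3 case. It is an illustration only: the exact total depends on the assumed evaluation order of Equation (21), so the tally lands in the same order of magnitude as, rather than exactly at, the roughly 2000 operations quoted above.

```python
# Rough tally of the arithmetic operations for K = 2, L = 4, J = 3,
# using the counting rules stated in the text.
def inv_ops(n):              # inversion of an n x n matrix: ~2n^3/3
    return 2 * n ** 3 // 3

def mul_ops(k, l, m):        # (k x l) @ (l x m): (2l - 1) * k * m
    return (2 * l - 1) * k * m

K, L, J, M_R = 2, 4, 3, 10
KL = K * L
total = (KL * KL * M_R            # covariance estimation: 640
         + inv_ops(KL)            # invert R (8 x 8): ~341
         + inv_ops(J)             # invert C^H R^-1 C (3 x 3): 18
         + mul_ops(KL, KL, J)     # R^-1 @ C
         + mul_ops(J, KL, J)      # C^H @ (R^-1 C)
         + mul_ops(J, J, 1)       # (...)^-1 @ f
         + mul_ops(KL, J, 1))     # (R^-1 C) @ (...)
print(total)  # ~1.5e3, the same order as the ~2000 quoted above
```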
Conclusions
We proposed a low-complexity approach to wideband interference mitigation and ToA estimation in a UWB system. The method combines an LCMV processor with a detection-based ToA estimator. In order to evaluate the performance of the proposed method with regard to the MAE, we conducted a large number of virtual-array channel measurements for both the UWB and the interference channel. The most important parameters involved in the LCMV processing are the number of antenna elements, the filter length per antenna element, and the number of constraints. While the filter length has almost no impact on the MAE, additional antenna elements lead to a significant improvement in low-SINR regimes. The optimal number of constraints depends on the SINR, which is unknown in general. Deviations from this optimum are critical only for a low number of antenna elements at low SINR values. Using a probabilistic model for the constraint matrix, the method is also robust to uncertainties in the AoA. Furthermore, we compared the results to those obtained by using a band-stop filter approach. The LCMV approach performs significantly better over the entire SINR range, especially when the interference bandwidth exceeds 60% of the signal bandwidth. Based on the presented results, we conclude that the proposed method is suitable when the interference bandwidth occupies a large fraction of the signal bandwidth and when computational power is limited.

Conflicts of Interest: The authors declare no conflict of interest.
Self-Management in Stroke Survivors: Development and Implementation of the Look After Yourself (LAY) Intervention

Objective: Self-management is recommended in stroke rehabilitation. This report aims to describe the timing, contents, and setting of delivery of a patient-centered, self-management program for stroke survivors in their early hospital rehabilitation phase: the Look After Yourself (LAY) intervention. Methods: After an extensive literature search, the LAY intervention was developed by integrating the Chronic Disease Self-Management Program, based on the self-efficacy construct of social cognitive theory, with evidence-based key elements and input from stroke survivors. Results: The LAY intervention aims to implement self-management skills in stroke survivors, enabling them to be active in goal setting and problem solving using action plans, and to facilitate the critical transition from hospital to community. It includes both group sessions, to facilitate the sharing of experiences, social comparison, and vicarious learning and to increase motivation, and one-to-one sessions, focused on setting feasible action plans and on teaching personalized strategies to prevent falls. Standardization is ensured by manuals for facilitators and patients. Conclusion: The LAY intervention is the first Italian program to support early self-management in stroke rehabilitation; it has been tested, its efficacy in improving self-efficacy, mental health, and activities of daily living has been demonstrated, and detailed results have been published. The LAY intervention is described according to the TIDieR checklist.

Introduction
Stroke is the second most common cause of death and a leading cause of adult physical disability, affecting 17 million people worldwide each year. Its incidence and prevalence are increasing, placing a burden on both stroke survivors' quality of life and health systems [1,2]. Current stroke care management supports early discharge from hospital when rehabilitation is still under way [3-5]. Nevertheless, studies have underlined that stroke survivors and their caregivers often feel unprepared to face the transition from hospital to community [6,7]. International stroke guidelines have recommended that "All patients should be offered training in self-management skills, including active problem solving and individual goal setting" [8,9]. Thus, in recent years, self-management has become part of the stroke care pathway, since it could support individuals facing the long-term consequences of stroke [10] and could facilitate interventions related to transitional care [11]. Over the past five years, our research group has conducted a project funded by the Emilia Romagna Region: Look After Yourself (LAY), an educational intervention for stroke patients to improve self-management and foster the transition from hospital to community. The first phase of the project was dedicated to the development of this educational intervention and to the training of healthcare professionals, while the second phase consisted of a quasi-experimental study. This was conducted on a sample of 185 post-acute stroke patients recruited from three different rehabilitation centers: two implemented the experimental intervention, and one acted as control (ISRCTN75290225). Given the nature of the intervention (educational), which led to a global reorganization in the two experimental centers, it would not have been appropriate to set up a randomized controlled trial.
This is because, in those two experimental centers, all patients, even those who did not participate in the research, were exposed to the new skills acquired by the trained professionals, as usually happens when complex, long-term educational interventions are promoted. The results of this study indicate that the LAY intervention improves self-efficacy, mental health, and activities of daily living. There were no side effects, and overall user/professional satisfaction was good. There was therefore evidence to support the implementation of self-management programs in stroke survivors. The design and results of this quasi-experimental study have been described in a recently published paper [12]. The aim of this paper is to describe in detail phase one of the LAY project, explaining how the LAY intervention was built, developed, and implemented in the context of three Italian rehabilitation centers. The purpose is to spread the knowledge acquired in order to encourage the use of self-management programs and their adaptation to different contexts.

Intervention Description
The TIDieR checklist [13] was used to structure the detailed description of the LAY intervention.

Item 1. Brief Name
The LAY intervention is a self-management program that aims to improve self-efficacy and foster the transition from hospital to community of stroke survivors in the post-acute phase. It is an Italian adaptation of the Chronic Disease Self-Management Program (CDSMP), which is a standardized, evidence-based intervention for the comprehensive self-management of chronic conditions. It is based on self-efficacy theory [14,15] and was developed at the Stanford Patient Education Research Center of Stanford University.

Item 2. Why: Description of the Rationale and Theory Essential to the Intervention
The LAY intervention was developed by a multi-professional research group composed of physicians, physiotherapists, nurses, and psychologists. The development of the LAY intervention consisted of three steps:
- Identification of the evidence base for the LAY intervention.
- Set-up of the LAY intervention in terms of contents, duration, and delivery.
- Testing of the relevance of the contents and the feasibility of the sessions, with a fine-tuning review process.
In the first step, an extensive literature search for the best evidence on self-management programs targeting stroke patients and their needs in the early stage was conducted. Then, an in-depth analysis of the retrieved literature was conducted to identify the key elements of the existing programs in order to inform the LAY intervention. The literature search yielded a systematic review by Lennon [16], published in 2013, which extensively examined the evidence on self-management programs specific to stroke survivors. It included nine randomized controlled trials highlighting the potential importance of self-management but also underlining relevant differences in the timing, content, and mode of delivery of the self-management interventions in this population. Moreover, a further randomized clinical trial aimed at verifying the feasibility of a self-management program for stroke survivors in the community setting was retrieved [17]. Table 1 shows the main characteristics of the trials that informed the LAY intervention, whose quality was assessed with a critical appraisal tool developed by The American Academy for Cerebral Palsy and Developmental Medicine [18].
(As an example of the interventions examined, one trial delivered three one-to-one and two telephone sessions guided by a workbook; the workbook provided information about stroke and recovery, included activities designed to help the patient attain the coping skills needed for self-management, and came with an audio relaxation cassette tape.)

Although statistically significant findings in favor of the self-management programs were found in most of the studies examined and reported in Table 1, some limitations were highlighted, such as small study samples (<100 participants) [17,23] and, in general, a poor description of the interventions applied. Of note, four trials tested self-management interventions based on principles of self-efficacy [17,23-25], and three other trials offered self-management interventions based on the theoretical model of the Chronic Disease Self-Management Program (CDSMP) [21,22,26]. The CDSMP is one of the most internationally widespread interventions to support self-management [28]; it consists of six weekly group sessions and is founded on three main assumptions: (1) individuals with different chronic diseases share similar self-management problems and disease-related tasks; (2) individuals can learn to take responsibility for the day-to-day management of their disease; (3) individuals confident and knowledgeable in practicing self-management will improve their health status. Workshops can be led by peer leaders or by healthcare professionals and are directed towards patients and their caregivers. The CDSMP, originally developed to promote self-management among individuals with a variety of chronic conditions [29], has also been applied to diabetes [30] and cancer [31], and it is currently being broadly disseminated across various countries [32]. Among the self-management programs based on the CDSMP, the Stroke Self-Management Program (SSMP) implemented by Damush and collaborators [22] is an intervention addressing the very-early-phase needs of patients with stroke. The SSMP, which was developed by the National Stroke Foundation, proved to be feasible and beneficial in veterans enrolled in the program within one month post-stroke; it was delivered in six sessions, both one-to-one (three sessions) and by telephone (three sessions), over a three-month period. The feasibility of this program is worth highlighting, as stroke survivors experience sudden, considerable, and frequently long-lasting disability that often extends to the cognitive or communication domains. Thus, full participation in self-management programs can be challenging, especially in the early rehabilitation phase. Self-management programs specifically adapted to stroke survivors' needs are therefore now recommended, and the evidence in this area of research is growing [33,34]. Furthermore, most of the structured self-management programs directed towards stroke survivors have been developed in English-speaking countries [34]; to date, no such program has existed in Italy.

Item 3. What: Description of Materials
The LAY intervention was set up to match stroke survivors' needs and context elements, taking into account the clinically relevant issues typical of the early rehabilitation phase and the features of the in-hospital rehabilitation setting and organization. The LAY intervention is an adaptation of the CDSMP to stroke patients and their caregivers, and its contents and overview are described in Table 2. The research group asked for and obtained a research license from Stanford University before drawing on the CDSMP contents.
In order to standardize the intervention and make it replicable, two manuals were developed: one for program leaders (physicians, physiotherapists, and nurses), which is a guide to conducting the group and individual sessions, and one for participants (stroke survivors and their caregivers), which describes the topics addressed during the sessions. The program leaders' manual was informed by both the original 2012 version and the Italian version of the CDSMP Leader's Manual, integrated with the standardized manual of the SSMP [22] and recommendations provided by Teresa Damush. The participants' manual was informed by different sources, such as Living a Healthy Life with Chronic Conditions [36] and the National Stroke Association materials [37]. This manual contains the action plan templates and an activity diary that the patient is meant to fill in individually. It also includes information on resources available in the community to support the social reintegration of individuals after hospital discharge. Both manuals are available in Italian on request from the authors.

Item 4. What: Procedures Followed
Stroke survivors and their caregivers were invited to participate in the LAY intervention within the first 1-2 weeks after discharge from the stroke unit, during the in-hospital rehabilitation phase. The program consists of six weekly group sessions for patients and their caregivers plus two one-to-one sessions (Figure 1). Participants who were discharged from the hospital before program completion attended the group sessions as outpatients. The group sessions shared a common schedule; session 0 differed from the others in that action plans were not discussed, because this tool was presented to the patient in the first individual session (Table 2).

Item 5. Providers
The group sessions were conducted by two healthcare professionals from the research group, which comprised 2 physicians, 2 physiotherapists, and 2 nurses (program leaders). The professionals were present in turn week after week.
Individual session 1 was conducted by one healthcare professional in turn, while individual session 2, which focused on fall prevention and balance exercises, was always conducted by a physiotherapist. Two levels of training were offered in the two hospitals where the experimental intervention was carried out. The first level was addressed to all the healthcare professionals working in the rehabilitation wards and focused on key self-management elements, communication skills, and practice in collaborative goal setting with stroke survivors and their caregivers. The second level of training was addressed to the program leaders of the research group. It focused on small-group management, a deepening of the contents of the basic training, and practice in leading group sessions with a focus group. Both levels of training were led by a psychologist and nationally certified expert trainer in the CDSMP (Master Trainer in 2009; T-Trainer in 2014) (DP), who also contributed to the Italian network supporting self-management in chronic conditions and participated in the two-year Diabetes Self-Management Program project funded by the Italian Ministry of Health. Furthermore, all the physiotherapists working in the two hospitals where the experimental intervention was carried out received specific training in patient education on accidental fall prevention, which was delivered by the two physiotherapists of the research group.

Item 6. LAY Intervention Delivery
Group Sessions
The group sessions lasted 1-1.5 h, were led by two program leaders, and were held in the early afternoon. In order to foster the participation of stroke survivors, who may experience attention deficits in their post-acute, early-rehabilitation phase, groups were composed of a maximum of ten participants (smaller than in the CDSMP). Group session 0 was the only one open to all stroke patients hospitalized for rehabilitation, not just to the participants of the LAY project. Sessions had to be attended in sequence because the topics addressed followed the changing condition of patients during the recovery process.
One-to-One Sessions
The two one-to-one sessions lasted about 20-30 min and were planned in the morning during the hospitalization period. The first session took place between group sessions 0 and 1, and it was led by a program leader who supported the participant in making the first action plan. The goal of the program was to empower participants to establish their own achievable action plans focused on relevant goals. The following action plans could be made by participants themselves or with the support of their caregiver or a trained healthcare professional. Participants were taught to prepare an action plan after each group session and to share their results at the following one. Action plans could be made in hospital, while patients were hospitalized, or at home afterwards. The second one-to-one session was led by a trained physiotherapist; it focused on accidental fall prevention and was always planned before hospital discharge and, when possible, before group session four.

Item 7. Where: Type of Location
Both group and individual sessions took place in hospital, in dedicated locations of the rehabilitation wards involved. Group sessions took place in a large room, such as the rehabilitation gym or a meeting room, depending on the number of participants. The rooms were equipped with chairs arranged in a semicircle, one personal computer, one projector, and one flipchart with markers.

Item 8. When and How Much
The timing, schedule, and delivery of the intervention have already been described in Items 4 and 6. A total of 32 group sessions and 112 individual sessions were delivered between June 2015 and March 2017.

Item 9. Tailoring
The LAY intervention is an adaptation of the CDSMP to post-acute stroke patients: topics, timing, format, and strategies to lead group and individual sessions were defined to match the specific needs of stroke survivors and their changes during the recovery process. The adaptation consisted of integrating the CDSMP with: (a) evidence-based key elements of self-management programs tested in stroke survivors; (b) inputs from 6 focus groups made up of 3 individuals recently discharged from rehabilitation after a stroke and their caregivers, who contributed suggestions regarding the relevance of the LAY contents and the feasibility of the sessions; (c) the expertise in stroke rehabilitation of the clinicians included in the research group; (d) inputs from Teresa Damush, the SSMP developer, who provided the research group with an overview of her program and useful recommendations on self-management delivery strategies for stroke survivors. The LAY intervention adaptation took into account the peculiarities of patients affected by stroke:
- During the first few weeks after the event, individuals need time to understand what has happened, so the program focuses on the development of coping and adaptation strategies from the very first rehabilitation phase.
- The duration of each group session was reduced compared to the CDSMP because, during the in-hospital phase, stroke survivors frequently require long periods of rest, as they experience a lack of energy, defined as post-stroke fatigue, which negatively impacts participation in activities [38,39].
- Furthermore, in the post-acute phase, stroke survivors may experience a reduced attention span, reduced memory capacity, and communication deficits; for these reasons, the CDSMP contents were simplified and individual sessions were introduced.
- Both the individual sessions and the action plan guarantee the tailoring of the intervention to each patient, because the individual session targeted at accidental fall prevention explored the patient's specific performance and context, and because the action plan trained individuals to identify their own significant goals and to solve their specific problems.
At the end of this adaptation course, the whole research group participated in the fine-tuning review process, and a final consensus on the LAY intervention was reached.

Item 10. Modifications during the Course of the Study
No changes to the planned intervention were made during the course of the study.

Item 11. How Well (Planned)
Patients' adherence to group and individual sessions was assessed by the research group by recording each patient's attendance at the sessions scheduled for him/her. In case of absence, participants were contacted and invited to the following session. These data have been previously reported [12].

Item 12. How Well (Actual)
Adherence was considered good if patients attended at least 6 of the 8 total sessions provided (both group and individual); adherence data have been previously reported [12].
Discussion
The LAY intervention is a structured self-management program directed towards stroke survivors and includes the five self-management skills described by Lorig and Holman: problem solving, decision making, appropriate resource utilization, partnership with healthcare professionals, and implementation of the actions necessary to manage health issues autonomously [40]. The mediator between self-management skills and proper self-management behaviours is self-efficacy, that is, the individual's confidence in his/her own ability to carry out specific, health-promoting behaviours [15]. Table 3 describes the LAY key elements addressing self-efficacy and self-management.

Table 3. Key elements of the LAY intervention.
- Mastery experiences: breaking the task into smaller, achievable components to achieve a positive result in a task or skill (LAY tool: the weekly realistic action plan).
- Vicarious experiences: observing someone perceived to be a peer (model) successfully performing a task, i.e., learning from others' experiences of the post-stroke recovery period.
- Appropriate resource utilization: giving information to facilitate knowledge of, and access to, community resources (LAY tools: oral information in the group sessions and written information in the participants' manual).
- Partnership with healthcare professionals: training in how to ask for help, and training patients' ability to communicate and collaborate in the group sessions.
- Taking necessary actions: the action plan as an instrument to focus on achievable goals, with training in action planning every week for 6 weeks.

The LAY intervention, an adaptation of the CDSMP, targets stroke survivors and is delivered from the early, post-acute phase of rehabilitation. The main adaptations made to meet stroke survivors' needs consist of the integration of stroke-specific themes with issues common to other chronic diseases, the delivery of simplified contents in a concise manner to a limited number of participants, and the introduction of one-to-one sessions and action plans to personalize the intervention to the clinical features of each participant. These adaptations were made to suit the LAY intervention to the early post-acute phase of stroke, when individuals are still facing subacute problems and may present a limited attention span, communication impairment, and fatigue. This requires short group sessions, plain language, and flexibility. Compared to large groups, small groups allow program leaders to pay more careful attention to each participant and allow in-depth interaction also with participants with physical and emotional frailty. The action plan, which is the core of the LAY intervention, is a simple tool to enhance problem-solving and goal-setting skills, and it can be used both in hospital and in the community. The LAY intervention was set up to be delivered in the first few weeks after stroke to facilitate the critical transition from hospital to community, providing individuals with skills to control their global health condition by self-managing disease symptoms and risk factors, the emotional consequences of illness, and role management (how to maintain a previous social and family role or how to create a new one). As stroke can result in permanent disability, which can lead to social isolation, the LAY intervention incorporates information regarding locally available resources offered by the community or by patient associations (transportation, events, sports facilities, etc.).
Replicating such an intervention requires consideration of the specific context and adaptation of only the chapter that describes community resources, in order to tailor the information to the local situation. A particular focus is on the training required to deliver the intervention: our research group was supported by a psychologist and nationally certified expert trainer in the CDSMP, who planned two levels of training, the first addressed to all the health professionals of the rehabilitation wards involved in the project and the second designed for the leaders of the LAY sessions. A lesson we learned is that a great cultural change is needed for healthcare professionals to let patients take their health into their own hands; not all healthcare professionals are ready to share responsibility and power with patients. Stroke rehabilitation teams still work mainly on therapy-led, multidisciplinary goal setting: physiotherapists and nurses identify the problems, define the goals, assess whether they have been achieved, and decide how the patients should progress [41]. As a result, clinicians only partially meet the recommendations in national clinical guidelines. Collaborative goal setting is a central element in rehabilitation, but it requires skills and training. This is a major challenge for the future if we believe in patient engagement.

Limitations of the Intervention
Despite the simplification of the original program, this kind of intervention might still be difficult for stroke patients with severe aphasia or cognitive impairment. This subgroup of patients is often excluded from research studies on innovative approaches because of comprehension and/or communication barriers. Therefore, as for other similar interventions, the generalizability of the LAY intervention cannot be immediately extended to the whole population with stroke. The fact that the LAY intervention is conducted in the hospital setting facilitates the participation of frail patients in the very early stages of their recovery after stroke, even though a certain degree of flexibility is always recommended to adapt to unstable clinical conditions and the need for intensive care. However, patient participation might be hindered after hospital discharge, as residual limitations in autonomy and mobility could, by themselves, prevent participation in the sessions. Moreover, the LAY intervention was designed to be almost completely delivered by healthcare professionals, with little peer-leader representation. The research group involved patient associations in the development of the LAY intervention, but their active role in the provision of the intervention was limited to the last group session, when community services were presented. Introducing a peer-educator within the program might provide ongoing self-management support, improve the patients' level of self-efficacy, and assist patients in dealing with the emotional components associated with their chronic condition [42].

Strengths of the Intervention
A strength of the LAY intervention is its mixed format, which includes both one-to-one and group sessions. One-to-one sessions allow participants to learn how to use an action plan to plan actions focused on health goals. The use of the action plan to set clinically relevant and realistic goals is reinforced during the group sessions.
Furthermore, the second one-to-one session, on fall prevention, is led by the physiotherapist who actually follows the patient during clinical rehabilitation; this makes it possible to personalize the information and to teach appropriate balance and resistance exercises to the individual patient. Concerning the group sessions, their repetitive structure helps patients to consolidate the main principles and topics of the program. Group sessions also have an important role in peer support, for example, in discussing successful/unsuccessful action plans within the group and in sharing ideas and advice for the next action to be planned. As highlighted by a recent systematic review [43], increasing knowledge, effective collaboration and/or communication, accessing resources, goal setting and problem solving, and peer support are common key features of self-management interventions, and they are all present in the LAY intervention. In particular, peer support among stroke survivors facilitates the sharing of experiences, social comparison, and vicarious learning, and it increases motivation. Vicarious learning, in turn, influences self-perception of one's own ability to self-manage stroke outcomes [43]. Another strength of the LAY intervention is the early timing of delivery after stroke; the program matches the great need for information that stroke survivors and their caregivers report from the very first weeks after the stroke. A recent systematic meta-review [44] confirms that self-management interventions in stroke survivors, delivered soon after the event (<1 year), reduce patients' dependence/institutional care or death, are beneficial to activities of daily living, and seem to facilitate reintegration into the community. A positive aspect of the LAY intervention is the inclusion of caregivers in the group sessions as facilitators of patients' self-management behaviors. We consider the presence of caregivers to be of great value for their role in providing the family with assistance and support, as stroke is a life-changing event that often causes long-term disability. Finally, the LAY intervention has been completely described using the TIDieR guide [13], which allows for reliable implementation, potential replication in similar contexts, and adaptation to different ones.

Conclusions
The LAY intervention is the first structured Italian program to support self-management in stroke survivors in their early rehabilitation phase. Since self-management is strongly related to recovery after stroke [45], the LAY intervention could support the critical transition from hospital to community in the stroke survivors' care pathway. The results of the LAY project, including patients' adherence to the program, changes in self-efficacy, modifications in activities of daily living, quality of life, resource utilization, and other outcome measures, support the implementation of structured self-management interventions in the rehabilitation process of stroke survivors [12]. In this line of research, more insight is needed to explore the barriers to and opportunities for delivering self-management interventions in post-acute stroke settings. Furthermore, investigations should assess the feasibility and efficacy of self-management interventions across secondary, primary, and community settings.
Funding: This work was funded by the Emilia-Romagna Region (Italy) ["Progetto-Regione Università" -Research for Clinical Governance-2013], for the project: "Patient Therapeutic Education (PTE) in the rehabilitation process of stroke patients: improving Self-Management and fostering transition from hospital to community". The Emilia-Romagna Region had no role in the definition of the study design, in the collection, analysis, and interpretation of data, in the writing of the report, or in the decision to submit the article for publication. The authors had full access to all of the data in this study and take complete responsibility for them.
γ-Alumina-supported Pt17 cluster: controlled loading, geometrical structure, and size-specific catalytic activity for carbon monoxide and propylene oxidation

Although Pt is extensively used as a catalyst to purify automotive exhaust gas, it is desirable to reduce Pt consumption through size reduction because Pt is a rare element and an expensive noble metal. In this study, we successfully loaded a Pt17 cluster on γ-alumina (γ-Al2O3) (Pt17/γ-Al2O3) using [Pt17(CO)12(PPh3)8]Cln (n = 1, 2) as a precursor. In addition, we demonstrated that Pt is not present in the form of an oxide in Pt17/γ-Al2O3 but instead has a framework structure as a metal cluster. Moreover, we revealed that Pt17/γ-Al2O3 exhibits higher catalytic activity for carbon monoxide and propylene oxidation than γ-Al2O3-supported larger Pt nanoparticles (PtNP/γ-Al2O3) prepared using the conventional impregnation method. Recently, our group discovered a simple method for synthesizing the precursor [Pt17(CO)12(PPh3)8]Cln. Furthermore, Pt17 is a Pt cluster within the size range associated with high catalytic activity. By combining our established synthesis and loading methods, other groups can conduct further research on Pt17/γ-Al2O3 to explore its catalytic activities in greater depth.

Introduction
With the rapid advance in science and technology, automobiles have become indispensable in our daily lives. Because Pt can catalytically eliminate harmful substances contained in exhaust gas, this metal, along with rhodium and palladium, is extensively used to treat exhaust gas. 1 However, Pt is a rare element and an expensive precious metal. Therefore, it is essential to reduce the amount of Pt consumed. Many attempts have been made to develop catalysts without Pt. However, previous studies have implied that the activity and durability of Pt are superior to those of non-precious metals. To reduce the amount of Pt consumed while taking advantage of its characteristic features, it is essential to improve the activity and performance per unit weight of the catalyst. The size reduction of Pt nanoparticles/clusters (hereinafter: Ptn clusters) increases the proportion of surface atoms 2,3 and enables the creation of new geometrical/electronic structures; 4−10 thus, this approach can efficiently reduce Pt consumption. 11−13 On the other hand, the geometrical/electronic structures and chemical properties of Ptn clusters in the fine size range vary considerably depending on the number of constituent atoms. 14 Therefore, it is important to load Ptn clusters with a controlled number of constituent atoms on a substrate in order to create a highly active supported Pt catalyst using fine Ptn clusters while elucidating the catalytic activity and performance of the clusters. Using a vacuum apparatus with a mass selector, 2,6,10,15−19 it is possible to load size-controlled Ptn clusters onto a substrate. In fact, magnesium-oxide-supported Ptn clusters (Ptn/MgO; n = 8-20) and titanium-dioxide-supported Ptn clusters (Ptn/TiO2; n = 4, 7-10, 15) have been prepared with precisely controlled numbers of Pt atoms using these types of experiments. These studies also revealed that fine supported Ptn clusters exhibit high catalytic activity for the oxidation of carbon monoxide (CO). 2,19 However, for the practical use of supported Pt catalysts, issues remain regarding the device manufacturing costs and loading efficiency when supported Ptn clusters are prepared using such vacuum equipment.
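To make the size argument concrete, the following sketch (our own illustration, not from the paper) estimates the surface-atom fraction of closed-shell clusters: a cuboctahedral or icosahedral cluster with k closed shells contains N(k) = (10k³ + 15k² + 11k + 3)/3 atoms in total, of which S(k) = 10k² + 2 lie on the surface.

```python
def total_atoms(k: int) -> int:
    """Atoms in a closed-shell (magic-number) cuboctahedron/icosahedron."""
    return (10 * k**3 + 15 * k**2 + 11 * k + 3) // 3

def surface_atoms(k: int) -> int:
    """Atoms in the outermost shell."""
    return 10 * k**2 + 2

for k in range(1, 6):
    n, s = total_atoms(k), surface_atoms(k)
    print(f"{k} shells: {n:4d} atoms, surface fraction = {s / n:.2f}")
# 1 shells:   13 atoms, surface fraction = 0.92
# 2 shells:   55 atoms, surface fraction = 0.76
# 5 shells:  561 atoms, surface fraction = 0.45
```

For a 17-atom cluster, essentially every atom is at or near the surface, which is the intuition behind the consumption-reduction argument.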
Recently, it has become possible to precisely synthesize various noble-metal and noble-metal-alloy clusters with atomic accuracy. 20−49 Ptn clusters can be synthesized with atomic accuracy using CO as a ligand or using two types of ligands, CO and phosphine. 48 In addition, Ptn clusters can be precisely synthesized using special dendrimers as templates. 50,51 When these Ptn clusters are adsorbed on a substrate and the ligands are subsequently removed, Ptn clusters with a controlled number of constituent atoms can be loaded on the substrate without the issues of device construction cost and loading efficiency (Figure 1). However, there are currently few examples of the controlled loading of Ptn clusters on a substrate using this approach. For the synthesis of the former, ligand-protected Ptn clusters, a reaction under a CO atmosphere is essential. For the synthesis of the latter, dendrimer-protected Ptn clusters, a special dendrimer synthesis technique is needed. Therefore, few research groups are capable of conducting these precise syntheses, and fundamental and applied research on fine supported Ptn clusters is currently limited. We recently discovered a very simple method for synthesizing Pt17 clusters protected with CO and triphenylphosphine (PPh3) ([Pt17(CO)12(PPh3)8]Cln; n = 1, 2; Figure 2(a)). 52 In our synthesis method, first, the Ptn(CO)m(PPh3)l clusters, which mainly consist of [Pt17(CO)12(PPh3)8]Cln, are prepared by mixing the reagents and heating the solvent in the atmosphere. Then, the main product, [Pt17(CO)12(PPh3)8]Cln, is separated from the obtained mixture with high purity using differences in solubility. This method does not require special synthesis equipment or dendrimer synthesis techniques. If a method for loading the Pt17 cluster using [Pt17(CO)12(PPh3)8]Cln as a precursor could be established, many research groups would be able to obtain fine supported Pt17 catalysts. In this study, the following three goals were addressed with the final objective of using supported Ptn clusters as catalysts to treat automotive exhaust gas: i) establishment of a precise method for loading Pt17 on γ-alumina (γ-Al2O3); ii) structural analysis of the obtained Pt17/γ-Al2O3; and iii) evaluation of the catalytic activity of Pt17/γ-Al2O3 in the oxidation of CO and propylene (C3H6). As a result, we successfully determined the conditions for loading Pt17 on γ-Al2O3 while preserving the size of Pt17 (Figure 1). We observed that the supported Pt17 is not present in the form of an oxide 53 but has a framework structure as a metal cluster in the obtained Pt17/γ-Al2O3. Furthermore, Pt17/γ-Al2O3 exhibited higher catalytic activity in the oxidation of CO and C3H6 than γ-Al2O3-supported larger platinum nanoparticles (PtNP/γ-Al2O3) prepared using the conventional impregnation method.

Results and Discussion
Loading of the Pt17 Cluster on γ-Al2O3
[Pt17(CO)12(PPh3)8]Cln (Figure 2(a)) was synthesized using our previously reported method (see experimental section). 52 In this method, Ptx(CO)y(PPh3)z clusters containing both CO and PPh3 were synthesized by mixing the reagents and heating the solvent in the atmosphere. Because CO, which is one of the ligands, is generated by the oxidation of ethylene glycol, 54 this method does not require equipment for preparing a CO atmosphere. Specifically, an ethylene glycol solution containing a Pt salt (H2PtCl6) and sodium hydroxide (NaOH) was heated at 120 °C in the atmosphere, and then PPh3 was added at room temperature to obtain the Ptx(CO)y(PPh3)z clusters.
[Pt17(CO)12(PPh3)8]Cln was separated from the obtained mixture using differences in solubility in the solvent (see experimental section; Scheme S1). The electrospray ionization (ESI) (Figure 2(b)) and matrix-assisted laser desorption/ionization (MALDI) (Figure S1) mass spectra of the product indicate that the product contained high-purity [Pt17(CO)12(PPh3)8]Cln. The transmission electron microscopy (TEM) images (Figure 2(c)) and high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images (Figure 2(d)) of the product were also consistent with the results of mass spectrometry (Figures 2(b) and S1). The resulting [Pt17(CO)12(PPh3)8]Cln was first adsorbed onto γ-Al2O3 (Figure 1(b)). In an aprotic solvent, a metal oxide has a permanent dipole moment on its surface. 55 As reported by Tsukuda et al., when this surface comes into contact with a metal cluster containing a functional group with a high dielectric constant (e.g., a phenyl group) in the ligand, an induced dipole moment is generated in the ligand layer, and the metal clusters are adsorbed on the surface of the metal oxide via a dipole-induced dipole interaction. 56 In the [Pt17(CO)12(PPh3)8]Cln used in this study, the ligand layer contained a large number of phenyl groups, and [Pt17(CO)12(PPh3)8]Cln was thus adsorbed onto γ-Al2O3 via a dipole-induced dipole interaction. Dichloromethane was used as the aprotic solvent. The concentration of [Pt17(CO)12(PPh3)8]Cln in solution was carefully controlled by inductively coupled plasma mass spectrometry (ICP-MS) such that the weight of Pt17 was 0.15 wt% relative to γ-Al2O3. The solution changed from brown to colorless and transparent after 2 h of stirring, indicating that practically all of the [Pt17(CO)12(PPh3)8]Cln was adsorbed onto γ-Al2O3 (Pt17(CO)12(PPh3)8/γ-Al2O3). The particle diameter of the Pt17(CO)12(PPh3)8/γ-Al2O3 obtained using this approach was estimated by HAADF-STEM measurement. In the HAADF-STEM image in Figure 3(a), fine particles with a narrow size distribution (0.94 ± 0.16 nm on average) were observed. Next, the ligands were removed from the adsorbed clusters by calcination (Figure 1(c)). Based on thermogravimetric mass spectrometry (TG-MS) analysis, a temperature of approximately 400 °C is required for PPh3 removal (Figure S4). 62 Accordingly, PPh3 was removed from the Pt17 cluster by calcination at 500 °C. In the diffuse reflectance (DR) spectra of the sample after calcination (Figure 4(c)), the peak structure seen in the spectra of [Pt17(CO)12(PPh3)8]Cln (Figure 4(a)) and Pt17(CO)12(PPh3)8/γ-Al2O3 (Figure 4(b)) was not observed. In the X-ray photoelectron spectrum after calcination (Figure S5), the P 2p peak was not observed. In the HAADF-STEM image of the sample after calcination (Figure 3(c)), particles (1.07 ± 0.24 nm) with sizes similar to those of Pt17(CO)12(PPh3)8/γ-Al2O3 (0.94 ± 0.16 nm; Figure 3(a)) were observed with a narrow distribution (Figure S6). These results indicate that the PPh3 ligands were removed from the cluster by calcination and that Pt17 did not aggregate during this process. Pt forms a relatively strong bond with O compared with other noble metals (318.4 ± 6.7 kJ/mol for Pt-O vs. 223 ± 21.1 kJ/mol for Au-O). 63 Furthermore, as γ-Al2O3 has a complicated structure in which Al atoms are arranged octahedrally or tetrahedrally, cationic sites are present because of the surface defects in γ-Al2O3. 64 Pt clusters could be strongly immobilized on γ-Al2O3 by the interaction between Pt atoms and these cationic sites. 53 For these reasons, it is considered that Pt17 did not aggregate on γ-Al2O3 during calcination.
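As a back-of-the-envelope check (our own illustration; approximate molar masses are used, and the n = 1 chloride is assumed), the Pt weight fraction of the precursor fixes how much [Pt17(CO)12(PPh3)8]Cl must be adsorbed per gram of support to reach the 0.15 wt% Pt target:

```python
# Approximate molar masses in g/mol
M_PT, M_CO, M_PPH3, M_CL = 195.08, 28.01, 262.29, 35.45

m_cluster = 17 * M_PT + 12 * M_CO + 8 * M_PPH3 + 1 * M_CL  # [Pt17(CO)12(PPh3)8]Cl
pt_fraction = 17 * M_PT / m_cluster                         # Pt weight fraction

support_g = 1.0                    # 1 g of gamma-Al2O3
pt_target_g = 0.0015 * support_g   # 0.15 wt% Pt relative to the support
precursor_mg = 1e3 * pt_target_g / pt_fraction

print(f"Pt weight fraction of precursor: {pt_fraction:.2f}")     # ~0.57
print(f"precursor per gram of support:  {precursor_mg:.1f} mg")  # ~2.6 mg
```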
To confirm the weight of Pt loaded on γ-Al2O3, Pt17/γ-Al2O3 was mixed with aqua regia, and the amount of dissolved Pt was measured using ICP optical emission spectroscopy (ICP-OES). The results confirmed that 0.15 wt% Pt was actually loaded on γ-Al2O3. The results of temperature-programmed reaction measurements indicate that the surface of the supported Pt17 was covered by CO at normal temperature (Figures 1(d), S7, and S8).

Structural Analysis of Pt17/γ-Al2O3
To gain a deeper understanding of the obtained Pt17/γ-Al2O3, the charge state and geometry of the Pt17 cluster were investigated using X-ray absorption fine structure (XAFS) analysis. The Pt L3-edge X-ray absorption near-edge structure (XANES) spectra of [Pt17(CO)12(PPh3)8]Cln, Pt17(CO)12(PPh3)8/γ-Al2O3, and Pt17/γ-Al2O3 are shown in Figure 5(a) together with those of Pt foil and PtO2 for comparison. The white-line intensities of [Pt17(CO)12(PPh3)8]Cln, Pt17(CO)12(PPh3)8/γ-Al2O3, and Pt17/γ-Al2O3 are similar to that of Pt foil and very different from that of PtO2. This result indicates that Pt is not present as an oxide in Pt17. 53 Among the three samples, the white-line intensity increases in the order [Pt17(CO)12(PPh3)8]Cln → Pt17(CO)12(PPh3)8/γ-Al2O3 → Pt17/γ-Al2O3. This result indicates that the number of holes in the d orbital of Pt17 increases, namely, that the electron density of Pt17 decreases, in this order. Figure 5(b) shows the Pt L3-edge Fourier-transform extended X-ray absorption fine structure (FT-EXAFS) spectra of [Pt17(CO)12(PPh3)8]Cln, Pt17(CO)12(PPh3)8/γ-Al2O3, and Pt17/γ-Al2O3 (Tables S1-S3 and Figure S9). In the FT-EXAFS spectrum of [Pt17(CO)12(PPh3)8]Cln, the peaks attributed to the Pt-C and Pt-P bonds appear at ~1.7 and ~2.3 Å, respectively. For Pt17(CO)12(PPh3)8/γ-Al2O3, the intensity of the peak at ~1.7 Å increased and that at ~2.3 Å decreased, and the peak attributed to the Pt-Pt bond appeared at ~2.8 Å. As described above, there is no significant difference in the optical absorption between [Pt17(CO)12(PPh3)8]Cln and Pt17(CO)12(PPh3)8/γ-Al2O3 (Figure 4(a)(b)). Therefore, it is assumed that the Pt17 cluster maintains its metal-core structure as a whole even during adsorption (Figure 3(a)(b)). However, the FT-EXAFS spectrum indicates that the adsorption causes a slight change in the structure of the ligand layer that covers Pt17. A plausible explanation for the appearance of the Pt-Pt peak is that adsorption on the substrate reduces the variation in the Pt-Pt bond length (Figure S10) or the fluctuation of the Pt-Pt bond. 65 The decrease in the electron density of the d orbital of Pt17 (Figure 5(a)) caused by adsorption can also likely be attributed to the structural change of the ligand layer. In the spectrum of Pt17/γ-Al2O3 after calcination, a peak at ~2.8 Å clearly appears, and its satellite peak (as in the FT-EXAFS spectrum of the Pt foil in Figure 5(b)) is also observed at ~2.3 Å. 66 This result indicates that the variation in the Pt-Pt bond length and/or the fluctuation of the Pt-Pt bond further decreases with PPh3 removal and/or with the structural change of the Pt17 cluster from the icosahedral-based structure (Figure 2(a)) to the structure shown in Figure 5(c)(d) (see below). In this spectrum, a peak was also observed at ~1.7 Å. As described above, the surface of the supported Pt17 is covered by CO at normal temperature. The peak at ~1.7 Å is attributed to the Pt-C bond of this adsorbed CO or to the Pt-O bond generated at the Pt17/γ-Al2O3 interface.
Thus, it was observed that Pt does not form an oxide 53 and that Pt17 has a framework structure as a metal cluster in Pt17/γ-Al2O3. Based on the HAADF-STEM image, the supported Pt17 is assumed to have a bi-layer 2 or tri-layer structure, as shown in Figure 5(c)(d) and Figure S11. Previous studies have suggested that, during the oxidation reaction of CO, CO and O2 are activated on terrace Pt and step Pt, respectively. 2,3,18 Figure 5(c)(d) shows that most of the terrace Pt is located near the step Pt in Pt17/γ-Al2O3. Thus, the reaction of CO and O2, i.e., the oxidation of CO, is expected to proceed effectively over Pt17/γ-Al2O3.

Catalytic Activity of Pt17/γ-Al2O3 in the Oxidation of CO and C3H6
We examined the catalytic ability of the Pt17/γ-Al2O3 obtained using the previously described approach. Industrially used supported Pt catalysts are frequently prepared by the impregnation method. Therefore, in this study, PtNP/γ-Al2O3, in which 0.15 wt% Pt was loaded by the impregnation method, was used as a comparative Pt catalyst. The amount of Pt was confirmed by mixing PtNP/γ-Al2O3 with aqua regia and measuring the concentration of dissolved Pt using ICP-OES. The HAADF-STEM image in Figure 6 indicates that PtNP/γ-Al2O3 had an average particle size of 3.10 ± 3.14 nm. The obtained Pt17/γ-Al2O3 and PtNP/γ-Al2O3 were examined for their catalytic ability in the oxidation of CO and C3H6, which are among the main components of automobile exhaust gas. 1 In an actual automobile, the catalysts are coated on a honeycomb substrate made of cordierite ceramic. Thus, in this study, Pt17/γ-Al2O3 and PtNP/γ-Al2O3 were coated on a honeycomb substrate to evaluate their catalytic performance in a state similar to actual vehicle mounting conditions (Scheme S2).

CO Oxidation Reaction
An engine exhaust-gas-measuring device was used to determine the catalytic activity. In the experiments, a gas mixture consisting of 1% CO, 0.5% O2, and 98.5% N2 was circulated over the honeycomb substrate at a space velocity of 50000 L/h while increasing the temperature of the honeycomb substrate to 400 °C at a rate of 20 °C/min (Table S4). The conversion ratio of CO over the catalyst was estimated by evaluating the components of the mixed gas before and after circulation using the exhaust gas analyzer (Scheme 1). Figure 7(a) shows the CO conversion for each catalyst (Pt17/γ-Al2O3 or PtNP/γ-Al2O3) estimated using this approach. When PtNP/γ-Al2O3 was used as the catalyst, the catalytic activity started to appear at approximately 270 °C, and the conversion reached 50% at approximately 350 °C (light-off temperature) and nearly 100% at approximately 370 °C. In contrast, when Pt17/γ-Al2O3 was used, the catalytic activity started to appear at approximately 240 °C, and the conversion reached 50% at approximately 330 °C and almost 100% at approximately 350 °C. These results indicate that Pt17/γ-Al2O3 exhibits higher catalytic ability at each temperature than PtNP/γ-Al2O3 and thus that Pt17/γ-Al2O3 can treat CO at lower temperatures than PtNP/γ-Al2O3. With the spread of vehicles that frequently stop and restart their engines (e.g., hybrid vehicles), enhancing the activity of exhaust-gas-treating catalysts at low temperatures is an issue that must currently be overcome. 1 These results strongly suggest that Pt17/γ-Al2O3 could be used as an exhaust-gas-treating catalyst to overcome this issue.
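The light-off temperature quoted above (the temperature of 50% conversion) is read off the measured conversion-versus-temperature curve. The sketch below (our own illustration, with made-up sample points shaped like the Pt17/γ-Al2O3 curve in Figure 7(a)) shows the usual linear-interpolation estimate:

```python
import numpy as np

T = np.array([240.0, 270.0, 300.0, 330.0, 350.0])  # degrees C (hypothetical samples)
conv = np.array([0.0, 5.0, 20.0, 50.0, 100.0])     # CO conversion in %

def light_off(temperatures, conversions, level=50.0):
    """Interpolate the temperature at which conversion crosses `level` percent."""
    return float(np.interp(level, conversions, temperatures))

print(f"T50 = {light_off(T, conv):.0f} C")  # 330 C for this toy curve
```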
The higher activity of Pt17/γ-Al2O3 compared with PtNP/γ-Al2O3 is considered to be associated with their respective geometrical structures. 67 Although the geometrical structures of Pt17/γ-Al2O3 and PtNP/γ-Al2O3 before the reaction experiments are shown in Figures 5 and 6, these geometrical structures should change as the catalytic reaction progresses and have not yet been elucidated. 68 However, there should be more combinations of terrace Pt and step Pt in Pt17/γ-Al2O3 than in PtNP/γ-Al2O3 (Figure 5(c)(d)). These geometrical effects appear to make the reaction between CO and O2 more likely to occur in Pt17/γ-Al2O3, resulting in the higher CO conversion of Pt17/γ-Al2O3 at any temperature. In addition, Pt17 in Pt17/γ-Al2O3 should be more susceptible to fluctuations of the geometrical/electronic structure than PtNP in PtNP/γ-Al2O3. The ease of fluctuation of the geometrical/electronic structure may also contribute to the high activity of Pt17/γ-Al2O3. 9 Furthermore, CO adsorbed on fine supported Ptn clusters generally has a longer C-O bond than CO adsorbed on larger supported Ptn nanoparticles, which promotes the oxidation reaction. 69 In addition to the geometric factors, it is assumed that such a difference in CO activation, caused by the difference in electronic states between the two supported catalysts, also contributes to the high activity of Pt17/γ-Al2O3.

C3H6 Oxidation Reaction
In this experiment, a mixed gas containing 200 ppm C3H6, 0.5% O2, and ~99.5% N2 was circulated at a space velocity of 50000 L/h while increasing the temperature of the honeycomb substrate to 400 °C at a rate of 20 °C/min (Table S4). The conversion ratio of C3H6 over the catalyst was estimated by evaluating the components of the mixed gas before and after circulation using an exhaust gas analyzer (Scheme 1). Figure 7(b) shows the C3H6 conversion for each catalyst (Pt17/γ-Al2O3 or PtNP/γ-Al2O3) estimated using this approach. When PtNP/γ-Al2O3 was used as a catalyst, the catalytic activity started to appear at approximately 160 °C, and the conversion reached 50% at approximately 245 °C and nearly 100% at approximately 260 °C. In contrast, when Pt17/γ-Al2O3 was used, the catalytic activity started to appear at approximately 130 °C, and the conversion reached 50% at approximately 225 °C and nearly 100% at approximately 250 °C. These results indicate that Pt17/γ-Al2O3 exhibits higher catalytic ability at each temperature than PtNP/γ-Al2O3 for oxidizing C3H6. Currently, the mechanism of C3H6 oxidation is not as well understood as that of CO oxidation. 70 Therefore, it is difficult to discuss the origin of the difference between the two activities. However, there should be a large difference between Pt17/γ-Al2O3 and PtNP/γ-Al2O3 in the number of surface Pt atoms that can participate in the reaction. It appears that this factor is responsible for the difference in activity of the two types of catalysts.

Durability
We also investigated the durability of Pt17/γ-Al2O3 and PtNP/γ-Al2O3. In this experiment, the catalysts were experimentally aged to simulate the deteriorated state of the catalysts caused by engine operation of an automobile. First, the honeycomb substrate was exposed to an oxidizing atmosphere (a gas mixture of 3% O2, 10% water vapor (H2O), and 87% N2) at 1000 °C for 3 min. Then, the honeycomb substrate was exposed to a reducing atmosphere (a gas mixture of 3% H2, 3% CO, 10% H2O, and 84% N2) at 1000 °C for 3 min (Table S5). These operations were repeated for 4 h. The CO or C3H6 conversion of Pt17/γ-Al2O3 and PtNP/γ-Al2O3 was then estimated using the method described above (Scheme 1).
Figure 8(a) shows the CO conversion of each catalyst (Pt17/γ-Al2O3 or PtNP/γ-Al2O3) after the aging treatment. The CO conversion decreased significantly for both catalysts compared with that before the aging treatment (Figure 7(a)). A similar phenomenon was observed for the C3H6 conversion. These results indicate that the aging procedure described above degrades the performance of both catalysts. However, comparing the conversions over Pt17/γ-Al2O3 and PtNP/γ-Al2O3, Pt17/γ-Al2O3 gave higher conversion than PtNP/γ-Al2O3 for both reactions. This result indicates that Pt17/γ-Al2O3 exhibits higher activity than PtNP/γ-Al2O3 even after the aging treatment. The decrease in activity after aging is generally induced by aggregation of the supported Pt catalyst. 12,71 In fact, aggregation of the Pt catalyst was observed after the aging treatment for both Pt17/γ-Al2O3 and PtNP/γ-Al2O3 (Figure S12). However, the average particle sizes of Pt17/γ-Al2O3 and PtNP/γ-Al2O3 after the aging treatment were 25.3 ± 19.4 and 77.5 ± 29.9 nm, respectively. Thus, the average particle size of the former was smaller than that of the latter even after the aging treatment. It is considered that, because the original Pt17/γ-Al2O3 had a smaller particle size than the original PtNP/γ-Al2O3, Pt17/γ-Al2O3 also had a smaller average particle size after aggregation, resulting in its higher activity even after the aging treatment.

Conclusions
In this study, we successfully developed a method for producing Pt17/γ-Al2O3 using [Pt17(CO)12(PPh3)8]Cln as a precursor. Characterization of the obtained Pt17/γ-Al2O3 revealed that Pt17 is not present in the form of an oxide but has a framework structure as a metal cluster. Furthermore, it was determined that Pt17/γ-Al2O3 exhibits better catalytic ability for CO and C3H6 oxidation, as well as better durability, than PtNP/γ-Al2O3 prepared using the conventional impregnation method. The precursor [Pt17(CO)12(PPh3)8]Cln can be isolated with atomic precision simply by mixing the reagents, heating the solvent in the atmosphere, and performing a simple separation. The supported Pt17 is a Ptn cluster within the size range associated with high catalytic activity. 18 It is expected that, by using the loading method established in this study, many research groups can conduct further investigations on Pt17/γ-Al2O3 to obtain a deeper understanding of this catalyst and find new ways of using Pt17/γ-Al2O3 and Pt17 supported on other oxides. However, for practical use of the catalyst, it is necessary to investigate the catalytic activity and durability under actual operating conditions in terms of the loading amount and exhaust gas mixing ratio. 18,72−74 In addition, the loading weight needs to be increased to that used under actual operating conditions (Figure S13). We are currently attempting measurements under such conditions through collaborations between academia and industry.

Synthesis of [Pt17(CO)12(PPh3)8]Cln
[Pt17(CO)12(PPh3)8]Cln was synthesized using the method reported in our previous paper 52 with a slight modification (Scheme S1). First, H2PtCl6·6H2O (0.10 mmol) and NaOH (2.2 mmol) were dissolved in ethylene glycol (25 mL). NaOH was used to control the pH of the solution and thereby suppress the particle size obtained by the polyol reduction. 54,75 Then, the mixture was heated at 120 °C for 10 min to reduce the Pt ions and produce CO, catalyzed by the Pt ions. The color of the solution changed from yellow to dark brown.
After cooling to room temperature (25 °C), acetone (10 mL) containing PPh3 (0.52 g, 2.0 mmol) was added to this solution at once. After several minutes, toluene (~20 mL) and water (~20 mL) were added to the reaction solution. The Pt clusters, including Pt17(CO)12(PPh3)8, were transferred into the organic phase. Then, the organic phase was separated from the water phase and dried with a rotary evaporator. The dried product was washed with water and then methanol to eliminate ethylene glycol and excess PPh3. At this stage, the product was still a mixture of clusters of several sizes. The product was dried, and the by-products were then washed away with a mixture of acetonitrile/toluene (1:1). The purified [Pt17(CO)12(PPh3)8]Cln was then dissolved in dichloromethane and mixed with γ-Al2O3 at 0.15 wt% Pt. The amount of Pt in the solution was confirmed by ICP-MS analysis of the supernatant solution. After mixing for 2 h, the solution became colorless, which indicates that almost all of the [Pt17(CO)12(PPh3)8]Cln was adsorbed on γ-Al2O3. After adsorption, the obtained Pt17(CO)12(PPh3)8/γ-Al2O3 was calcined under reduced pressure (>1.0 × 10−1 Pa) to remove the PPh3 ligands (Figure S4). The furnace temperature was increased at a rate of 5 °C/min and then maintained at 500 °C for 20 min.

Monolithic Honeycomb Catalyst
Before the catalytic activity tests, monolithic honeycomb catalysts were prepared by coating a slurry, prepared from the Pt catalyst powder (Pt17/γ-Al2O3 or PtNP/γ-Al2O3), an inorganic binder, and water, onto a cordierite honeycomb substrate, followed by drying at 120 °C for 1 h and subsequent calcination at 500 °C for 2 h (Scheme S2). The calcined honeycomb catalysts contained approximately 60 g/L of the coated catalyst powders.

Characterization
ESI mass spectrometry was performed using a reflectron time-of-flight mass spectrometer (Bruker, micrOTOF II). In these measurements, a cluster solution with a concentration of ~10 μg/mL in dichloromethane was electrosprayed at a flow rate of 180 μL/h. MALDI mass spectra were collected using a spiral time-of-flight mass spectrometer (JEOL, JMS-S3000) with a semiconductor laser. DCTB 75 was used as the MALDI matrix (cluster:matrix = 1:1000). TEM images were recorded with a JEM-2100 electron microscope (JEOL) operating at 200 kV, typically using a magnification of 600,000×. HAADF-STEM images were recorded using a JEOL ARM200CFE fitted with an aberration corrector. The catalyst powders of Pt17/γ-Al2O3 or PtNP/γ-Al2O3 were ground between two glass slides and dusted onto a holey-carbon-coated Cu TEM grid. Diffuse reflectance (DR) spectra were acquired at ambient temperature using a V-670 spectrometer (JASCO). The wavelength-dependent optical data I(w) were converted to energy-dependent data I(E) using the following equation, which conserves the integrated spectral areas: I(E) = I(w) × w²/hc, where h is Planck's constant and c is the speed of light. ICP-MS was performed using an Agilent 7500c spectrometer (Agilent Technologies, Tokyo, Japan). Bi was used as the internal standard. The ICP-MS measurements were performed on the supernatant obtained after mixing [Pt17(CO)12(PPh3)8]Cln with γ-Al2O3 to estimate the unadsorbed Pt content. The adsorption efficiency and the Pt amount on γ-Al2O3 were estimated using this value. ICP-OES was performed using an Agilent Technologies 700 series spectrometer to determine the Pt content in Pt17/γ-Al2O3 or PtNP/γ-Al2O3 after completely dissolving the sample in aqua regia. XPS data were collected using an electron spectrometer (JEOL, JPS-9010MC) equipped with a chamber at a base pressure of ~2 × 10−8 Torr. X-rays from the Mg-Kα line (1253.6 eV) were used for excitation.
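A minimal sketch of that wavelength-to-energy conversion (our own illustration, assuming the standard area-conserving Jacobian transformation stated above):

```python
import numpy as np

HC_EV_NM = 1239.84  # h*c in eV*nm

def to_energy_axis(w_nm, I_w):
    """Convert I(w) on a wavelength axis (nm) to I(E) on an energy axis (eV)."""
    w = np.asarray(w_nm, dtype=float)
    E = HC_EV_NM / w                          # E = hc/w
    I_E = np.asarray(I_w) * w**2 / HC_EV_NM   # Jacobian |dw/dE| = w^2/(hc)
    order = np.argsort(E)                     # the energy axis comes out reversed
    return E[order], I_E[order]

E, I_E = to_energy_axis([400, 500, 600], [0.9, 1.0, 0.7])
print(E.round(2), I_E.round(2))
```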
Pt L3-edge XAFS measurements were performed at beamline BL01B1 of the SPring-8 facility of the Japan Synchrotron Radiation Research Institute (proposal numbers 2018B1422, 2019A0944). The incident X-ray beam was monochromatized by a Si(111) double-crystal monochromator. As references, the XAFS spectra of Pt foil and solid PtO2 were recorded in transmission mode using ionization chambers. The Pt L3-edge XAFS spectra of the samples were measured in fluorescence mode using a 19-element Ge solid-state detector at room temperature. The X-ray energies for the Pt L3-edge were calibrated using Au foil. The XANES and EXAFS spectra were analyzed using the xTunes program 77 as follows. The χ spectra were extracted by subtracting the atomic absorption background using cubic spline interpolation and were normalized to the edge height. The normalized data were used as the XANES spectra. The k³-weighted χ spectra in the k range of 3.0-14.0 Å−1 for the Pt L3-edge were Fourier transformed into r space for structural analysis. The curve-fitting analysis was performed in the range of 1.2-3.0 Å for the Pt L3-edge. In the curve-fitting analysis, the phase shifts and backscattering amplitude functions of Pt-C, Pt-P, and Pt-Pt were calculated using the FEFF8.5L program.

Measurements of Catalytic Activity
Catalytic activity tests on the honeycomb catalysts of Pt17/γ-Al2O3 or PtNP/γ-Al2O3 were performed in a flow reactor. The honeycomb catalysts were fixed in a tubular reactor, and their catalytic activity was evaluated by supplying a gas mixture containing a reducing gas (CO or C3H6), an oxidizing gas (O2), and a carrier gas (N2) at a space velocity (SV) of 50000 L/h (Table S4). The conversions of CO or C3H6 were monitored using an exhaust gas analyzer (MEXA-ONE-D1, HORIBA) from 100 °C to 400 °C at a heating rate of 20 °C/min. Before each catalytic activity test, the honeycomb catalyst was pre-treated at 400 °C for 0.5 h under a flow of the gas mixture used for the catalytic activity test.

Aging Treatment for Durability Tests
The aged catalyst samples were prepared by hydrothermal redox aging under alternating flows of a reducing gas (3% H2, 3% CO, and 10% H2O) and an oxidizing gas (3% O2 and 10% H2O), with N2 balance (Table S5). The perturbation cycle was 3 min, the aging temperature was 1000 °C, and the duration was 4 h.
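As a rough illustration of the Fourier-transform step just described (our own sketch, not the actual xTunes/FEFF workflow; the χ(k) signal here is a toy single-shell model), the k³-weighted χ(k) on the 3.0-14.0 Å⁻¹ grid is transformed to r space, where a single Pt-Pt shell appears as a peak near its bond length (without phase correction):

```python
import numpy as np

k = np.linspace(3.0, 14.0, 551)            # photoelectron wavenumber, A^-1
R = 2.76                                   # toy Pt-Pt shell distance, A
chi = 0.02 * np.sin(2 * R * k) / k**2      # toy single-shell chi(k)

r = np.linspace(0.0, 6.0, 601)             # radial axis, A
kernel = np.exp(2j * np.outer(r, k))       # FT kernel exp(2ikr)
F = np.abs(np.trapz(kernel * (k**3 * chi), k, axis=1))

print(f"main |FT| peak at r = {r[np.argmax(F)]:.2f} A")  # close to R = 2.76 A
```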
Duffing oscillation-induced reversal of magnetic vortex core by a resonant perpendicular magnetic field

Nonlinear dynamics of the magnetic vortex state in a circular nanodisk was studied under a perpendicular alternating magnetic field that excites the radial modes of the magnetic resonance. Here, we show that as the oscillating frequency is swept down from a frequency higher than the eigenfrequency, the amplitude of the radial mode is almost doubled compared with the amplitude at the fixed resonance frequency. This amplitude exhibits hysteresis with respect to the frequency sweeping direction. Our results show that this phenomenon is due to a Duffing-type nonlinear resonance. Consequently, the amplitude enhancement reduces the vortex core-switching magnetic field to well below 10 mT. A theoretical model corresponding to the Duffing oscillator was developed from the Landau-Lifshitz-Gilbert equation to explore the physical origin of the simulation result. This work provides a new pathway for the switching of the magnetic vortex core polarity in future magnetic storage devices.

Core switching driven by the radial modes involves a different underlying mechanism from the gyration mode-assisted core switching. The underlying mechanism of the radial mode-assisted core switching was not clearly shown by the simulation. The critical field obtained with the radial mode in these studies is of the order of 20 mT (ref 17), larger than that for the gyration mode-assisted core reversal. In this work, we studied the underlying mechanism of the radial mode oscillation and outlined a new pathway to reduce the core-switching field further, down to the mT range, which is more comparable to the critical field of the gyration-assisted core switching. In addition to micromagnetic simulations 26, we also established a dynamical equation for the radial mode oscillation from the Landau-Lifshitz-Gilbert (LLG) equation 27. This equation clearly exposes the nonlinear behavior of the radial mode and the critical field reduction. For direct comparison of the critical field reduction, the simulation structure was set as described by Yoo et al. 17 (Fig. 1a). According to previous studies, the radial modes are classified by the node number n (refs 17,24). The first mode has one node, the vortex core, which means that the magnetization does not oscillate temporally at the vortex core, while the other parts oscillate almost uniformly. The second mode has two nodes; one is the vortex core and the other a concentric circle. Yoo et al. 17 studied the resonance frequencies of the individual radial modes and obtained the eigenfrequencies with the same sample structure as in this study: 10.7 GHz for the first mode (n = 1), 15.2 GHz for the second mode (n = 2), and 20.7 GHz for the third mode (n = 3). They also showed vortex core polarity reversal using the first mode with an oscillating external field of 20 mT. To reduce the radial mode-induced critical field below 10 mT, we stimulated the first mode of the radial oscillation with a different method, that is, sweeping of the external field frequency. The field was sinusoidal with an amplitude of 9 mT, and the field frequency f was slowly varied from 14.0 to 6.0 GHz over 40 ns. Figure 1b shows the magnetization oscillation with time during frequency sweeping. The normalized magnetization along the thickness direction, m_z, and the external magnetic field, H_z, are plotted together. The term ⟨m_z⟩ denotes the spatial average over the entire disk. The magnetization oscillates at the same frequency as the field, despite a phase difference.
From this oscillation, we can obtain the oscillation amplitude of the magnetization in the thickness direction, I_z, which is half the difference between the nearest maximum and minimum values of the ⟨m_z⟩ oscillation. After reaching an external field frequency of 6.0 GHz, the frequency sweeping direction was reversed and f returned to 14.0 GHz. In Fig. 1c, I_z is shown as a function of f. It is interesting to note that an external field of 9 mT can reverse the vortex core polarity. In downward sweeping of the frequency, an almost uniform magnetization oscillation was observed over the disk, except for the core, which conserved its width (inset of Fig. 1c). This uniform oscillation was maintained until I_z reached the maximum amplitude of 0.28 when f was 8.7 GHz. After reaching this critical amplitude, the uniform oscillation collapsed and converged onto the disk center, generating a breathing motion of the core. Such breathing generated a strong exchange field when the core was compressed, and core polarity switching then occurred 16,17. Amplitude fluctuations near 8.5 GHz and 10.5 GHz are transition effects discussed below. In contrast to downward sweeping, upward frequency sweeping did not reach the amplitude of 0.28, so the vortex maintained its polarity. This means that one cycle of frequency sweeping generated one core reversal. It is notable that the amplitudes obtained at fixed field frequencies were the same as those for upward sweeping. The fixed-frequency amplitudes were determined from the amplitude saturation after turning on the external oscillating field. To reverse the core polarity with upward-sweeping or fixed-frequency oscillation, a larger field is required to achieve a sufficient oscillation amplitude. From this sweeping-frequency simulation, it was verified that the critical field is reduced to below 10 mT, and this reduction was only observed in downward sweeping because of the hysteresis of the amplitude with respect to frequency. To study this hysteresis behavior, we constructed a simplified model. The magnetization oscillation of the first mode can be represented by only two variables, θ and Φ, because, with the exception of the core region, the magnetization oscillates almost uniformly 17. Figure 2a shows the definitions of θ and Φ in the core-free model. The angle θ represents the magnetization tilting toward the normal direction, and Φ denotes tilting in the radial direction. The initial magnetization is then described by θ = 0 and Φ = 0. Note that the initial magnetization of the vortex state could have two values of Φ, 0 and π, because of the chirality of the vortex. The two states are energetically equivalent, so in this paper we only considered Φ = 0 as the initial state. Using these two variables, the components of the magnetization state are described as M_r = M_S cosθ sinΦ, M_Φ = M_S cosθ cosΦ, and M_z = M_S sinθ. After inserting these components into the LLG equation 27 and solving, we could derive coupled equations of motion for θ and Φ (equations (1) and (2)). Here, the overdot denotes the time derivative, γ is the gyromagnetic ratio, N_z and N_r are determined by the demagnetizing energy along the ẑ and r̂ directions, respectively, H is the external field amplitude, and ω (= 2πf) is the angular frequency of the external field. In equations (1) and (2), all the parameters are known except for N_z and N_r.
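A small sketch (our own illustration) of the amplitude extraction defined at the start of this paragraph: I_z is taken as half the difference between neighbouring maxima and minima of the ⟨m_z⟩ time trace.

```python
import numpy as np

t = np.linspace(0.0, 2.0e-9, 4001)               # 2 ns trace
mz = 0.25 * np.sin(2 * np.pi * 10.7e9 * t)       # toy <m_z>(t) at 10.7 GHz

i = np.arange(1, len(mz) - 1)
peaks = i[(mz[i] > mz[i - 1]) & (mz[i] > mz[i + 1])]  # local maxima
dips = i[(mz[i] < mz[i - 1]) & (mz[i] < mz[i + 1])]   # local minima

I_z = 0.5 * (np.mean(mz[peaks]) - np.mean(mz[dips]))  # half the peak-to-peak
print(f"I_z = {I_z:.3f}")                             # ~0.25 for this toy trace
```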
N_r because of the thin structure; this condition also implies θ ≪ 1, after which the simplified equations (3) and (4) are derived. We carried out a simulation that solved equations (3) and (4). Gaussian-shaped external field pulses were applied sequentially to the nanodisk to excite the first mode, and the interval between pulses was tuned for precise resonant amplification (ref. 21). We then turned off the external field pulse and observed the relaxation process (Fig. 2b). The angles θ and Φ converged exponentially to zero along a spiral trajectory; θ was obtained from sin θ = ⟨m_z(t)⟩ − ⟨m_z(0)⟩, where t is time, and Φ was determined from the average over the entire disk. From this spiral relaxation, Φ̇ was plotted as a function of θ (Fig. 2c) and sin 2Φ was plotted against θ̇ (Fig. 2d). The damping effect was negligible because α = 0.01 (≪ 1) and, during relaxation, H = 0. Thus, the two linear relations (Fig. 2c and 2d) determined the values N_z = 0.85 and N_r = 0.14. These values are reasonable because N_z + N_r ≈ 1 is expected for the first mode, which is composed of almost uniform magnetization. Next, we solved equations (3) and (4) to obtain the oscillation amplitudes versus the external field frequency. Taking the time derivative of the equations and eliminating the θ̇ and θ̈ terms yielded the equation of motion (5) for Φ. Here, ω₀ = γM_S √(N_z N_r) = 2π × 10.7 GHz and C = αγM_S. If we neglect the damping term and the external field term, equation (5) corresponds exactly to the simple plane pendulum, Ψ̈ + ω₀² sin Ψ = 0, with Ψ = 2Φ, so it is natural that the main dynamics of the radial mode of the vortex are the same as those of the plane pendulum. Directly solving equation (5) is not easy, so we simplified it. First, we compared the third and fourth terms of equation (5); both have a Φ̇ dependence, and we neglected the fourth term, which is much smaller than the third. Then, we expanded sin 2Φ in the second term using the Taylor expansion sin x = x − x³/3! + x⁵/5! − …, keeping terms up to third order. These simplifications produce equation (6), Φ̈ + δΦ̇ + ω₀²(Φ + βΦ³) = F cos ωt. This is the well-known Duffing oscillator equation (refs 28, 29), which describes a nonlinear oscillation arising from an amplitude-dependent spring constant. If β = 0, equation (6) becomes a harmonic oscillator. In this study, β = −2/3, from the Taylor expansion of sin 2Φ. The other parameters are determined similarly: the dissipative constant δ = αγM_S(N_z + N_r) and the external force F = γH. It is well known that the Duffing equation can be solved using the van der Pol transformation (refs 29, 30). A solution of equation (6), the frequency-dependent amplitude Φ₀, exhibits hysteresis (Fig. 3a). We set H to 8 mT. In contrast to a harmonic oscillator, which shows a symmetric resonance peak, the Duffing oscillator exhibits an asymmetric resonance peak, and the response curve is divided into stable and unstable solutions; this is well known as the foldover effect. If we apply the external field with a starting frequency of 14 GHz and monotonically reduce the frequency, the oscillation amplitude follows the stable solution line until it meets the maximum amplitude point Φ_max. After passing the maximum point, the amplitude drops drastically to the other stable line. Similar behavior would be expected for increasing frequency, but in that case the amplitude does not reach Φ_max, because the stable line is connected only up to 10 GHz.
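The foldover and its hysteresis can be reproduced numerically from equation (6) alone. Below is a minimal sketch in dimensionless units (time in 1/ω₀): only β = −2/3 comes from the derivation above, while the damping and drive values are illustrative stand-ins for δ = αγM_S(N_z + N_r) and F = γH. The oscillator state is carried from one frequency step to the next, which is what lets the downward sweep ride the upper stable branch.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless version of equation (6): time in units of 1/w0, so w0 = 1.
BETA = -2.0 / 3.0   # from the third-order Taylor expansion of sin(2*Phi)
DELTA = 0.02        # damping rate (illustrative; physically alpha*gamma*Ms*(Nz+Nr))
F = 0.05            # drive strength (illustrative; physically gamma*H)

def duffing(t, y, w):
    phi, v = y
    return [v, -DELTA * v - (phi + BETA * phi**3) + F * np.cos(w * t)]

def steady_amplitude(w, y0):
    """Integrate to steady state at drive frequency w; return the amplitude and
    the final state so that the next frequency step starts from it."""
    T = 2.0 * np.pi / w
    sol = solve_ivp(duffing, (0.0, 300.0 * T), y0, args=(w,), max_step=T / 50.0)
    tail = sol.y[0][sol.t > 250.0 * T]
    return 0.5 * (tail.max() - tail.min()), sol.y[:, -1]

def sweep(freqs):
    y, amps = [0.0, 0.0], []
    for w in freqs:
        a, y = steady_amplitude(w, y)   # carry the state across steps
        amps.append(a)
    return np.array(amps)

f_down = np.linspace(1.2, 0.8, 41)
amp_down, amp_up = sweep(f_down), sweep(f_down[::-1])
# Near the foldover, the downward branch reaches a larger amplitude than the
# upward branch: the hysteresis that underlies the reduced switching field.
```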
This hysteresis also shows up as a variation of the phase δ of the Φ oscillation with respect to the field oscillation (Fig. 3a). The micromagnetic simulation shows almost the same hysteresis behavior (Fig. 3b) as the Duffing oscillator, but it exhibits fluctuations when the amplitude and the phase jump from one stable solution line to the other. Note that in Fig. 3 the amplitude is given in terms of Φ. This differs from Fig. 1c, which was plotted with I_z (= sin θ₀), but the maximum values of Φ₀ and θ₀ can be converted into each other through the relation N_z sin²θ_max = N_r sin²Φ_max, because the demagnetization energy is alternately transferred between the ẑ and r̂ directions. We observed the Duffing-type oscillation and critical field reduction in the other radial modes as well. Figure 4 presents the minimum field for core reversal, H_c, as a function of frequency f. To obtain the critical field at fixed frequency, we fixed the external field frequency and ramped the field amplitude from 0 at a rate of 10 mT/ns. The frequency-dependent critical field has three local minima, each corresponding to the first, second, and third mode. The H_c values obtained at fixed frequency were 20 mT for n = 1, 78 mT for n = 2, and 127 mT for n = 3. The critical fields were also determined by the frequency-sweeping method: for the second mode, the frequency started from 17 GHz with a −0.2 GHz/ns sweep rate; for the third mode, it started from 24 GHz with the same rate. The critical field was then 8.5 mT for n = 1, 37 mT for n = 2, and 76 mT for n = 3, almost half the values obtained at fixed frequency. The sweep rate can change the critical field, but the variation is small: for the n = 1 mode, H_c = 8.3 mT at a 0.1 GHz/ns rate and 8.6 mT at 0.4 GHz/ns. The critical field was determined with a 0.1 mT field interval for n = 1 and a 1 mT interval for n = 2, 3. We confirmed that these reduced critical fields cannot be reached through upward sweeping of the field frequency, because of the Duffing-type nonlinear oscillation. In addition, a further reduction of the critical field was achieved by using a square-wave external field: for the n = 1 mode, we obtained 6.3 mT for vortex core reversal. We also tested the scalability of the radial mode-induced core reversal. When the radius of the disk was 120 nm, the critical field obtained by the frequency-sweeping method was 9.3 mT; the core of a disk with radius 250 nm reverses its polarity with a 12 mT external field. As the radius increases, the critical field also increases. This scalability is an important property for developing data storage devices. Contrary to the radial mode-induced polarity switching, the critical field of the gyration-induced polarity switching exhibits an inverse radius dependence (ref. 19), as does the chirality reversal (ref. 13). Finally, we point out the chaotic behavior and the phase commensurability of the radial mode oscillation for further study. Petit-Watelot et al. observed chaos and phase locking in the vortex gyration with core reversal (ref. 31), and we observed similar behavior in the radial mode oscillation. A nonlinear oscillator with a sufficiently large driving force is expected to exhibit chaotic motion, and we confirmed this chaotic behavior in the radial mode of the vortex. When the oscillating field strength was smaller than H_c, a plot of a variable against its time derivative, for example ⟨ṁ_z⟩ versus ⟨m_z⟩, showed a closed circular trajectory. But when the field was larger than H_c, this plot became complex in phase space, which manifests chaotic behavior. Figure 5 shows examples of the chaos in the radial mode. The frequency was fixed at 13.5 GHz. When H = 60 mT < H_c (Fig. 5a), the trajectory was a closed circle, but when H = 90 mT > H_c the trajectory was not closed (Fig. 5b). Further increases in the field again produced closed trajectories; however, the trajectories were no longer simple circles. To close the trajectory, 14 cycles of field oscillation were needed (Fig. 5c), and during these 14 cycles the core reversed four times. In the case of H = 120 mT, core reversal occurred twice in five field oscillations (Fig. 5d), implying that the core reversal rate is related to the chaotic behavior. Thus, to describe the radial mode of the vortex including its chaotic behavior, a core polarity-related term (ref. 32) is needed in the equation of motion.

In summary, we studied the nonlinear resonance of the radial mode of the vortex and found that this oscillation mode, corresponding to a Duffing-type nonlinear oscillator, exhibits hysteresis with respect to the external field frequency. Through the hysteresis effect we can access a hidden amplitude almost double that obtained at fixed field frequency, and this amplitude-multiplication effect reduces the critical field to below 10 mT. In addition, we pointed out the chaotic behavior of the radial mode for further study. We think that, to complete the study of vortex dynamics, it is timely to begin research on the nonlinear behavior of the radial modes, as well as of the other oscillations of the magnetic vortex.

Methods. Micromagnetic simulations. We performed micromagnetic simulations using the OOMMF code (ref. 26) for the numerical calculation of the LLG equation. The simulation structure is a circular disk with a diameter of 160 nm and a thickness t_d = 7 nm, as shown in Fig. 1a. The structure is divided into 2 × 2 × 7 nm³ unit cells, with the magnetization assumed uniform along the thickness direction. The material parameters were chosen to resemble a typical permalloy with no crystal anisotropy: the saturation magnetization, M_S, was 8.6 × 10⁵ A/m, and the exchange stiffness, A, was 1.3 × 10⁻¹¹ J/m.
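The closed-versus-chaotic distinction can be made quantitative by strobing the trajectory once per drive period and asking for the smallest shift under which the section repeats. The sketch below does this for the reduced Duffing model of equation (6), with illustrative parameters, rather than for the full micromagnetic system; a period-n orbit (such as the 14-cycle orbits described above) returns n, while a chaotic run returns no closure.

```python
import numpy as np
from scipy.integrate import solve_ivp

def strobe(w, F, n_periods=400, skip=200, beta=-2.0/3.0, delta=0.02):
    """Sample (phi, phi_dot) of the driven Duffing model once per drive
    period: a stroboscopic (Poincare) section of the trajectory."""
    def rhs(t, y):
        return [y[1], -delta * y[1] - (y[0] + beta * y[0]**3) + F * np.cos(w * t)]
    T = 2.0 * np.pi / w
    t_eval = np.arange(skip, n_periods) * T
    sol = solve_ivp(rhs, (0.0, n_periods * T), [0.0, 0.0],
                    t_eval=t_eval, max_step=T / 50.0)
    return sol.y.T   # one (phi, phi_dot) point per drive cycle

def orbit_period(points, tol=1e-3):
    """Smallest n for which the section repeats under a shift of n cycles;
    returns None if no closure is found within half the record (chaos)."""
    for n in range(1, len(points) // 2):
        if np.allclose(points[:-n], points[n:], atol=tol):
            return n
    return None

pts = strobe(w=1.05, F=0.05)
print(orbit_period(pts))   # 1 for a simple closed orbit, n > 1 for subharmonics
```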
2016-05-12T22:15:10.714Z
2014-08-22T00:00:00.000
{ "year": 2014, "sha1": "b6b6f8df874eafd696e4185d683cb19ff86d33f2", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/srep06170.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b6b6f8df874eafd696e4185d683cb19ff86d33f2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
11364542
pes2o/s2orc
v3-fos-license
How robust are the constraints on cosmology and galaxy evolution from the lens-redshift test?

The redshift distribution of galaxy lenses in known gravitational lens systems provides a powerful test that can potentially discriminate amongst cosmological models. However, applications of this elegant test have been curtailed by two factors: our ignorance of how galaxies evolve with redshift, and the absence of methods to deal with the effect of incomplete information in lensing systems. In this paper, we investigate both issues in detail. We explore how to extract the properties of evolving galaxies, assuming that the cosmology is well determined by other techniques. We propose a new nested Monte Carlo method to quantify the effects of incomplete data. We apply the lens-redshift test to an improved sample of seventy lens systems derived from recent observations, primarily from the SDSS, SLACS and CLASS surveys. We find that the limiting factor in applying the lens-redshift test is poor statistics: incomplete-information samples and biased sampling. Many lenses that uniformly sample the underlying true image separation distribution will be needed to use this test as a complementary method to measure the value of the cosmological constant or the properties of evolving galaxies. Planned future surveys by missions like the SNAP satellite or LSST are likely to usher in a new era for strong lensing studies that utilize this test. With expected catalogues of thousands of new strong lenses, the lens-redshift test could offer a powerful tool to probe cosmology as well as galaxy evolution.

The concept of optical depth to lensing (ODTL) was proposed for studying strong lensing statistics by Turner, Ostriker & Gott (hereafter TOG, 1984): they presented an analytic calculation of the lensing probability of distant quasars by intervening galaxy lenses and the role of selection effects therein. Since the lensing probability depends on the comoving volume element, the ODTL test can be used to constrain cosmological parameters by comparing the number of expected lenses to the number observed. Using the ODTL test, Kochanek (hereafter K96, 1996) obtained limits on the cosmological constant from the statistics of gravitational lenses using a number of completed quasar surveys (e.g. the Snapshot Survey, the ESO/Liège survey, the NOT survey, the HST GTO survey, the FKS survey), lens data, and a range of lens models. The formal limit obtained was Ω_Λ < 0.66 at 95% confidence in flat cosmologies, which included the statistical uncertainties in the numbers of lenses, galaxies and quasars, and in the parameters relating galaxy luminosities to dynamical variables. This value is in contrast to what is now well established by WMAP observations of the CMB (e.g. Spergel et al. 2006) and high-redshift SN Ia observations (Riess et al. 1998; Perlmutter et al. 1999). These observations have in fact led to what is currently referred to as a 'cosmic concordance' model (Ostriker & Steinhardt 1995), the Λ-CDM model (with h ≃ 0.7, Ω_Λ ≃ 0.7, Ω_M ≃ 0.3 and Ω_K ≃ 0.0), as the most widely accepted description of the Universe. K96 argued that their low retrieved value for Ω_Λ could be due to dust obscuration in a large fraction of lensing galaxies; however, a hundred times more dust would be needed to change the expected number of lenses by a factor of two. Given this extreme value, dust is clearly not the dominant source of systematic errors.
By tabulating various sources of error and the limitations they impose on the accuracy of the determination of Ω_Λ, K96 speculated that the assumptions about the velocity dispersion function of lenses might be a significant source of error. Reviewing previous estimates of the cosmological constant derived from strong lensing statistics, Maoz (2005) concludes that the discrepancies might be due to a lower lensing cross section for elliptical galaxies than was assumed in the past. Maoz (2005) argues that the current agreement between recent model calculations and the results of radio lens surveys may be fortuitous, due to a cancellation between the errors in the input parameters for the lens population and the cosmology, as well as the input parameters for the source populations. In the quest to determine the correct underlying cosmological model by placing better and tighter constraints on Ω_Λ, strong gravitational lensing has not been the most reliable technique. Systematic errors have plagued the lensing analysis, leading to contradictory results for the derived values of the cosmological constant in a flat Universe (see, for example, Maoz & Rix 1993; K96; Chae et al. 2002). These contradictory results were primarily caused by: small-number statistics due to the shortage of observed lens systems; assumptions about the relationship between the luminosities and masses of galaxies; scatter in the empirical relation between mass and light; and observational biases, mainly the magnification bias (which arises because intrinsically faint sources can appear in a flux-limited survey by virtue of gravitational lensing, thereby affecting the statistics). An explicit relation between mass and light is required for the lensing analysis in the absence of independent mass estimates for the lensing galaxies. The luminosity of galaxies is converted into a mass distribution (the relevant quantity for modelling lensing effects) using a density profile, which is parametrized via the velocity dispersion. The statistics of strong lenses, and any cosmological constraints thereby obtained, depend on the assumed velocity dispersion function (VDF) of galaxies. Kochanek (hereafter K92, 1992) devised a test, the 'lens-redshift test', which circumvents the magnification bias since it does not involve computing the total ODTL. This test relies on the computation of the differential optical depth to lensing with respect to the angular critical radius r: the probability distributions of lens redshifts z at a given angular critical radius, [(dτ/dz)/τ](r), are evaluated. However, this quantity still requires knowledge of the VDF of lensing galaxies, which was inferred (hence IVDF) by combining the Schechter luminosity function with an empirical
relation between luminosity and velocity dispersion: the Faber-Jackson and Tully-Fisher relations for early-type and late-type galaxies, respectively. The lens-redshift test depends on cosmological parameters as well as on galaxy evolution parameters. Therefore, it can be used to constrain the former by fixing the latter, or vice versa. With the assumption of no evolution, K92 derived Ω_Λ ≲ 0.9. Ofek, Rix & Maoz (hereafter ORM, 2003) revived the lens-redshift test (K92). They applied it to a larger sample of lens systems than was available to the K92 analysis, using the CLASS and SDSS surveys. Their study also included a re-derivation and generalisation of the lens-redshift test, which incorporated mass and number density evolution of the lens galaxies. They explicitly included the redshift evolution of the characteristic velocity dispersion and the evolution of the number density of galaxies. The limit obtained by ORM for a flat Universe, assuming no mass evolution of early-type galaxies between z = 0 and 1, was Ω_Λ < 0.95 at the 99% confidence limit. Turning things around, and fixing the cosmological model to Ω_Λ = 0.7 and Ω_M = 0.3, they determined galaxy evolution parameters and found d log₁₀ σ*(z)/dz = −0.10^{+0.6}_{−0.6} and d log₁₀ n*(z)/dz = +0.7^{+1.4}_{−1.2}, where σ* and n* are the characteristic velocity dispersion and number density of lensing galaxies, respectively. Mitchell et al. (hereafter MKFS, 2005) focused instead on the ODTL test (TOG). In addition to using a larger sample than that of K96, they included the evolution of the VDF in amplitude and shape, based on theoretical galaxy formation models, and used the measured velocity dispersion function (MVDF) for early-types from the SDSS (Sheth et al. 2003). MKFS found Ω_Λ = 0.74–0.78 for a flat-Universe prior and a limit Ω_Λ < 0.86 at the 95% confidence limit. Including the effects of galaxy evolution, they found Ω_Λ = 0.72–0.78 and a limit Ω_Λ < 0.89 at the 95% confidence limit. The consequence of using the MVDF versus the IVDF in the determination of Ω_Λ is one of the key questions we address in this work. The IVDF and MVDF differ at high luminosities/velocity dispersions (see Fig. A1 in the Appendix). The scatter of the Faber-Jackson relation was a predominant source of uncertainty in previous studies (cf. ORM), leading to a systematic underestimation of the number of objects with large velocity dispersions. In this paper, we investigate the lens-redshift test in detail and re-examine the uncertainties that limit its use as a powerful discriminant between cosmological models, as well as its potential to constrain galaxy evolution models. We apply it to a new, enlarged sample of lenses. This is done for the first time using the measured velocity dispersion function from the SDSS, although we also compare with and reproduce the results of ORM using the inferred velocity dispersion function. In addition, we consider the effect of incomplete lensing information on the retrieval of cosmological parameters with a new nested Monte Carlo method. The outline of the paper is as follows. In section 2 we define the lens-redshift test and compare the use of the IVDF and the MVDF in the determination of both Ω_Λ and galaxy evolution parameters. In section 3 we describe the new expanded sample and, in section 4, we present the results of applying the lens-redshift test to our sample. We present a new Monte Carlo method to quantify the effect of incomplete lensing information in section 5, by constructing realizations of several biased subsamples. We conclude with a discussion of our results and their implications for future observational surveys.

2 THE LENS-REDSHIFT TEST FORMALISM

2.1 Methodology using the inferred velocity dispersion function

We follow the notation introduced by TOG and K92 in defining the optical depth to lensing and the lens-redshift test, respectively. The differential optical depth to lensing per unit redshift is the differential probability dτ that a line of sight intersects a lens at redshift z in traversing the path dz through a population of lensing galaxies with comoving number density n_L.
Mathematically, for a source this is simply the ratio of the differential light-travel distance c dt to the mean free path between successive encounters with galaxies, 1/(n_L S), where the comoving number density of lensing galaxies is n_L = n*(1+z)³; n* is the average number density of lensing galaxies; S is the cross section for multiple imaging of a background point source; r_H = c/H₀ is the Hubble radius; and E(z) = [Ω_M(1+z)³ + Ω_K(1+z)² + Ω_Λ]^{1/2}. The cross section for multiple imaging is S = π r² D_L². We initially assume n*, the characteristic luminosity L* and the characteristic velocity dispersion σ* of lensing galaxies to be constant with redshift, although we will later relax this assumption and allow for time evolution. Our analysis is restricted to early-type and S0 galaxies as lenses, and it is assumed that they can be modelled as singular isothermal spheres (SIS). (A SIS has a mass distribution ρ(r) = σ²/(2πGr²), where σ is constant with radius r. This density profile is a very good fit for elliptical and S0 lensing galaxies; in fact, non-singular isothermal spheres and truncated isothermal spheres give very similar fits to lensing data, e.g. Rusin et al. 2003.) With the assumptions stated above, we can write the angular critical radius r as r = 4π(f_E σ/c)² D_LS/D_S, where D_LS and D_S are the angular diameter distances between the lens and the source and between the observer and the source, respectively, and f_E is a parameter that accounts for the difference between the velocity dispersion of the mass distribution, σ_m, and the observed stellar velocity dispersion σ: σ_m = f_E σ. Modelling galaxies as singular isothermal spheres, the characteristic central velocity dispersions (which are typically unmeasured for most lenses) are drawn from the VDF. In this subsection, we relate the luminosity distribution to the Faber-Jackson law and construct the IVDF, using the Schechter function to model the luminosity function of lensing galaxies and the Faber-Jackson relation, L/L* = (σ/σ*)^γ, to relate luminosity to velocity dispersion; combining these two relations yields the IVDF (equation 4). Combining the IVDF with equation (1), the differential optical depth can be written per unit dσ/σ* (equation 5). Defining r* ≡ 4π(σ*/c)² and using equation (6) then gives the IVDF lens-redshift test equation (7), where r* and τ* ≡ 16π³ n* r_H³ (σ*/c)⁴ are constants. Incidentally, we note that this is slightly different in form from K92, as we compute d²τ/(dz dr), whereas K92 computes d²τ/[dz d(r/r*)]. Both calculations then proceed to normalise with respect to τ; this gives identical results only when a single population of galaxies is considered. The value of r* depends on σ*, which in turn varies with the morphological type considered. We include both elliptical and S0 galaxies in our analysis, whereas K92 considered only ellipticals. To obtain constraints on galaxy evolution, we consider the scaling relations n*(z) = n* 10^{Pz} (8), L*(z) = L* 10^{Qz} (9) and σ*(z) = σ* 10^{Uz} (10), where n*, L* and σ* are the characteristic values at zero redshift and P, Q and U are constants (as in equations (9), (10) and (11) of ORM). Incorporating these, the IVDF lens-redshift test equation (7) becomes equation (11). Note that Q, the evolution parameter of the luminosity in equation (9), does not appear in the result.
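To make the geometry dependence concrete, the following is a minimal sketch (not the paper's code) of the flat-Universe distance machinery and the resulting unnormalised lens-redshift density for an SIS lens of fixed velocity dispersion; the full test additionally integrates over the VDF at a fixed image separation, which is omitted here.

```python
import numpy as np
from scipy.integrate import quad

OM, OL = 0.3, 0.7   # flat Lambda-CDM as in the text

def E(z):
    return np.sqrt(OM * (1.0 + z)**3 + OL)

def D_C(z):
    """Comoving distance in units of the Hubble radius r_H = c/H0."""
    return quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]

def D_A(z1, z2):
    """Angular diameter distance between z1 and z2 (flat Universe)."""
    return (D_C(z2) - D_C(z1)) / (1.0 + z2)

def dtau_dz(zl, zs):
    """dtau/dz up to constant factors for a fixed-sigma SIS population:
    n*(1+z)^3 times S ~ (D_L*D_LS/D_S)^2 times c dt/dz = r_H/((1+z)E(z))."""
    ratio = D_A(0.0, zl) * D_A(zl, zs) / D_A(0.0, zs)
    return (1.0 + zl)**3 * ratio**2 / ((1.0 + zl) * E(zl))

zs = 2.0
zgrid = np.linspace(1e-3, zs - 1e-3, 300)
p = np.array([dtau_dz(z, zs) for z in zgrid])
p /= p.sum() * (zgrid[1] - zgrid[0])   # normalised lens-redshift density
```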
2.2 Methodology using the measured velocity dispersion function

In this section we rewrite the differential optical depth as a function of the measured velocity dispersion function for early-type galaxies, circumventing the use of the Schechter luminosity function and the Faber-Jackson relation. We now take the functional form of the MVDF (fitted to SDSS observations) from Sheth et al. (2003) and rewrite it in a form that allows easy comparison with the IVDF (equation (4) above). Substituting and simplifying the expression for the differential optical depth as before, we obtain the MVDF lens-redshift test equation (13). Once again, we consider the case where the parameters n'*, L'* and σ'* of the MVDF evolve with redshift, following equations analogous to (8), (9) and (10); on substituting, we obtain equation (14). There is no reason to assume that σ* and σ'* (or n* and n'*) evolve differently; therefore, we set U = U' (and P = P'). We note here that σ* is just a parameter in the IVDF and MVDF fits, and, owing to the different functional forms of the two parametrizations of the velocity dispersion function, its values for the IVDF and the MVDF can be, and in fact are, quite different.

3 DEFINING THE NEW LENS GALAXY SAMPLE

Following ORM, our first lens sample is primarily drawn from the CASTLES database (Muñoz et al. 1998; http://cfa-www.harvard.edu/castles/index.html), which, at present, contains 82 class 'A' (certain), 10 class 'B' (likely), and 8 class 'C' (dubious) gravitational lenses, making for a total sample size of 100 systems. We ignore the 13 class 'B' binary quasars from the CASTLES lists, eight of which have image separations greater than 4″ and would be discarded as likely being cluster-assisted rather than due to field galaxies; in the remaining five, nearby group galaxies are implicated in determining the separations, so we discard them as well. To get a handle on potential biases in the sample, we have grouped the systems by their discovery technique into three categories: targeted optical discoveries, selected on the basis of the lensed source's optical emission, which include surveys such as the first HST snapshot survey and quasars selected from the Calan-Tololo (Maza et al. 1996), Hamburg-ESO and SDSS quasar surveys (York et al. 2000); targeted radio discoveries, selected on the basis of the lensed source's radio properties, which include the JVAS/CLASS (Myers et al. 2003), PMN and MG (Bennett et al. 1986) lens searches; and miscellaneous discoveries, for systems found either serendipitously (such as the HST parallel-field discoveries; Ratnatunga, Griffiths & Ostrander 1999) or based on system properties other than those of the lensed source, typically those of the lensing galaxy. Lenses discovered based on the properties of the source ought not to harbour a bias in the redshift of the lensing galaxy, although they suffer from magnification bias. However, systems discovered because of the lens or its surrounding environment will naturally favour low-redshift lenses. All systems among the 'miscellaneous discoveries' fall under this category, which includes systems discovered because of the properties of the lensing galaxy, such as Q2237+030 (Huchra et al. 1985) and CFRS03.1077 (Crampton et al. 2002), and systems discovered based on the properties of the lensing galaxy's environment, such as RXJ0921+4529 (Muñoz et al.
1998), and systems discovered serendipitously from HST pointings, such as the HST Medium Deep Survey lensing candidates (Ratnatunga, Griffiths & Ostrander 1999) and HDFS2232509-603243 (Barkana, Blandford & Hogg 1999), the latter of which are characterised by deflector emission that is either comparable to or dominates over that of the background source. We also exclude systems inappropriate for our lens model of isolated elliptical/S0 lensing galaxies. These include systems with multiple lensing galaxies of comparable luminosities (and therefore likely comparable halo masses), such as HE0230-2130 (Wisotzki et al. 1998), B1359+154 (Myers et al. 1999) and B2114+022 (Augusto et al. 2001). We also exclude cluster-assisted systems such as Q0957+561 (Young et al. 1980). Although we are ignoring entire surveys, this ought not to introduce biases into the sample. The various cuts detailed above leave a total of 42 systems with complete redshift information (source and lens redshifts) in our sample A1, detailed in Table A1. We have estimated the size of the deflector's critical radius for the remaining systems using a simple singular isothermal sphere (SIS) model plus external shear, using the gravlens software of Keeton (2001). Relative image positions with respect to the lensing galaxy were obtained from either the CASTLES compilation or the reference in column (11) of Table A1 in the Appendix. For double systems, we use the reddest flux ratio measured between lensed components (typically either HST/F160W or the radio flux ratio if the system is radio-loud) as the fifth constraint required by the model, which ought to minimize flux-ratio contamination from microlensing-induced variability. For systems with ring morphology, the critical radius of the lens was obtained from the corresponding model reported in the cited reference. Finally, we add the SLACS lenses (Bolton et al. 2006) to construct our largest sample, comprising seventy systems. The SLACS survey has proven to be a very efficient HST Snapshot imaging survey for new galaxy-scale strong lenses. The targeted lens candidates were selected by Bolton et al. (2006) from the SDSS database of galaxy spectra for having multiple nebular emission lines at a redshift significantly higher than that of the target SDSS galaxy. The survey is optimized to detect bright early-type lens galaxies with faint lensed sources. The key advantage of this selection technique is that it provides a homogeneously selected sample of bright early-type lens galaxies. However, given the size of the fibres in the SDSS, this sample is biased toward large separations compared to other surveys (the consequences of this bias are discussed later), as is clearly seen in the histogram plotted in Fig. 1. All 28 SLACS lenses found to date are tabulated in Table A2 of the Appendix; of these, 19 are confirmed with multiple images and the remaining 9 are candidates. We note that both the confirmed and unconfirmed candidates from SLACS are included in our calculations. Although SLACS is a biased sample, we include all 28 lenses in our analysis to illustrate the effect of better statistics, given the success of the adopted selection strategy. The SLACS selection favours gravitational lenses with typically larger Einstein radii (as seen clearly in the histogram of image separations in Fig. 1) by virtue of the selection of spectroscopic candidates for imaging follow-up. For the purposes of the current analysis this clearly strengthens the results of this work, i.e.
biased sampling of the image separation distribution yields biased values for the galaxy evolution parameters. Once the survey has finished, the selection function will be very well determined, which will enable more careful use of this sample for galaxy evolution studies. This strategy has been extremely successful, so we feel compelled to showcase this sample. SLACS E/S0 lenses appear to be a random subsample of the luminous red galaxy sample of the SDSS, only skewed toward the brighter and higher surface brightness systems. While the environments of some of the lenses are complicated by the existence of nearby galaxies, by and large it is a 'clean' sample in which the image separations are determined primarily by a single elliptical/S0 lens. Adding the 28 SLACS lenses to sample A1 gives a final tally of 70 lenses, all with complete information, which defines our sample A2. In Fig. 1, we plot the distribution of critical radii for the observed lenses in sample A1 and the SLACS lenses, which together constitute our sample A2. Since several samples will be used in the paper, we list them here for clarity: Sample A1, our updated version of the ORM sample I, with a total of 42 systems; Sample A2, our new, enlarged sample containing sample A1 and the 28 new SLACS lenses; Sample B, our mock sample of 100 lenses with complete information; and Sample C, our truncated sample A1 with the 10% largest separation lenses removed.

4 ANALYSIS AND RESULTS

In this section, we compare the discriminating power of the MVDF versus the IVDF in constraining both cosmological and galaxy evolution parameters. We study the recovery bias in the extraction of (i) cosmological constraints, with U = P = 0, for the various compiled lens samples, and (ii) galaxy evolution parameters, with the cosmological parameters fixed. Finally, we assess the impact of incompleteness of the lens data on the recovery of the cosmological constant. As noted in section 2, we normalise the lens-redshift probability distribution with respect to the optical depth τ. Therefore, parameters that appear simply as multiplying constants in the distributions do not affect our comparison. Such parameters include the Hubble radius r_H = c/H₀ and the average number densities of lensing galaxies, n* (IVDF) and n'* (MVDF). Our comparison of the IVDF and MVDF is also not affected by the value of f_E, the parameter relating the velocity dispersion of the dark matter to that of the stars. TOG set it to (3/2)^{1/2}; other studies (e.g. Narayan & Bartelmann 1999) suggested values smaller than 1. Recent results from the SLACS survey suggest that f_E ∼ 1, i.e. the lens-model velocity dispersions are fairly close to the measured stellar velocity dispersion within an effective radius. Therefore, we take f_E = 1 in this work. The default cosmological model is taken to be a Friedmann-Robertson-Walker (Ω_Λ + Ω_M + Ω_K = 1) Λ-CDM flat Universe, with Ω_M = 0.3, Ω_K = 0.0 and Ω_Λ = 0.7. Note that late-type galaxies could in principle also be incorporated into the IVDF analysis, by simply replacing the Faber-Jackson relation with the Tully-Fisher relation: the Faber-Jackson exponent γ and characteristic σ* would be substituted with the corresponding Tully-Fisher parameters. Although late-type galaxies are more numerous than early-type galaxies, they tend to have lower masses and therefore do not contribute significantly to the total optical depth to lensing. Due to the strong dependence of the lensing cross section on velocity dispersion, late-type galaxies are in general inefficient lenses. Moreover, late-type galaxies are not included in the determination of the MVDF from SDSS data. So in this work, for consistent comparisons, we restrict ourselves to elliptical and S0 lenses.
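For reference, the two velocity dispersion functions compared throughout this work can be written down in a few lines. In the sketch below, ivdf is the Schechter luminosity function mapped through L/L* = (σ/σ*)^γ, and mvdf is a modified-Schechter form of the type fitted by Sheth et al. (2003); apart from σ'* = 88.8 km s⁻¹, which is quoted in the text, the parameter values are illustrative placeholders rather than the fitted ones.

```python
import numpy as np
from scipy.special import gamma as Gamma

def ivdf(sigma, n_star=1.0, sigma_star=225.0, alpha=-1.0, gam=4.0):
    """IVDF: Schechter function transformed via L/L* = (sigma/sigma*)^gam.
    alpha and gam here are illustrative, not the paper's fitted values."""
    x = (sigma / sigma_star)**gam
    return n_star * gam * x**(alpha + 1.0) * np.exp(-x) / sigma

def mvdf(sigma, n_star=1.0, sigma_star=88.8, a=6.5, b=1.93):
    """Modified-Schechter form of the Sheth et al. (2003) type; the shape
    parameters a and b are illustrative. Normalised so it integrates to n_star."""
    x = (sigma / sigma_star)**b
    return n_star * (sigma / sigma_star)**a * np.exp(-x) * b / (Gamma(a / b) * sigma)

sig = np.linspace(100.0, 450.0, 200)
# The two forms track each other at intermediate sigma and part company in
# the high-dispersion tail (sigma > ~380 km/s), as discussed in the text.
```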
We compute the differential optical depth (dτ/dz)/τ for each individual lens with measured separation and known source redshift in our two samples: sample A1 (42 lenses) and sample A2 (70 lenses). We then determine the probability distribution of the redshift of the lens using equation (7) (IVDF) and equation (13) (MVDF), given the observed image separation, the measured source redshift, and a given cosmology and galaxy evolution model (in this case assuming no evolution: U = P = Q = 0, h = 0.7, Ω_M = 0.3, Ω_K = 0.0 and Ω_Λ = 0.7). If the choice of underlying cosmological parameters, primarily Ω_Λ in this case, corresponds to the true values, the peak of the probability distribution, z_p, ought to be close to the measured lens redshift z_l in a good number of cases. These lens-redshift probability distributions are shown in Fig. A2 (in the Appendix) for all the lenses in our sample A2. The plot illustrates that the MVDF and IVDF yield nearly identical probability distributions for the lens redshifts. Choosing different values of Ω_Λ shifts these inferred probability distributions: this is illustrated in Fig. 2 for one lens (B0218+357), for Ω_Λ = 0.2, 0.4, 0.7 and 1.0 (keeping Ω_K = 0.0). However, we notice a systematic effect: z_p, the peak redshift of the probability distribution, is skewed slightly lower for the MVDF than for the IVDF for almost the entire sample A2. This can be explained qualitatively as arising from the different asymptotic behaviours of the MVDF and the IVDF (see Fig. A1 in the Appendix).

4.1 The maximum likelihood method

Now, we use the samples to statistically play the game in both directions: (i) to constrain the geometry of the Universe with the galaxy evolution parameters fixed, and (ii) to constrain the galaxy evolution model with the cosmological parameters as knowns. We use a maximum likelihood estimator in our statistical analysis of the lens redshift distributions. The lens-redshift test equations give the probability distribution of lens redshifts as P(z_l | {X}, z_s, r), normalised to unity, where {X} = {Ω_Λ, U, P} is the set of cosmological and galaxy evolution model parameters, and (z_s, r) are the source redshift and lens angular critical radius priors for a given system.

Figure 2. Dependence of the lens-redshift distribution on cosmology. We show the dependence on Ω_Λ explicitly for the lens B0218+357 using the MVDF (thick lines) and IVDF (thin lines), for a range of values: Ω_Λ = 0.2 (black, dotted), Ω_Λ = 0.4 (red, short-dashed), Ω_Λ = 0.7 (green, solid) and Ω_Λ = 1.0 (blue, long-dashed). The vertical dashed line marks the position of the observed lens redshift z_l = 0.69. The peak z_p of the distribution increases as Ω_Λ increases from 0.2 to 1.0, due to the increase in the cosmological comoving volume.

The likelihood estimator L for the entire sample of N systems is then L = ∏_{i=1}^{N} P_i(z_{l,i} | {X}, z_{s,i}, r_i). The quantity L quantifies the consistency of all measured lens redshifts for the entire ensemble of lens systems with any given geometry and galaxy evolution model.
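A direct transcription of this estimator is straightforward. The sketch below (ours, not the paper's) profiles the log-likelihood over a grid of Ω_Λ, assuming a user-supplied normalised per-lens density such as the dtau_dz sketch given earlier.

```python
import numpy as np

def log_likelihood(lens_list, p_zl):
    """log L = sum_i log P_i(z_l | {X}, z_s, r); p_zl must be normalised."""
    return sum(np.log(p_zl(zl, zs, r)) for (zl, zs, r) in lens_list)

def scan_omega_lambda(lens_list, make_p, grid=np.linspace(0.0, 0.95, 20)):
    """Profile the likelihood over a grid of Omega_Lambda (flat Universe,
    evolution parameters held fixed). make_p(OL) must return the per-lens
    density for that cosmology, e.g. a normalised dtau_dz."""
    logL = np.array([log_likelihood(lens_list, make_p(OL)) for OL in grid])
    return grid, logL - logL.max()   # relative log-likelihood curve

# usage sketch: lens_list = [(z_l, z_s, r), ...] with the measured values
# for each system; the tuples here are placeholders, not real lens data.
```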
We then compute the maximum value of L, fixing the galaxy evolution parameters to obtain constraints on the cosmology ({X} = {Ω_Λ}), and then fixing the cosmology to obtain constraints on galaxy evolution ({X} = {U, P}). As pointed out by ORM, the lens-redshift test is more sensitive to the galaxy mass evolution parameter U than to the galaxy number evolution parameter P. This can be understood by considering the limit P ∼ 0: a negative U both decreases the most probable value of the lens redshift and narrows the probability distribution, whereas the number evolution parameter affects only the peak value and not the overall shape of the distribution.

4.2 Constraints on cosmology

We proceed to obtain constraints on Ω_Λ, keeping the galaxy evolution parameters fixed, using the Friedmann-Robertson-Walker cosmology and imposing Ω_Λ + Ω_M + Ω_K = 1 with Ω_K = 0.0. Unless otherwise stated, we assume U = P = Q = 0, corresponding to no evolution of the galaxy population in either mass or number. The lens-redshift test equations (7) and (13) are used in this instance, and the likelihood described above is constructed and maximized. A projection of the likelihood surface along the Ω_Λ axis for sample A1 is shown in Fig. 3. In the upper panel, the likelihood function is calculated using the IVDF for sample A1; several values of σ* are shown for completeness. Assuming a value of σ* = 225 km s⁻¹ for elliptical galaxies (ORM), we obtain the following limits on the cosmological constant: Ω_Λ = 0.55^{+0.14}_{−0.20} at 1σ confidence. This value is determined by taking the median as the central value and "ruling out" the leftmost ~16% and rightmost ~16% of the total integral (the 'median' method). Alternatively, if we take the mode as the central value and determine the threshold value of the likelihood for which the integral under it comprises ~68% of the total (the 'mode' method), we obtain a slightly higher value of Ω_Λ = 0.60^{+0.13}_{−0.19}; this method is similar to what ORM did, except that they assumed a normal distribution (the 'normal' method). Their assumption turns out to be quite reasonable, as we recover the same results (Ω_Λ = 0.60^{+0.12}_{−0.18}) when applying their method to our sample A1. For the IVDF, we note that the error bars we obtain are much smaller than in ORM: this is certainly due to the larger number of lens systems in our sample A1 (the ORM sample I had 15 lenses compared to 42 in our sample A1). Although our error bars are smaller, the numbers quoted here are for a single value of σ* = 225 km s⁻¹, and the determinations of Ω_Λ using the 'mode', 'median' and 'normal' methods are entirely consistent with one another within the errors. The corresponding results for sample A1 using the MVDF are also shown in Fig. 3 for the 'mode' method, with Ω_Λ = 0.67^{+0.11}_{−0.15} obtained using the 'normal' method. Again, these quoted values are for a single value of σ'* = 88.8 km s⁻¹, and once more the constraints on Ω_Λ from the three different criteria are consistent with one another given the errors. Even with the improvement of using the MVDF compared to earlier work, the sensitivity of the lens-redshift test to Ω_Λ is low, as seen clearly from the fact that using the ±1σ range on σ'* recovers values of Ω_Λ varying from 0.0 to nearly 1.0, the full available range.
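The 'median' and 'mode' prescriptions used above amount to two different ways of turning a tabulated likelihood curve into a central value and a 68% interval. A minimal sketch of both, for a likelihood sampled on a grid:

```python
import numpy as np

def median_interval(x, L, frac=0.68):
    """'median' method: take the median of the normalised likelihood as the
    central value and cut (1-frac)/2 of the integral from each tail."""
    c = np.cumsum(L) / np.sum(L)
    lo, med, hi = np.interp([(1.0 - frac) / 2.0, 0.5, (1.0 + frac) / 2.0], c, x)
    return med, lo, hi

def mode_interval(x, L, frac=0.68):
    """'mode' method: centre on the peak and keep the highest-likelihood
    bins until they enclose frac of the total integral."""
    order = np.argsort(L)[::-1]
    n_keep = np.searchsorted(np.cumsum(L[order]), frac * L.sum()) + 1
    keep = order[:n_keep]
    return x[np.argmax(L)], x[keep].min(), x[keep].max()
```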
Unsurprisingly, the recovery of Ω_Λ using the MVDF and the IVDF yields very similar values, as these functions differ only in the extremely high velocity dispersion tail (σ > 380 km s⁻¹), as shown in Fig. A1 in the Appendix. A marked difference between the IVDF and MVDF shows up in the velocity range 380–400 km s⁻¹, which is characteristic of cD galaxies. Strong lensing events from such galaxies are difficult to model, as their position at the centre of clusters means the events are assisted by additional smoothly distributed dark matter in their vicinity. Since we excluded all such systems from our sample A1, it is not surprising that the inferred values of the cosmological constant using the IVDF and MVDF are in good agreement. We plot the corresponding results for the larger sample A2 in Fig. 4. We find that the recovered value of Ω_Λ is higher for sample A2 (the mode value is shifted by about 0.25). Sample A2 does include a higher proportion of larger separation lenses (clearly seen in Fig. 1). This indicates a potential systematic bias that skews the recovery of Ω_Λ and is sensitive to how well the 'true' separation distribution is sampled. To obtain robust constraints on Ω_Λ with the lens-redshift test we need not only large samples but also lenses that accurately reflect the true underlying distribution of image separations. We note here that the SLACS lenses are included to demonstrate this bias clearly, as their image separations are skewed to larger values as a consequence of the selection technique. Four key results emerge from these plots. First, the lens-redshift test is not very robust in constraining the value of the cosmological constant with current samples. This was already suggested by K92, but we demonstrate it more clearly here even with two notable improvements: a larger sample of lenses and the use of the MVDF. The likelihood curve is very shallow and consequently the error bars are rather large. Second, the MVDF and IVDF results are comparable; therefore the inefficacy of the lens-redshift test does not appear to stem from systematics arising from the use of the IVDF. Third is the notable sensitivity of the constraints on Ω_Λ to the parameter σ'*. The value of σ'* emerges from the fit of a functional form to the observed velocity dispersion function and depends on the completeness of the measurement, i.e. on adequate sampling of the high and low velocity dispersion tails of the observed galaxies. Finally, inclusion of the SLACS lenses (19 confirmed lenses + 9 candidates), with their relatively larger separations, pushes the recovered Ω_Λ to higher values. The finite number of lens systems is clearly implicated here, as evidenced by the error bars on Ω_Λ, and is a key limitation. In conclusion, as we show in the next section, while a large number of lens systems will go a long way toward increasing the robustness of this test in the future, it is crucial to simultaneously sample the separation distribution uniformly.

4.3 Constraints on galaxy evolution

We now investigate galaxy evolution using the lens-redshift test, with n*, L*, σ*, n'*, L'* and σ'* varying with redshift according to equations (8), (9) and (10) and their primed versions; the equations used are (11) and (14). Fixing the cosmological model to h = 0.7, Ω_M = 0.3, Ω_K = 0.0 and Ω_Λ = 0.7, we determine U and P. As outlined in Section 4.1, the likelihood function is constructed fixing Ω_Λ and then maximized to obtain constraints on U and P.
We obtain constraints on U and P for various samples: the ORM sample I lenses, sample A1 and sample A2, all of which are plotted in Fig. 5. We calculate the U–P contours for the ORM sample I, applying our analysis methods and using the MVDF; the MVDF was not available at the time of the ORM analysis. Our calculation of the U and P parameters using the IVDF is in very good agreement with their results: the orientation and calibration of the confidence-level contours agree. Although the contours of all the samples overlap quite well, the difference in the peak values of U and P determined for our samples A1, A2 and ORM sample I is significant. The likelihood results in the U–P plane for sample A1, using the IVDF and the MVDF, are shown in the upper and lower panels of Fig. 5, respectively. We obtain a maximum in L at {U = 0.11, P = −1.40} for the IVDF and at {U = 0.10, P = −1.24} for the MVDF, again showing no significant dependence on the choice of velocity dispersion function. However, we note that the contours close along the U-axis for the MVDF case, whereas they do not for the IVDF: using the IVDF to calculate the likelihood therefore lowers the sensitivity to mass evolution. For our sample A2, the maximum of L is at {U = 0.32, P = −1.60} for the IVDF and at {U = 0.28, P = −1.57} for the MVDF. For the ORM sample I, the maximum of L lies at {U = −0.08, P = 0.44} for the IVDF and at {U = −0.07, P = 0.85} for the MVDF. The likelihood peak moves toward increasingly positive values of U for sample A2 compared to A1 and the ORM sample I. This indicates a strong sensitivity to the fraction of large separation lenses: sample A2 has a larger proportion of those and therefore predicts stronger mass evolution for the lens ensemble. The fact that U and P have opposite signs is consistent with mass conservation: we have either a larger number of lower mass galaxies (P > 0, U < 0) or fewer, more massive galaxies (P < 0, U > 0) in the past compared to today. However, the case (P < 0, U > 0) is in conflict with the currently accepted hierarchical model of galaxy formation, with its bottom-up assembly of structure. Our primary conclusions on deriving galaxy evolution parameters are: (i) we reproduce the trends reported by ORM for their sample I when we use the IVDF, as they did; (ii) we find the likelihood peak position to be insensitive to the choice of IVDF versus MVDF for the ORM sample I; (iii) we recover U and P values consistent within the errors for all our samples; (iv) with a larger number of lenses (as in sample A2) we obtain slightly increased sensitivity to P compared to the ORM sample I; (v) the likelihood peak shifts systematically to higher U values for sample A2, which contains a higher proportion of large separation lenses than sample A1. To summarise, there is a notable observational bias in recovering mass evolution that depends strongly on how well the underlying true separation distribution is sampled by the detected lenses.

4.4 Investigation of systematic observational biases

Unbiased lens surveys that uniformly sample the full distribution of separations (Kochanek 1993) are needed in order to apply the lens-redshift test to constrain galaxy evolution parameters, as found above. If the sample is even slightly skewed toward larger separations, biased values of the galaxy evolution parameters are retrieved. To investigate this issue further, we create a mock sample of a hundred lenses (sample B), all with complete information (i.e.
source redshift, lens redshift, and image separation) known, generated assuming no evolution, in order to understand the observational biases that likely affect our analysis. We randomly assign the source redshift from a normal distribution centred at z = 2 with a dispersion of 1. We then randomly assign the angular critical radius, as half the lens separation, from the probability distribution of image separations given by Kochanek (1993), where we set γ = 4 and α = −1 in their equation (4.10) for a flat Universe.

Figure 5. Comparison of constraints on galaxy evolution for different samples: the derived U–P contours for sample A1 (blue, solid lines), sample A2 (black, dot-dashed lines) and the ORM sample I (red, dashed lines). In the upper panel the IVDF was used for the calculation, and in the lower panel the MVDF. The three contours shown for each sample are the 1σ, 2σ and 3σ confidence levels, and the cross marks the no-evolution locus (U = 0, P = 0). While there is some degree of overlap among the various samples, it is clear that the peak value of U, the mass evolution parameter, shifts toward more positive values, consistent with sample A2 having a larger proportion of more massive lenses and, correspondingly, a higher fraction of large separation lenses. The error bars on the ORM sample I are larger since it has only 15 systems, whereas sample A1 has 42 systems and sample A2 has 70. We note that, within the errors, the values of U and P obtained for the different samples are in good agreement.

We calculate the differential optical depth distribution for each lens, using equation (7), and randomly pick a lens redshift from it.
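This mock-generation recipe can be sketched as follows. The separation law used here is a placeholder (a gamma distribution), not Kochanek's equation (4.10), and dtau_dz is assumed to be the no-evolution density sketched in section 2; the lens redshift is drawn by inverse-CDF sampling of that density.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_from_density(x, pdf, n, rng):
    """Inverse-CDF sampling from a density tabulated on a grid x."""
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, x)

def make_mock(n_lens, dtau_dz, rng):
    """Generate (z_l, z_s, r) triplets following the recipe in the text,
    with a placeholder separation distribution."""
    mocks = []
    for _ in range(n_lens):
        zs = max(0.2, rng.normal(2.0, 1.0))   # source redshift, clipped
        r = rng.gamma(2.0, 0.4)               # placeholder separation law
        zgrid = np.linspace(1e-3, zs - 1e-3, 300)
        p = np.array([dtau_dz(z, zs) for z in zgrid])
        zl = draw_from_density(zgrid, p, 1, rng)[0]
        mocks.append((zl, zs, r))
    return mocks
```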
For the full mock catalogue (sample B), the maximum of the likelihood L is obtained at {U = −0.03, P = −0.57}. The input parameter values (U = 0, P = 0) are not exactly recovered, owing to finite-sample variance. Several subsamples of sample B were then evaluated, after cutting the sample on source redshift and on image separation. Creating a subsample of 90 lenses by discarding the 10 highest source redshift systems, we find that the maximum of the likelihood L shifts to {U = −0.09, P = −0.21}; for a subsample of 90 lenses generated by discarding the 10 lowest source redshift systems, the peak is at {U = −0.03, P = −0.54}. To examine the additional sensitivity to the number of lenses, we then construct a subsample of the 58 lenses with the lowest source redshifts, finding {U = −0.07, P = −0.80}, and of the 58 lenses with the highest source redshifts, finding {U = −0.02, P = −0.48}. These results are shown in the lower panel of Fig. 6: no significant systematic bias is introduced by selection on source redshift. The only difference seen in these subsamples is the effect of the variation in the number of systems: the contours are more extended for the two cases with 58 systems than for the cases with 90 systems. We then cull sample B on the basis of image separations; the results for these subsamples are shown in the upper panel of Fig. 6. The key systematic that we study in further detail is the role of biased sampling of the image separation distribution. The mock catalogue generated above was now cut on the lens angular critical radius r, and biased subsamples were again generated to preferentially sample larger or smaller separation systems. First, we constructed a subsample of 90 systems by discarding the 10 smallest separation systems: in this instance the maximum of the likelihood lies at {U = −0.03, P = −0.49}. Picking instead 90 systems by discarding the 10 largest separation systems, we find a different maximum, at {U = −0.02, P = −0.62}. Similarly, making a more extreme selection, we pick 58 systems from the mock by discarding the 42 largest separation systems; this is our extreme biased sample skewed to small separations, and for it we find {U = −0.14, P = −0.19}. Finally, for a mock with the 58 largest separation lenses (our extreme biased sample toward large separations) we find {U = −0.10, P = 1.92}.

Figure 6. Upper panel (biased separations): U–P contours for our full mock sample B and several subsamples selected by cuts in image separation: (i) lowest 90% of separations (blue, dot-dashed thin lines), (ii) lowest 58% of separations (red, dotted thin lines), (iii) largest 90% of separations (blue, solid thin lines), (iv) largest 58% of separations (orange, dashed thin lines). Lower panel (biased source redshifts): U–P contours for subsamples cut instead on source redshift: (i) lowest 90% of z_s (blue, dot-dashed thin lines), (ii) lowest 58% of z_s (red, dotted thin lines), (iii) highest 90% of z_s (blue, solid thin lines), (iv) highest 58% of z_s (orange, dashed thin lines). The full mock sample B results (black, solid thick lines) are shown in both panels for comparison. The three contours shown for each sample are the 1σ, 2σ and 3σ likelihood contours, and the cross marks the no-evolution locus (U = 0, P = 0). A clear systematic bias is introduced by selection on lens separation: in particular, the velocity dispersion evolution parameter shifts dramatically from near-zero values (U ∼ 0) to negative values (U ∼ −0.1) when the largest separation lenses are removed from the sample.

Our analysis clearly indicates the presence of a systematic bias introduced by selection on lens separation. This effect is seen especially clearly in the smaller subsamples (with 58 systems): the velocity dispersion evolution parameter shifts dramatically from near-zero values (U ∼ 0) to negative values (U ∼ −0.1) when the highest separation lenses are removed from the sample. We see clearly from Fig. 6 that lens data with biased sampling of the underlying image separation distribution introduce a systematic shift in the recovered values of the galaxy evolution parameters, whereas data with biased source redshifts yield unbiased estimates of U and P. Galaxy evolution parameters are thus extremely sensitive to observational biases in the separation distribution of lens systems. Ground-based optical surveys and high-resolution HST surveys each preferentially detect an optimal range of separations: lens systems found in ground-based surveys are likely skewed to larger separations than those found in HST surveys. Having narrowed down the plausible source of the systematic bias, we redo the analysis making a similar cut on our observed lens sample A1 to verify this finding. We remove the five (10%) largest image separation lenses, thereby creating sample C. We then compare the recovery of U and P for this biased sample with samples A1, A2, B and the ORM sample I. The maximum of L is obtained for sample C at {U = −0.03, P = −0.51} (IVDF) and at {U = −0.01, P = −0.40} (MVDF).
In Table 1 we list the positions of the peak values of the likelihood function in the U–P plane, for our samples A1, A2, B, ORM sample I and sample C, for the IVDF and MVDF. The full results are shown in Fig. 7. The resultant trend clearly demonstrates the strong bias, now replicated with cuts in the observed data, introduced by artificially removing large image separation systems.

Table 1. Positions of the peak values of the likelihood function in the U–P plane, for samples A2, A1, B, C and ORM I, for the IVDF and MVDF, in order of U-peak position. Comparing samples A2, A1 and C, we confirm the same trend observed in the mock subsamples: the velocity dispersion evolution parameter shifts to less positive values as we remove the highest separation lenses.

Our result that incompleteness in image separations is a serious current limitation in using strong lensing statistics has also been pointed out elsewhere (e.g. Oguri, Keeton & Dalal 2005).

5 THE EFFECT OF INCOMPLETE LENS DATA ON THE RETRIEVAL OF COSMOLOGICAL PARAMETERS

Previous lens-redshift test analyses have differed in how they handle systems with incomplete redshift information. K96 included an estimate of the probability of failing to measure a system's lens redshift for systems lacking such a measurement. ORM take a more pragmatic approach, discarding all systems with z_s > 2.1 and arguing that systems below that redshift are mostly complete. The former approach is made difficult by the many variables that can prevent a successful redshift measurement (surface brightness of the lensing galaxy, galaxy contrast with respect to the magnified source images, observing conditions during an actual measurement attempt), while the latter approach ignores higher-redshift systems that do have complete redshift information. Such systems are likely to show the strongest sensitivity to precisely the cosmological or evolution effects that are sought in the first place. The approach we adopt here is to marginalise over systems with incomplete redshift information using nested Monte Carlo simulations. Let N_c be the number of systems with complete redshift information and N_u the number of systems with unmeasured lens redshifts. For a given parameter set {X}, we can assign lens redshifts to the N_u sample by drawing from P(z_l | {X}, z_s, r), which gives a sample of lens redshifts {z_{l,u}}. With {z_{l,u}} fixed, we obtain the absolute likelihood L({X}) for the combined N_c + N_u sample. The procedure is then repeated N_MC times, with each iteration using a different set of {z_{l,u}}. This gives an average absolute likelihood ⟨L({X})⟩ and a corresponding scatter δL({X}) for the given set of model parameters {X}. The scatter in the absolute likelihood estimate shrinks to zero as N_u → 0, and can be interpreted as a measure of the uncertainty in the absolute likelihood caused by the incomplete sample. The entire procedure can then be repeated for a different set of model parameters. We argue that this is an attractive method for several reasons. First, it does not ignore existing redshift information for any system, within either the complete or the incomplete sample; this yields as large a sample size as possible and helps to minimize the small-number effects that are traditionally present in lensing statistics. Second, the question of handling biases present in the incomplete sample is addressed objectively, by marginalising over the entire sample rather than imposing an artificial cut on, say, the source redshifts. And third, it allows one to quantify the effect of the incomplete sample on the accuracy of the likelihood analysis through δL({X}). This last point can be used to explore how the precision of the model parameters would respond to future changes in either the complete or the incomplete sample size.
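A compact sketch of this nested Monte Carlo loop is given below. It works in log-likelihood space for numerical stability and reuses the inverse-CDF sampler draw_from_density from the mock-sample sketch; the helper names and the grid function are our own, not the paper's.

```python
import numpy as np

def nested_mc_likelihood(complete, incomplete, p_zl, zgrid_fn,
                         n_mc=200, rng=np.random.default_rng(0)):
    """Marginalise over unmeasured lens redshifts.
    complete:   list of (z_l, z_s, r) with measured lens redshifts.
    incomplete: list of (z_s, r) with unknown lens redshift.
    Returns the mean and scatter of log L over n_mc redshift assignments,
    the analogues of <L({X})> and deltaL({X}) for one parameter set {X}."""
    base = sum(np.log(p_zl(zl, zs, r)) for (zl, zs, r) in complete)
    logLs = []
    for _ in range(n_mc):
        logL = base
        for (zs, r) in incomplete:
            zgrid = zgrid_fn(zs)                       # e.g. linspace(0, zs)
            p = np.array([p_zl(z, zs, r) for z in zgrid])
            zl = draw_from_density(zgrid, p, 1, rng)[0]
            logL += np.log(p_zl(zl, zs, r))
        logLs.append(logL)
    logLs = np.array(logLs)
    return logLs.mean(), logLs.std()

# Repeating this over a grid of {X} = {Omega_Lambda} gives the likelihood
# surface together with the incompleteness-induced uncertainty band.
```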
And third, it allows one to quantify, through δL({X}), the effect that the incomplete sample has on the accuracy of the likelihood analysis. This last point can be used to explore how the precision of the model parameters will be affected by future changes in either the complete or the incomplete sample size. We performed a nested Monte Carlo simulation of our sample A1 (Nc = 42), adding a set of ten mock lens systems with known image separations and source redshifts but unknown lens redshift (Nu = 10), to make up a total of 52 lens systems. We fixed all galaxy evolution parameters (U = P = Q = 0, σ⋆ = 225 km s−1) and varied the cosmological parameters, after setting ΩΛ + ΩM + ΩK = 1 with ΩK = 0.0. Therefore, the parameter set was taken to be {X} = {ΩΛ}. A projection of the (un-normalised) likelihood surface along the ΩΛ axis is shown in Fig. 8; the scatter introduced by the incomplete systems is large enough to dilute the efficacy of constraints on cosmological parameters. While we have argued here that the lens-redshift test with a small lens sample with complete information is insufficient, we further find that even with a small number of systems with incomplete information in a large sample, we lose sensitivity to ΩΛ.

CONCLUSIONS AND DISCUSSION

We investigate the lens-redshift test to assess its robustness in constraining cosmological and galaxy evolution parameters. We apply the test to a much improved lens sample compared to earlier work by K92 and ORM. Moreover, we also use the observationally determined velocity dispersion function (MVDF), instead of relying on the IVDF. MKFS also used the MVDF, but they applied it to the ODTL test, which is more affected by observational biases (mainly the magnification bias) than the lens-redshift test considered here. Finally, we develop a new nested Monte Carlo analysis to quantify the effects of incompleteness on the accuracy of retrieving ΩΛ. Our results suggest that the lens-redshift test is not particularly robust in the determination of either cosmological parameters or galaxy evolution parameters with the currently available samples. We conclude this after careful analysis of 70 lens systems and the generation of several mock catalogues. First, we fix the galaxy evolution parameters to constrain ΩΛ: in this instance very weak constraints are obtained. Moreover, despite using the MVDF for the first time in this test, our results do not differ significantly from earlier work. When we do the converse, i.e. fix the cosmology and look for constraints on galaxy evolution, we find that the results on the evolution parameters are too sensitive to the choice of sample, implying a very strong dependence on the observational bias introduced by lens separations. Finally, we have assessed the limit that incomplete information places on the precision with which the value of the cosmological constant can be determined. We find that even a small number of systems with incomplete information in a large sample can further reduce the significance of the already weak constraints on the cosmological constant. In fact, systems with incomplete information add more noise than signal. For the purposes of constraining cosmological parameters, systems with incomplete redshift information are best excluded. With the small number of systems available at the present time, such a strong cut is not feasible; however, with the expected large number of new lenses from future surveys, the statistics will permit stricter selection of optimal systems. The lens-redshift test is clearly affected by lens selection effects.
An obvious observational strategy for the future would be to observe hundreds of new lenses that fairly sample the full distribution of separations. Such samples are expected from the large area surveys to be performed by future instruments like SNAP and the LSST. These large samples, with hundreds or thousands of lenses at several redshifts, would allow us to better quantify the lens sample selection bias. Moreover, lenses for use in the lens-redshift test ideally need to be relatively "clean", that is, they should not belong to groups, where the presence of additional deflectors or nearby galaxies could affect the image separation and therefore produce skewed lens image separation distributions that would in turn bias the results. It is becoming increasingly clear from the study of individual lens environments that there are no truly isolated lenses. However, what is important from the point of view of the lens-redshift test is that the neighbouring perturbers are not massive enough to alter the image separations beyond the observational positional accuracies. Obviously these accuracies depend on whether space based or ground based data are available for new lens systems. In the large proposed future surveys, which all involve deep imaging, lens systems with many perturbers will need to be culled. It appears from simulations that systems requiring external shear of the order of 10-20% are still viable candidates for the lens-redshift test (Oguri, Keeton & Dalal 2005). Future lens samples should ideally include both early and late type galaxies and span a large redshift range, in order to constrain galaxy evolution parameters. Finally, to be useful, all lens systems should have complete information, since even a small fraction of incomplete systems would significantly decrease the efficacy of constraints on parameters. In the near future, hundreds or thousands of new lens systems will be discovered by these upcoming new instruments. Simultaneously, with many ambitious surveys to study galaxy evolution on-going and planned, progress is likely to come from better knowledge of galaxy evolution. The lens-redshift test, which is currently unable to give decent constraints on cosmology and galaxy evolution because of poor statistics, could eventually prove to be a very profitable means to constrain cosmological parameters and galaxy evolution models robustly.

[Table A1 column descriptions; the first five columns were lost in extraction:] (6) lens redshift, zl; (7) critical radius, r, in arcseconds; (8) grade for the likelihood that the object is a lens: A = I'd bet my life, B = I'd bet your life, and C = I'd bet your life and you should worry (CASTLES); (9) number of images corresponding to each source component, where E means extended and R means there is an Einstein ring (CASTLES); (10) sample: A1 = Sample A1 in this paper, C = Sample C in this paper, I = Sample I in ORM, O = targeted optical discoveries, R = targeted radio discoveries, M = miscellaneous discoveries; (11) references. The numbers shown are the ones actually used in the computations. When computing ORM Sample I, we used the numbers from ORM Table A1 (calculating the critical [sentence truncated in extraction]).

Table A2. List of all lenses belonging to the Bolton et al. (2006) sample. The columns are: (1) lens number; (2) lens name; (3) source redshift, zs; (4) lens redshift, zl; (5) velocity dispersion measured within an aperture, σa, in km s−1; and (6) the status of the lens (C = confirmed; UC = unconfirmed). The numbers shown are the ones actually used in the computations (only the central value of σa was used).
The critical radius r was then computed using equation (2) with h = 0.7, ΩΛ = 0.7, ΩM = 0.3, and ΩK = 0.0. Note that the 19 confirmed lenses and 9 candidates are denoted with a C and UC, respectively, in the final column.

Figure A1. The measured velocity dispersion function (MVDF) and the inferred velocity dispersion function (IVDF) for early-type galaxies. The functional forms plotted here are derived from fits provided in equation (23) of Mitchell et al. (2005). The solid curve is the MVDF and the dashed curve is the IVDF.

Figure A2. The lens-redshift distribution ((dτ/dz)/τ vs z) for all 70 lenses from Sample A1 (numbered in the same order as presented in Table A1 in the Appendix) and the SLACS sample (numbered in the same order as presented in Table A2 in the Appendix), calculated using the IVDF (red, dot-dashed line) and the MVDF (blue, solid line), with the galaxy evolution and cosmological parameters set to U = P = Q = 0, h = 0.7, ΩΛ = 0.7, ΩM = 0.3, and ΩK = 0.0. The vertical dashed line marks the position of the observed lens redshift zl. The peak zp of the probability distribution is skewed to slightly higher zl for the IVDF compared to the MVDF in most lenses.
2007-08-07T19:27:31.000Z
2007-05-22T00:00:00.000
{ "year": 2007, "sha1": "168f7a42c828e8ff6df6e31e2afedaf564f0a99d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1367-2630/9/12/445", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "28593746f9f4a3c61bf51fb17b0a3415de9a7a5e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
234864137
pes2o/s2orc
v3-fos-license
Prioritization of hydroelectric power plant earth dam safety procedures: a multi-criteria approach

The number of procedures focused on dam safety is very large, mainly due to the rules established by different regulatory bodies, the guidelines that are part of the recommended best practices for engineering works, and the common sense and conservatism present in dam operation and maintenance because of the large socioeconomic and environmental impacts that any incident with a dam can cause. In practice, the vulnerability of a dam is inversely proportional to the improvement of safety procedures, such as monitoring and sensing, and the staff's capacity to interpret the information in a timely fashion. Therefore, establishing priorities for these procedures is essential for the plant management to define the scheduling and detailing of inspections and monitoring, as well as training needs. The MCDA model described here was specified based on regulations and practical public domain guidelines. The subjective estimation of preferences was done by the staff of a hydroelectric plant located in central Brazil. It employed the Simos method combined with a procedure adopted to convert the scores to the format of paired comparisons. The weights for dam safety procedures were obtained using the fuzzy AHP method. The method allowed obtaining the classification of safety procedures according to their priorities, and thus provided the plant management with elements to better schedule monitoring and staff training.

Introduction

Vulnerability and reliability are antagonistic concepts. The former is a more wide-ranging concept, with much broader implications. While reliability focuses on the possibility of maintaining the performance of infrastructure elements, vulnerability focuses on the potential for disrupting these elements or degrading them to a point where performance is diminished. Vulnerable does not necessarily mean unreliable, nor does unreliable necessarily mean vulnerable. Reliability is a probabilistic measure of elements in an infrastructure system and their ability not to fail or malfunction, given a series of established benchmarks or performance guidelines (Murray and Grubesic 2007), while vulnerability is a fragility or defect in the design, operation and/or management, making the infrastructure subject to failure or stoppage when exposed to a hazard or threat (Zio 2016). Dams are key assets in terms of critical infrastructure (Murray and Grubesic 2007). The extent and severity of a cascading effect depends on how tightly coupled and vulnerable the infrastructure systems are (Little 2002). In event-driven risk models, the probabilities of occurrence of events or hazards of a certain level are inputs triggering the probability of poor performance or vulnerability, which results in the probability of a set of likely consequences. Depending on the adverse effect of the event on the facility's performance, the consequence can be dam failure (Baecher 2016). The assessment of the physical vulnerability of elements at risk, as part of the risk analysis, is an essential aspect of the development of strategies and structural measures for risk reduction. Understanding, analyzing, and where possible quantifying physical vulnerability is a prerequisite for designing strategies and adopting tools for this reduction (Papathoma-Köhle 2016). A dam fails due to a severe event that can result from heavy rainfall, earthquakes, strong winds, snow and ice, volcanic action, landslides, tsunamis, and wildfires.
All of these events can cause some degree of risk to dams and related infrastructure. To these natural hazards, one can add terrorist acts, design defects, excessively long service lives, aging materials, and unsatisfactory maintenance (Little 2002). When any of these events occurs, the extent of the dam's vulnerability determines whether or not it remains intact. One or a combination of the following vulnerabilities may cause its failure: (a) inadequate design caused by the inability to predict extreme environmental events or insufficient site assessment; (b) design flaws; (c) faults or poor engineering practices and inadequate site supervision during construction and filling of the reservoir; and (d) deficient surveillance, monitoring, and maintenance (Donnelly and Acharya 2020). With respect to the last item, a significant probability of failure is due to human factors, including maintenance, inattention to regular testing, faulty communication during an emergency, or incorrect action at the moment before the occurrence. This involves both cognitive and physical responses that, depending on the event sequence, can have results ranging from minor loss to catastrophic failure (Baecher 2016). Most failed dams either did not have any monitoring system or had a system that was out of order (ICOLD 1995). The most usual safety indicators can be classified into three categories: mechanical effects, such as deformation and displacement; hydraulic effects, such as seepage and pore water pressure; and environmental effects, such as high reservoir water levels, precipitation, and temperature (Santillán et al. 2013). Among these indicators, the monitoring of pore water pressure is essential to assure dam geotechnical stability because it detects internal erosion and seepage problems (Pan and Dias 2016; Pagano et al. 2010), although the materials used in the internal structure of dams are arranged in zones to enhance the potential filtering capability to arrest piping if it starts (Koelewijn and Bridle 2017). Prudent maintenance practice uses all possible means to better understand dam performance, making inspection and monitoring methods more efficient. The actions developed and followed are: detailed monitoring and inspection through instrumentation, reading and interpretation of data; detailed emergency action plans, classified by type of hazard, containing step-by-step actions to be carried out by the people in charge; periodic reviews of the dam's status and evolutionary evaluation of indicators; detailed investigation of the indicators that reveal risk of failure; and preparation of a remediation plan and rehabilitation procedure (Adamo et al. 2017). The regulatory provisions on dam inspection and monitoring are consolidated (ICOLD 2015), and evaluation of dam inspection and monitoring is commonly found in the academic literature (Seyed-Kolbadi et al. 2020; Li et al. 2020; ASCE 2018; Mizuno and Hirose 2009; Lewin et al. 2003). In the areas of risk assessment, safety and vulnerability of dams, several studies have been published. Andersen et al. (1999) proposed a multistep approach for the ranking of maintenance and repair actions and monitoring of embankment dams, where the most important action in terms of dam safety in the worst condition was preferred. Curt et al. (2008) adopted a possibility theory-based approach to validate different uncertain pieces of information based on sensory evaluations and global judgments.
Curt and Talon (2013) suggested a method based on identification and assessment of criteria for the various sources of imperfection through visual observations, monitoring, calculation, and construction measurement; the ELECTRE TRI method was then used to aggregate the values resulting from the assessment of the criteria. Vojteková and Vojtek (2020) applied a technique involving multi-criteria decision analysis (MCDA) and a geographic information system (GIS) to identify and analyze landslide susceptibility at a local spatial scale. Larrauri and Lall (2020) obtained a dam hazard ranking through AHP in a model focused primarily on identification of methods for the rapid quantification of the trigger probability of dam failure, and identification of the critical infrastructure that would be impacted if the dam fails. The fuzzy analytic hierarchy process (FAHP) has also been used in risk assessment (Arce et al. 2015; Govindan et al. 2015; Zou et al. 2013; Chan and Wang 2013; Arikana et al. 2013; Li et al. 2013; Avdi et al. 2013; Zeng et al. 2007). Masoumi et al. (2018) adopted an integrated decision model based on the fuzzy analytic hierarchy process (FAHP) and the fuzzy technique for order of preference by similarity to ideal solution (FTOPSIS) to evaluate nine criteria and 42 alternative geotechnical instruments for monitoring and sensing of a clay core embankment dam. Jing et al. (2018) proposed a method for risk assessment of small reservoirs without monitoring equipment and verified the applicability and effectiveness of this method in two engineering cases. Yucesan and Kahraman (2019) adopted a Pythagorean fuzzy analytical hierarchy process (PFAHP) using linguistic expressions to calculate the weights of 20 hazards in the operation of hydroelectric power plants. Ribas and Pérez-Díaz (2019) used fuzzy approximate reasoning for dam safety risk assessment, with FAHP to rank the indicators in a two-stage risk assessment process. Wijitkosum and Sriburi (2019) described four main factors influencing drought: climate, physical factors, soil, and land utilization, each containing ten sub-criteria to identify severity levels and specific issues. Since each hydroelectric plant has its own characteristics and the number of procedures and records to support mitigating vulnerability is large, the proposed method establishes a priority order for the operation, inspection, sensing and maintenance procedures related to risk mitigation, aiming at supporting the planning of the instrumented health monitoring of an earth dam as well as better scheduling of training activities. This paper is organized into five sections, including this introduction. The materials and methods section presents the main characteristics of the FAHP and the Simos technique; it also describes the three methodological phases: specification, scoring and weighting. The case study section describes the earth dam to which we applied the proposed method. The results and discussion section defines the monitoring procedures and their degrees of importance as estimated by the experts. The paper concludes by summarizing the main points and the practical consequences of the results.

Fuzzy analytic hierarchy process

The Analytic Hierarchy Process (AHP) is a method that organizes a list of criteria in order of priority. The procedure uses pairwise comparisons, whose order of importance refers to an ordinal scale and depends on the subjective judgment of an expert.
The expert is asked to estimate by how much one criterion dominates another with respect to a given attribute (Saaty 2008). AHP is a method that measures intangibles in relative terms, so when the expert tries to subjectively assess the relative importance between two criteria, he or she is conditioned by bounded rationality and heuristics (Hilbert 2012). The first situation occurs because the brain's ability to process information is biologically limited, and the second occurs because the specialist tries to use shortcuts in the judgment to reduce cognitive effort. This anomaly inherent in the subjective estimation procedure results in comparisons that incorporate biases and errors. To incorporate imprecision in the process of comparing criteria, a viable alternative is the fuzzification of scores using the FAHP. The technique takes the treatment of inaccuracy into account through a measure that represents the degree of fuzziness (δ). This metric is adopted to determine the α-cuts, boundaries of a fuzzy set characterized by a membership function, which are assigned to each pairwise comparison. This concept allows a given evaluation to have a clear numerical representation of something vague and imprecise, characteristic of the natural language used in the decision-making process (Klir and Yuan 1995). The first solution for the FAHP method was proposed by Van Laarhoven and Pedrycz (1983), in which the fuzzified weights result from the normalized estimated values of a logarithmic regression. Buckley (1985) proposed a simpler approach, in which the criteria weights are equal to the relative means of the geometric means calculated along the columns of the fuzzy pairwise comparison matrix. Mikhailov (2000, 2002, 2003) proposed the solution of a linear programming problem which maximizes the consistency index, or degree of satisfaction, restricted to a set of inequalities based on the α-cuts and a certain degree of tolerance. The FAHP solution proposed by Chang (1996) is a synthesizing method whose algorithm resolves hierarchy problems associated with fuzzy logic (Zhu et al. 1999), and it has proven to be practical and transparent (Kahraman et al. 2003). However, the method has some practical problems. One occurs whenever zero is used as a divisor or data are out of range (Zhu et al. 1999); this weakness is solved by altering the fuzzified values to 1, 9 or 1/9, depending on the case. In addition, the extent analysis method may assign a zero weight to one or more dominated criteria, causing the criterion/criteria to be disregarded in the decision process (Wang et al. 2008); this is solved by assigning high values to δ. Finally, it is difficult to meet the consistency requirement in pairwise comparison, which gets worse when the number of criteria is large. Ribas and da Silva (2015) proposed the adoption of the Simos method (Figueira and Roy 2002; Pictet and Bollinger 2005), which significantly reduces the risk of inconsistency during the comparison process and also decreases the effort of subjectively assigning the scores (Li et al. 2013). The Simos elicitation method is used to obtain the relative comparison scores of the degrees of importance between criteria. An expert receives a certain number "n" of identified cards and three times "n" blank cards. The expert is initially invited to sort the identified cards in descending order of importance. He/she is then asked to insert blank cards among the identified cards, where the number of blank cards inserted defines the degree of difference in importance.
The identified card positions divided by the total number of cards result in normalized scores. These scores are then converted to the Saaty scale using an eight-plus-one base scale.

Specification phase

The flowchart of Fig. 1 depicts the progress of the FAHP method through three phases: specification, scoring and weighting. The surveillance, instrumentation, monitoring, and data acquisition of the dam involve periodic readings and systematic analysis of the installed geotechnical instrumentation, whose degree of detail and frequency of inspections depend on the age, size and location of the dam. In addition, the applicable regulations determine the elaboration of contingency plans for different hazard scenarios, covering operational actions in emergency situations, instructions, communication, and evacuation routes for those affected downstream. Each criterion is detailed for structural elements, types of measurement instruments, specific procedures for each hazard scenario and maintenance routines, as appropriate, and the sub-criteria are analyzed and compared with each other to avoid redundancies. Acronyms are assigned to the criteria and sub-criteria to facilitate identification in the FAHP model. For instance, the criterion "maintenance records" is codified by the acronym MAN, and one of its sub-criteria, the "updated maintenance logbook", is codified by the acronym MAN.LOG.

Scoring phase

According to the Simos elicitation method, the expert is provided with cards identified with the criterion acronyms and several blank cards. The cards containing the criterion identifications are lined up by the specialist in decreasing order of importance. When two criteria are judged to be of equal importance, the cards are placed side by side. Then the expert is asked to insert one, two or three blank cards between the identified cards, for comparisons of "a little more important", "more important" or "much more important", respectively. Each criterion card has a value corresponding to its position in ascending order. If two or more criteria are tied, they have the same value, corresponding to their average position. Although a blank card occupies a position, no score is assigned to it, since it is not a criterion. The Simos weights are the relative position values in percentage. Instead of performing the normalization on base 100, as proposed by the Simos method, or using the perceived difference between the two extreme criteria, according to the revised Simos method, we adopted the eight-plus-one base scale. The advantage of this approach is that the resulting matrix is transitive (Ribas and da Silva 2015). The result must be rounded to the nearest integer and inverted, so that the criterion scores are converted to the equivalent Saaty scale. The matrix of paired comparisons contains the distances between pairs of Saaty scores. For example, if the Saaty scores for two criteria "a" and "b" are calculated as 4 and 8, respectively, the importance score of "b" over "a" will be their distance plus one, 1 + (8 - 4) = 5, while the importance score of "a" over "b" will be the inverse, 1/5. These steps, from the Simos elicitation method to the Saaty paired comparison matrix, must be repeated for all the sub-criteria subordinated to each criterion, so that the result is a matrix of paired comparisons between criteria and several matrices of paired comparisons between sub-criteria, one for each criterion.
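The scoring phase lends itself to a short sketch. The Python fragment below is a minimal illustration, not the authors' code: the card representation and the linear mapping of normalized weights onto the 1-9 range are assumptions, since the paper's exact "eight plus one" rounding-and-inversion step is not fully specified, so scores may differ by a step from those in Tables 1 and 2. The distance rule for the paired comparison matrix follows the worked example in the text.

```python
import numpy as np

def simos_positions(ranking):
    """Average card positions for a Simos ranking.

    `ranking` lists levels from least to most important; each level is a
    list of tied criterion names, and [None] stands for a blank card.
    """
    positions, pos = {}, 0
    for level in ranking:
        start = pos + 1
        pos += len(level)
        for name in level:
            if name is not None:                  # blank cards occupy a
                positions[name] = (start + pos) / 2.0  # position, no score
    return positions

def saaty_matrix(positions):
    """Normalized Simos weights -> 1..9 scores -> paired comparison matrix."""
    names = list(positions)
    w = np.array([positions[n] for n in names], float)
    w /= w.sum()                                  # normalized Simos weights
    span = w.max() - w.min()
    # Assumed linear rescaling onto the 1..9 (eight-plus-one) scale.
    scores = np.rint(1 + 8 * (w - w.min()) / span) if span > 0 else np.ones(len(w))
    n = len(names)
    A = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            d = 1 + abs(scores[i] - scores[j])    # distance rule from the text
            A[i, j] = d if scores[i] >= scores[j] else 1 / d
    return names, A
```

With scores 4 and 8 for "a" and "b", the distance rule gives A[b, a] = 5 and A[a, b] = 1/5, matching the example above, and the resulting matrix is transitive by construction.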
Weighting phase

The degree of fuzziness (δ) is set subjectively for the expert based on the extent of his/her expertise on the topic of analysis (Espino et al. 2014; Keprate and Ratnayake 2016). The purpose of δ is to compensate for the lack of precision of the specialist when making comparisons between the criteria, by establishing an α-cut for a triangular membership function (TMF). The δ value is 1.0 when the expert has participated in similar projects and has demonstrated high involvement during the elicitation phase; the value of δ is 2.0 when only one of these two requirements is met, and 3.0 otherwise. A fuzzy number Mij described by a TMF assumes values in the interval {Mij | 1/9 ≤ Mij ≤ 9} and is represented by the lower (lij = mij − δ), modal (mij), and upper (uij = mij + δ) values. The fuzzy synthetic extent (Si) for each Mij is determined, noting that each Mij is a TMF, so that Si is a triplet containing lower, modal, and upper values. When comparing two convex fuzzy numbers S1 and S2, the degree of possibility must be a value between 0 and 1, determined through the min operator (Chang 1996). The weight vector is obtained by normalizing the degree of possibility vector. Then, the normalized weights calculated for the criteria and the sub-criteria are presented to the expert, who judges whether the results are consistent with his/her expectations; if not, the scoring phase should be reevaluated. Finally, the degree of importance results from multiplying the weights of the criteria by the weights of the corresponding sub-criteria and by 100.

Case study

The Corumbá IV hydropower plant is located on the upper reach of the Corumbá River, at the geographical coordinates 16º19′22″S and 48º11′15″W, in Goiás state, Midwest Brazil (Fig. 2). The Corumbá River is a tributary on the right bank of the Paranaíba River, and the Corumbá IV reservoir is part of the reservoir system of the Paraná River's hydrographic basin. The reservoir has approximately 173 km² of flooded area, a total maximum volume of about 3.7 × 10⁹ m³ (3.7 trillion liters) and a useful volume of 0.8 × 10⁹ m³ (800 billion liters). The filling of the reservoir began at the start of 2005, and the first generator went into operation about 11 months later. Its predominant shape is elongated, without excessive arms, and with relatively large depth; its average depth is about 21 m. The earth dam was built on the Corumbá riverbed, on a predominant soil classified as Dystroferric Hapli Cambisol, with an approximate granulometric composition of 60% sand, 30% clay and 10% silt. The dam is therefore composed basically of sand and clay, because of the use of construction materials found in the vicinity of the project. The completed dam is 10 m wide at the crest, 1,290 m long, and has a maximum height of 76 m in the stretch of the river channel. The earth dam core is built of clay soil, being impermeable, and is protected by other soils and externally by altered rock soils, which are more resistant. The powerhouse was built next to the left abutment of the dam and dimensioned to contain two sets of hydro-generators and their auxiliary equipment. The total installed power is 127 MW, divided into two generator sets of 63.5 MW each. The energy generated by this project serves up to two million people per month, guaranteeing energy for the Federal District and surrounding region.
The turbines are of the vertical axis Francis type, with armored spiral case and elbow draft tube, suitable for direct drive of three-phase alternating current generators. The spillway, on a free (uncontrolled) crest, was designed for a head of two meters and has a capacity of 1,550 m³ per second, with a total length of 425 m. It is a surface spillway with a dissipation basin. The water empties into a concrete structure with a free edge, that is, without gates, descending through an elongated "S"-shaped structure, called a ski jump, that launches the water. The adduction circuit consists of a tower water intake, with seven openings protected by grids, through which water is captured and taken to the intake well, which was excavated in rock and covered with reinforced concrete. After descending through the intake well, the water reaches the adduction tunnel, 394 m long, also excavated in rock and lined with reinforced concrete; this is the same tunnel that was used to divert the river during the construction phase. The final stretch of the tunnel, close to the powerhouse, has metallic armor in addition to the concrete lining.

Specification phase

The vulnerability of a plant must be mitigated by means of methods, processes, instruments, and practices that are ideally able to predict and face situations of imminent danger. These elements were identified and classified according to their scope of use, based on Brazilian dam safety legislation (Brasil 2010), the technical guidelines of good engineering practices recommended for public and private generation concessionaires (Eletrobras 2003), and the general guidelines for companies aiming to ensure adequate safety conditions for dams, from construction to decommissioning (ANA 2016). Nine elements identified within their respective scopes were used as criteria in the MCDA model. The subdivision into sub-criteria depends on the documents, processes, practices, or instruments as they were classified in the criteria.

Documentation (DOC): these records must meet dam safety criteria from the preliminary and feasibility studies to the final as-built project. In this item, we analyzed the available as-built designs, construction and commissioning documentation, project description and specifications, hydrological studies, geological and geotechnical studies, seismological studies, foundation studies, spillway design and water intake design. The sub-criteria are Pre-construction engineering and design (DOC.ENG); Hydrology investigations (DOC.HYD); Geotechnical investigations (DOC.GEO); Sub-surface explorations and foundation investigations (DOC.FOU); Service spillway design (DOC.SPI); and Water intake design (DOC.WIN).

Inspections (INS): dam safety inspections are divided into regular and special inspections, the first being carried out periodically, with frequency determined according to the risk category and potential damage, aiming at assessing and detecting the existence of anomalies. The regular safety inspection evaluates the situation and danger level of the upstream slope, downstream slope, right and left abutments, crest, spillway, water intake, reservoir, instrumentation, ducts and shielding, turbines, alternators, substation, and powerhouse.
The special safety inspection is prepared, as instructed, by a multidisciplinary team of specialists, depending on the risk category and the potential damage associated with the dam during construction, operation and deactivation, and must consider changes in conditions upstream and downstream of the dam.

Operation (OPE): this criterion describes the procedures for the operation of the water intake, spillway, and sluiceway, to allow satisfactory operation of the dam, in addition to keeping it in safe condition and monitoring its behavior to detect any anomalies in a timely manner. In this part, the existence of procedures to be adopted in the operation of the reservoir is verified. The sub-criteria are: Reservoir levels, affluent and effluent water flows (OPE.RES); Service spillway operating records (OPE.SPI); Detailed information concerning seepage control (OPE.SEC); Bottom outlet operating records (OPE.BOT); Water intake operating records (OPE.WIN); Reservoir water quality management (OPE.QUA); Operating procedure for extreme flood events (OPE.EXT); Operating instruction in the event of general flood gate failure (OPE.GAT); and Operating procedure in the event of loss of communication (OPE.COM).

Maintenance (MAN): maintaining structures and equipment in good condition is intended to ensure that the dam is kept fully operational and safe. For this purpose, the equipment must be inspected and checked at regular intervals, as part of a maintenance program appropriate to the type of equipment, its age, and its intensity of use. Maintenance records of current actions involving the bus, spillway, instruments, and gates are checked. The sub-criteria are Updated maintenance logbook (MAN.LOG); Drainage system maintenance procedures (MAN.DRA); Spillway gates and chute maintenance procedures (MAN.SPI); Safety instrument maintenance procedures (MAN.ITM); Water intake maintenance procedures (MAN.WIN); and Reduction of the erosive process and siltation (MAN.ERO).

Monitoring and Instrumentation (MON): this includes the instrumentation of the body of the dam, the spillway structures, and the foundations. This criterion investigates the monitoring of the following factors (some related to instrumentation): neutral (pore) pressures in the embankment, embankment settlements, surface displacements, and seepage, among others. The dam instrumentation design must be elaborated from the feasibility study and basic design phases of the hydroelectric plant. It should contain the location of the instrumentation, along with the type and quantity of devices to be used. Instrumentation is an objective tool to monitor dams and is of fundamental importance in the safety procedures adopted. The most common instruments in earth dams, and their respective functions, are: topographic frames of reference for monitoring vertical and horizontal displacements; settlement sensors for measuring total and differential settlements and movement of materials; inclinometers for measuring displacements inside embankment dams or foundations; piezometers for measuring water pressures inside the dam body or at the foundation; total pressure cells in the embankment, to measure the total pressure in the fill; flow meters for measuring seepage through the body of the dam and its foundation; water level gages; and weather stations for measuring reservoir air and water temperatures and precipitation.
The number of auscultation instruments to be installed in a dam depends on its height and length, the geological formation of the soil, and the materials used in the body of the dam. The frequency of readings should be adjusted depending on the occurrence of critical geotechnical or geological conditions, changes in construction procedures, rapid rises or falls of the reservoir level, and severe natural phenomena.

Regular Revision and Updating (REV): the Brazilian Dam Safety Act (Brasil 2010) states that, based on the dam risk classification, a regular dam safety review must be carried out to verify the general state of the dam, considering the current state of the art for the design criteria, the updating of the hydrological data, and changes in conditions upstream and downstream. Thus, an extensive review of the technical documentation is carried out, comprising hydrological, geological, geotechnical, and seismological studies. The electric energy regulator audits the records related to the performance of the foundation, reservoir, spillway structure, earthen dam, operation and maintenance procedures, instrumentation, and monitoring. The outcome of the audit is the reassessment of the risk category and the associated potential damage. This criterion assesses whether the information available is sufficient to meet the requirements established by the auditors.

Emergency (EME): this criterion is based on the principle that the priority is to save human lives. There must be an organized command line for emergency situations, with agile and standardized alert and communication procedures, and a plan to resume operational capacity once the hazard has been controlled. The procedures must be specific to each threat: an external threat (invasion, for example); fire; dam damage; and inundation and leakage of chemical waste. For each threat, the sequence of steps for device startups and shutdowns and for plant evacuation must be determined. The sub-criteria are Disaster committee (EME.DIS); Emergency communication (EME.COM); Assignment of responsibilities (EME.REP); Mass care (EME.MAS); Animal care (EME.ANI); and Recovery (EME.REC).

Evacuation (EVA): the population affected by the dam must be identified to enable implementing an instruction plan and escape routes, aiming to minimize the damage and risks that may occur in a hypothetical breach. Therefore, in this criterion, a survey is performed to detect the existence or absence of programs for training the population and determining the escape routes: instruction plan, escape routes and meeting points, and self-rescue zone. The sub-criteria are Jurisdiction (EVA.JUR); Administration and logistics (EVA.ADM); Flash flood alarms (EVA.ALA); Escape routes and emergency exits (EVA.ROU); Mass care facilities (EVA.MAS); and Supplies of food, water, sanitation materials, clothing, bedding and first aid items (EVA.SUP).

Scoring and weighting phases

To illustrate the scoring phase, we demonstrate the steps that were followed with the general manager (GM). Nine cards containing the identification of the criteria and several blank cards were handed to the interviewee. He was asked to rank the cards serially in order of importance, with two or more cards in parallel meaning a tie for that position. The interviewee judged that the order of importance was headed by INS and MON, followed by MAN in second, REV and DOC in third, OPE in fourth, and PRE, EVA and EME in fifth. The degrees of importance between adjacent groups were all judged to be "a little more important".
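For concreteness, the GM's ordering can be fed into the sketch given after the scoring-phase description. The encoding below is hypothetical, introduced only for illustration, and, as noted there, the resulting Saaty scores may differ by a step from those in Tables 1 and 2 because the paper's exact conversion is not fully specified.

```python
# GM's ordering, least to most important; each [None] is the single blank
# card encoding "a little more important" between adjacent groups.
gm_ranking = [
    ["PRE", "EVA", "EME"], [None],
    ["OPE"], [None],
    ["REV", "DOC"], [None],
    ["MAN"], [None],
    ["INS", "MON"],
]
positions = simos_positions(gm_ranking)   # e.g. OPE -> 5.0, MAN -> 10.0
names, A = saaty_matrix(positions)        # input matrix for the FAHP step
```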
Table 1 shows the steps of the Simos method used to obtain the criterion weights from the initial ordering: the criterion ranking, the number of criteria at each position, the position order, the average position, and the weights resulting from the average position divided by the sum of positions. The same table shows the scores on the Saaty scale converted from the weights obtained by the Simos method (Ribas and da Silva 2015); in this case, the values were mapped onto the base-8-plus-1 scale for subsequent inversion. Table 2 shows the matrix of paired comparisons, which, together with a fuzzification degree equal to 3.0, constitutes the input data of the FAHP method. The resulting weights for the criteria can be seen in the last column. A degree of fuzzification of this magnitude indicates that the experts' imprecision in estimating preference levels is relatively high. For example, the degree of preference of INS in relation to MAN is obtained as the difference between the two scores plus one, (8 − 7) + 1 = 2, representing a slight preference for INS. The Simos method was also used in the comparisons between the sub-criteria of each of the nine criteria. We adopted a variant of the AHP method to determine the degrees of importance; that is, instead of making cross-comparisons, we compared each set of sub-criteria within each criterion. The reason is that we were not establishing a ranking of alternatives, but rather trying to classify the safety methods and procedures necessary to reduce the dam's vulnerability to the risk of incidents of any nature. To exemplify, we now present the results obtained in the interview with the GM for estimation of the weights of the safety procedures adopted in the scope of maintenance. As can be seen in Table 3, the GM estimated that, in terms of dam safety, Updated maintenance logbook (MAN.LOG), Drainage system maintenance procedures (MAN.DRA) and Reduction of the erosive process and siltation (MAN.ERO) are more important than Spillway gates and chute maintenance procedures (MAN.SPI), Safety instrument maintenance procedures (MAN.ITM), and Water intake maintenance procedures (MAN.WIN). The last column shows the scores on the Saaty scale converted from the weights obtained by the Simos method. Table 4 is assembled in the same way as Table 2; its last column shows that the weights for three of the sub-criteria are identical and much higher than the weights of the other three. This result stems from the GM's preferences, as shown in Table 3. The procedure exemplified for the MAN criterion was performed for the other criteria. Then, the weights obtained for the criteria according to Table 2 were multiplied by the weights obtained for the sub-criteria and by 100. The results of the GM's ranking can be seen in Table 5, in which the degrees of importance for the methods and procedures related to mitigation of the dam's vulnerability are in the range of 0.39 to 5.75, with an average of 1.42 and a standard deviation of 2.59. The degrees of importance in Table 5, according to the GM as well as the other experts, were classified into four classes, where C.1 denotes the "essential" methods and procedures; C.2 those of "high importance"; C.3 those of "medium importance"; and C.4 the "least important". The classifications were obtained through the quartiles calculated from the degrees of importance of each expert.
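The weighting phase (fuzzification of the crisp matrix with δ, Chang's fuzzy synthetic extents, degrees of possibility, and normalization) can likewise be sketched. The fragment below is a schematic reading of Chang (1996) as summarized earlier, not the authors' implementation; in particular, reciprocal entries are handled by simple clipping to [1/9, 9], whereas a fuller treatment fuzzifies them as reciprocals of the upper and lower bounds.

```python
import numpy as np

def chang_weights(A, delta=3.0):
    """Chang's (1996) extent analysis on a fuzzified pairwise matrix A."""
    n = A.shape[0]
    # Fuzzify each entry into a TMF (m - delta, m, m + delta), clipped to
    # the admissible interval [1/9, 9]; the diagonal stays crisp at 1.
    lo = np.clip(A - delta, 1/9, 9)
    md = np.clip(A, 1/9, 9)
    hi = np.clip(A + delta, 1/9, 9)
    np.fill_diagonal(lo, 1.0)
    np.fill_diagonal(hi, 1.0)

    # Fuzzy synthetic extent S_i = (row sum) * (total sum)^(-1); inverting
    # a fuzzy number swaps its lower and upper bounds.
    rl, rm, ru = lo.sum(1), md.sum(1), hi.sum(1)
    S = np.column_stack((rl / hi.sum(), rm / md.sum(), ru / lo.sum()))

    def V(i, j):
        """Degree of possibility V(S_i >= S_j) for triangular fuzzy numbers."""
        li, mi, ui = S[i]
        lj, mj, uj = S[j]
        if mi >= mj:
            return 1.0
        if lj >= ui:
            return 0.0
        return (lj - ui) / ((mi - ui) - (mj - lj))

    # Minimum degree of possibility against all other criteria, then
    # normalization. Extent analysis can return zero weights for strongly
    # dominated criteria; the paper mitigates this by using a high delta.
    d = np.array([min(V(i, j) for j in range(n) if j != i) for i in range(n)])
    return d / d.sum()
```

Called on the matrix from Table 2 with delta = 3.0, this returns the normalized weight vector shown in that table's last column (up to the simplifications noted above).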
The cutoff values according to the GM, for example, are 1.39, 2.36 and 3.73, separating C.4, C.3, C.2 and C.1, respectively. To obtain a single aggregated classification, it is essential to reject all hypotheses of significant difference between the estimates of the five experts: general manager (GM), electrical engineer (EE), mechanical engineer (ME), civil engineer (CE), and consultant (CO). For this purpose, Spearman correlation coefficients were calculated by comparing the results between pairs of specialists. As can be seen in Table 6, the estimates of the sub-criterion positions show no significant differences among the experts, with p values close to zero in all cases. These statistics confirm the absence of significant differences among the experts, allowing us to aggregate the values and summarize them in a single estimate for the entire group. Table 7 shows the importance classes estimated by the five specialists and the aggregate estimate. Most of the sub-criteria deemed essential are related to the inspection procedures for critical parts of the dam and the reliability of the corresponding instruments. Internal erosion with increased pore pressure and saturation of the dam and foundation causes loss of resistance, which explains the relevance of the regular inspections and of the operating procedures for the affluent and effluent water flows (OPE.RES). Therefore, when the experts expressed their preferences regarding the methods and procedures to mitigate the dam's vulnerability, they emphasized the actions of inspecting parts of the dam and of reacting to extreme and unexpected events. The emphasis given to procedures related to instrumentation and monitoring can be explained by their preventive character. The regulatory agency responsible for hydroelectric power plants in Brazil carries out periodic inspections and requires the submission of detailed reports on inspection by a set of instruments, such as the surface landmarks used to determine horizontal and vertical displacements through periodic topographic surveys; inclinometers, used to measure horizontal movements to control the stability of slopes; piezometers, used to determine pore water pressures in earth or rock masses; and gages showing the variability of water levels and flow. Thus, the plant's staff is concerned with the geotechnical monitoring of the plant's ground stability, and therefore with reducing the vulnerability of the dam, and with compliance with the rules and regulations determined by the regulatory agency. In this scenario, the main dam hazard indicator is measured by inspection. But ensuring monitoring is an ongoing process, especially considering the application of the Dam Safety Law (Brasil 2010) and the concern about the absence of documents containing geotechnical records. The documentation and procedures related to inspection, operation and preparedness reduce the vulnerability of earth dams.

Consistency of responses

Inconsistency occurs when ordinal scales are allocated to the subjective preferences of the experts; thus, it is necessary to test whether it remains within acceptable limits. The consistency ratio (CR) compares the consistency index (CI) of the matrix of pairwise comparisons against the consistency index of a random-like matrix (RI). The consistency index for the criterion weights is calculated as shown in Eq. (1):

C.I. = (λmax − n) / (n − 1)   (1)

where λmax is the principal eigenvalue of the pairwise comparison matrix and n is the number of criteria. The consistency ratio in Eq. (2) is:

C.R. = C.I. / R.I.   (2)

where the denominator R.I. is the random index table value according to the number of criteria (Saaty and Vargas 2012).
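Eqs. (1) and (2) translate directly into a few lines of numpy. The RI table below is the standard Saaty-Vargas random index, consistent with the value 1.45 quoted in the text for nine criteria; the function itself is an illustrative sketch, not the authors' code.

```python
import numpy as np

# Random index values for n = 1..10 (Saaty and Vargas 2012); RI = 1.45 for n = 9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def consistency_ratio(A):
    """Consistency ratio of a crisp pairwise comparison matrix, Eqs. (1)-(2)."""
    n = A.shape[0]
    lam_max = np.max(np.linalg.eigvals(A).real)   # principal eigenvalue
    ci = (lam_max - n) / (n - 1)                  # Eq. (1)
    return ci / RI[n]                             # Eq. (2); accept if <= 0.1
```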
The value for a criterion to be judged as consistent must be CR ≤ 0.1 (Saaty 1990). For nine criteria, the RI value is 1.45. We tested the consistency of each expert's pairwise comparisons; Table 8 shows that all the participants had CR values lower than 0.1.

Conclusion

Although we interviewed each member of the plant's staff separately, we were unable to control whether there was any communication between them outside the interview room, so we cannot guarantee that there was no mutual suggestion regarding the ordering of preferences, possibly introducing bias. Another limitation stems from the tendency of managers to avoid any choice or comment that might suggest a structural problem or operational difficulty with the plant. A process should be followed that allows for minimal bias; therefore, considering the high degree of education of the respondents, the Simos technique was conducted along a straightforward line of reasoning. To summarize, the Simos and FAHP techniques were used in the proposed method to allow a group of experts to assign weights to the criteria and sub-criteria representing the relative importance of a set of operational procedures and safety practices, with the objective of ensuring the availability of the most relevant and updated monitoring and sensing processes to mitigate the dam's vulnerability, often caused by inadequate assessment of its structural conditions. While the Brazilian legislation on dam safety and the technical guidelines are available to assist in specifying the MCDA model, the Simos method for estimating preferences proved to be easier, faster, and more flexible than the traditional paired comparison method. Moreover, since it is a ranking procedure, the conversion to the Saaty matrix made it possible to satisfy the transitivity requirement. Regarding the multi-criteria method, the fuzzification of the scores attenuated the inaccuracy of the responses, characteristic of the use of subjective numerical scales. The proposed method exploits the experience and opinion of the experts, since they are required to focus on the problem, explain and validate their estimations, and, during the same process, weight the degree of importance of each relevant safety procedure. Because they were required to compare the criteria and sub-criteria, they are more likely to consider the priorities of the safety procedures as proposed by the model and thus determine requirements, such as training, accessibility, and specific timelines, for updating and detailing. This case study identified that procedures for inspections and instrumentation readings; recording and dissemination of results; identification of the recipients of information; emergency alerts for extreme flood events; procedures for corrective and emergency actions; preparedness for each situation; and operational and maintenance training courses should be the priorities of dam safety management. The absence of any of these elements makes it hard to monitor the activity itself, make decisions, and carry out preventive or corrective actions to reduce earth dam vulnerability.
2021-05-21T16:56:29.219Z
2021-04-21T00:00:00.000
{ "year": 2021, "sha1": "d0b99cf13592918603a1d055704061437dd7f998", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-360591/latest.pdf", "oa_status": "GREEN", "pdf_src": "Springer", "pdf_hash": "46b5842ca1506c7d5e83e29176085a912f7229b9", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
257665348
pes2o/s2orc
v3-fos-license
Differential regulation of cardiac sodium channels by intracellular fibroblast growth factors

Intracellular fibroblast growth factors (iFGF) regulate voltage-gated sodium (NaV) channel expression and gating. Using a mouse model and heterologous expression in Xenopus oocytes, we describe mechanisms of how iFGF alters NaV channel activation and inactivation.

Please submit your revised manuscript via the link below along with a point-by-point letter that details your responses to the editors' and reviewers' comments, as well as a copy of the text with alterations highlighted (boldfaced or underlined). If the article is eventually accepted, it would include a 'revised date' as well as submitted and accepted dates. If we do not receive the revised manuscript within one year, we will regard the article as having been withdrawn. We would be willing to receive a revision of the manuscript at a later time, but the manuscript will then be treated as a new submission, with a new manuscript number.

Please pay particular attention to recent changes to our instructions to authors in the sections Data presentation, Blinding and randomization, and Statistical analysis, under Materials and Methods, as shown here: https://rupress.org/jgp/pages/submissionguidelines#prepare. Re-review will be contingent on inclusion of the required information (including for data added during revision) and demonstration of the experimental reproducibility of the results (i.e., all experimental data verified in at least 2 independent experiments).

Please note, JGP now requires authors to submit Source Data used to generate figures containing gels and Western blots with all revised manuscripts (when applicable). This Source Data consists of fully uncropped and unprocessed images for each gel/blot displayed in the main and supplemental figures. If your paper includes cropped gel and/or blot images, please be sure to provide one Source Data file for each figure that contains gels and/or blots along with your revised manuscript files. File names for Source Data figures should be alphanumeric without any spaces or special characters (i.e., SourceDataF#, where F# refers to the associated main figure number, or SourceDataFS# for those associated with Supplementary figures). The lanes of the gels/blots should be labeled as they are in the associated figure, the place where cropping was applied should be marked (with a box), and molecular weight/size standards should be labeled wherever possible. Source Data files will be made available to reviewers during evaluation of revised manuscripts and, if your paper is eventually published in JGP, the files will be directly linked to specific figures in the published article.

Source Data Figures should be provided as individual PDF files (one file per figure). Authors should endeavor to retain a minimum resolution of 300 dpi or pixels per inch. Please review our instructions for export from Photoshop, Illustrator, and PowerPoint here: https://rupress.org/jgp/pages/submission-guidelines#revised

When revising your manuscript, please be sure it is a double-spaced MS Word file and that it includes editable tables, if appropriate.

Please submit your revised manuscript via this link: Link Not Available

Thank you for the opportunity to consider your manuscript.

Sincerely,

Olaf S. Andersen, M.D.
On behalf of Journal of General Physiology

Journal of General Physiology's mission is to publish mechanistic and quantitative molecular and cellular physiology of the highest quality; to provide a best-in-class author experience; and to nurture future generations of independent researchers.

_______________________________________________________________________________________

Reviewer #1 (Comments to the Authors):

The authors have addressed most of my comments on the original submission. I remain enthusiastic about the overall study. The explanation for the limited investigation comparing the N-terminal domains of the rodent and human FHFs is noted, and the conclusion that the N-terminal domains are not sufficient to confer the different effects of the two FHFs is a nice result, although still somewhat incomplete, as the authors cannot provide definitive mechanistic insight for their result.

There are, however, three issues that remain for the authors to critically address. Further, as I have been provided access to the related Lesage et al. submitted manuscript, to which this manuscript refers and in which the mouse model is shared, I raise an additional issue about concerns for consistency between the manuscripts.

Related to this manuscript alone:

First issue: The description of the new Fgf13 knockout animal is incorrect. Specifically, in the methods (and throughout the results), the authors appear to have not correctly described the generation of these mice. Since Fgf13 is on the X chromosome, the specific description of how the Fgf13 knockout mice were generated must be incorrect:

1. Page 5/line 7: "Adult (8-16 weeks old) male and female wild-type (WT), Fgf12-/- and cFgf13-/-...". What were the sexes of the Fgf13 mice studied? The nomenclature "cFgf13-/-" can only refer to female knockout mice.

2. Page 6/line 6 and following: "Heterozygous (Fgf13fl/+) offspring were then crossed..." does not correctly describe a possible mating schema for generating a KO of a gene on the X chromosome.

4. The authors should report whether they are studying hemizygous male or homozygous female knockout mice, and provide an accurate breeding schema for their generation.

5. All relevant Fgf13 figures are labeled Fgf13-/-: were only females studied? If not, then this needs to be clarified/corrected.

Second issue: When comparing currents between Fgf13 knockout animals and "WT" controls, the most relevant controls are not WT animals but Myh6-Cre+ and/or floxed Fgf13 mice of the same sex. True wild type C57BL/6J are not the appropriate controls. Since the authors do not appear to have selected the appropriate controls, they should qualify their findings and acknowledge the possibility that they were not able to control for effects of the Cre transgene, which has been reported to have effects on its own (PMID: 9202069), and/or the presence of the loxP sites. As support for this: in the newly added Supplemental Figure S3A, the presented I-V curves suggest that there may be a difference in current amplitude between WT and the floxed control (the issue of whether these are female vs. male persists here) when I compare the gray and the black curves. Strangely, the authors do not refer to these data from the floxed animal, and only reference the comparison between WT and knockout, concluding that there is no difference in current amplitude in the absence of FGF13. This may not be the correct conclusion.

Third issue, for Supplemental Figure 3A: statistics, summary data, etc. The summary statistics, Ns, etc.
for the newly added Supplemental Figure 3A are not provided. Related: the curves for S3A and S3B for "Fgf13-/-" appear superimposable. Are these from the same data set? If so, that should be noted.

Minor: the authors demonstrate effective knockout of Fgf13 by showing loss of protein in the knockout animals. Similarly, I recommend that, when comparing effects of Fgf12 expression in the Fgf13-/- background, the authors confirm successful and sufficient expression of the FGF12 protein by western blot for FGF12 in those animals.

We thank the Reviewers and the Editor for the time spent reviewing our revised submission and for identifying issues that required attention. We have now revised the manuscript to address the concerns noted about the previous submission. We also note that all changes made in the text and figure legends in the revised (tracked) manuscript are highlighted in yellow. In addition, we have responded to the previous Reviewer/Editor comments in the paragraphs that follow. Please note that we have reproduced each comment first, followed by our response and summary of changes made in the manuscript (in italics) to address each point.

Reviewer #1 (Comments to the Authors):

The authors have addressed most of my comments on the original submission. I remain enthusiastic about the overall study. The explanation for the limited investigation, comparing the N-terminal domains of the rodent and human FHFs, is noted, and the conclusion that the N-terminal domains are not sufficient to confer the different effects of the two FHFs is a nice result, although still somewhat incomplete, as the authors cannot provide definitive mechanistic insight for their result.

There are, however, three issues that remain for the authors to critically address. Further, as I have been provided access to the related Lesage et al. submitted manuscript, to which this manuscript refers and in which the mouse model is shared, I raise an additional issue about concerns for consistency between the manuscripts.

Response: Thank you for the time spent reviewing our revised manuscript and for identifying the additional issues requiring attention.

Related to this manuscript alone:

First issue: The description of the new Fgf13 knockout animal is incorrect. Specifically, in the methods (and throughout the results), the authors appear to have not correctly described the generation of these mice. Since Fgf13 is on the X chromosome, the specific description of how the Fgf13 knockout mice were generated must be incorrect:

Response: This Reviewer is correct. We apologize for the errors in the description of the way the mice were generated and for any confusion these errors may have caused. In this revised manuscript, we have now corrected the description of how the cardiac-specific Fgf13 knockout mice were generated (in the text and in the legend to Supplemental Figure 1), and we note that we have corrected the mouse descriptor (to cFgf13KO) throughout (the text, figures, and figure legends).
1. Page 5/line 7: "Adult (8-16 weeks old) male and female wild-type (WT), Fgf12-/- and cFgf13-/-...". What were the sexes of the Fgf13 mice studied? The nomenclature "cFgf13-/-" can only refer to female knockout mice.

Response: The Reviewer is correct that the nomenclature "cFgf13-/-" can only be used to refer to female knockout mice. We again apologize for the error in the description of the mice generated and used in the studies presented. As noted above, we have corrected the description of how the cardiac-specific Fgf13 knockout mice were generated in the revised manuscript. We have also noted that cardiac-specific homozygous female cFgf13-/- mice and hemizygous male cFgf13-/y mice are now collectively referred to as cardiac-specific Fgf13 knockouts (cFgf13KO). The experiments reported/presented in this manuscript were (as originally stated) conducted on male and female cardiac-specific knockout mice, cFgf13KO, and we have corrected the descriptions of the mice used throughout the manuscript. Page 5: "... male and female wild-type (WT), Fgf12KO, Fgf13 floxed, and cFgf13KO C57BL/6J mice were used in the experiments here."

2. Page 6/line 6 and following: "Heterozygous (Fgf13fl/+) offspring were then crossed..." does not correctly describe a possible mating schema for generating a KO of a gene on the X chromosome.

Response: This Reviewer is correct and, as noted above, we have edited the description of the procedure used to generate the cardiac-specific Fgf13 knockout (cFgf13KO) in the revised text and in the legend to Supplemental Figure 1. Page 6, line 6: "Heterozygous Fgf13 floxed (Fgf13 fl/+) female and hemizygous Fgf13 floxed (Fgf13 fl/y) male (F1) offspring were mated to produce (F2) females homozygous for the floxed Fgf13 locus, Fgf13 fl/fl (Supplemental Fig S1A). The Fgf13 fl/fl and Fgf13 fl/y animals were then crossed with transgenic animals (Tg(Myh6-cre)2182Mds) expressing Cre-recombinase driven by the (cardiac-specific) α-MHC promoter (Supplemental Fig S1A). Crossing Cre-recombinase-expressing hemizygous male (Fgf13 fl/y) offspring with Fgf13 fl/fl females (or Cre-recombinase-expressing heterozygous Fgf13 fl/+ females with Fgf13 fl/y males) provided cardiac-specific Fgf13 targeted deletion hemizygous male (cFgf13 -/y) and homozygous female (cFgf13 -/-) animals, referred to here collectively as cardiac-specific Fgf13 knockouts, cFgf13KO. Offspring (from this and subsequent crosses) were screened by PCR using the primers given in Supplemental Table 1A, and representative results are illustrated in Supplemental Fig S1B."

3. Page 7/line 14: "Adult (8-12 week) male and female WT and cFgf13-/- C57BL/6J mice were anesthetized...". It is not clear if these were KO females or hemizygous males. Page 8/line 15 and other places have similar unclear wording.

Response: We again thank this Reviewer for pointing out the errors in the description of the mice generated and used in the studies described. We apologize for the confusion and note that we have corrected the errors throughout. Page 7, line 14: "Adult (8-12 week) male and female WT and cFgf13KO C57BL/6J mice...."

4. The authors should report whether they are studying hemizygous male or homozygous female knockout mice, and provide an accurate breeding schema for their generation.
Response: We again thank this Reviewer for pointing out the errors in the description of the mice generated and used in the studies reported. We have corrected the errors throughout. Page 6, line 10: "Crossing Cre-recombinase-expressing hemizygous male (Fgf13 fl/y) offspring with Fgf13 fl/fl females (or Cre-recombinase-expressing heterozygous Fgf13 fl/+ females with Fgf13 fl/y males) provided cardiac-specific Fgf13 targeted deletion hemizygous male (cFgf13 -/y) and homozygous female (cFgf13 -/-) animals, referred to here collectively as cardiac-specific Fgf13 knockouts, cFgf13KO."

"Quantitative analyses of peak I Na densities revealed no significant differences in Fgf13 floxed, compared with WT, or in cFgf13KO, compared with Fgf13 floxed or WT (at all voltages) (Supplemental Fig S3A)."

Third issue, for Supplemental 3A: Statistics, summary data, etc. The summary statistics, Ns, etc. for the newly added Supplemental Figure 3A are not provided. Related: the curves for S3A and S3B for "Fgf13-/-" appear superimposable. Are these from the same data set? If so, that should be noted.

Response: We apologize for the oversight in not providing quantitative comparisons of the amplitudes of the Nav currents recorded from WT, Fgf13 floxed, and cFgf13KO LV myocytes, as we had done for the time- and voltage-dependent properties of the Nav currents (in Table 1). We have corrected this oversight in this revised manuscript. The mean amplitudes of the Nav currents in WT, Fgf13 floxed, and cFgf13KO LV myocytes are compared directly in Supplemental Figure S3A and in the legend to Supplemental Figure S3 in this revised manuscript.

Minor: the authors demonstrate effective knockout of Fgf13 by showing loss of protein in the knockout animals. Similarly, I recommend that, when comparing effects of Fgf12 expression in the Fgf13-/- background, the authors confirm successful and sufficient expression of the FGF12 protein by western blot for FGF12 in those animals.

Response: This is another important point and one that we have addressed directly in this revised manuscript. During the course of the experiments detailed in this manuscript, we were concerned about demonstrating the expression of iFGF12 in AAV9-transduced myocytes. To address this important point, we prepared protein lysates of the ventricles of cFgf13KO animals (2, 3, and 4 weeks) following retroorbital virus injections, and we then examined (by Western blot) iFGF12 (as well as eGFP) expression in these samples. We now present the results of these analyses, which clearly demonstrate robust expression of eGFP and iFGF12 four weeks following virus injections, in Supplemental Figure S4, panels A (eGFP) and B (iFGF12). Also, we note that we have validated the anti-iFGF12 antibody used in these experiments using Fgf12KO atrial samples, as illustrated in Supplemental Figure S4C. Page 8, line 5: "To confirm the expression of iFGF12 and determine the time course of eGFP/iFGF12 expression in the ventricles of virus-injected animals, protein lysates were prepared from cFgf13KO LV at 2, 3 and 4 weeks following the injections of the hFGF12B-expressing and eGFP-expressing AAV9 viruses. Following protein fractionation and transfer, membranes were probed with a polyclonal anti-eGFP antibody (Supplemental Fig S4A) or with a polyclonal anti-iFGF12 antibody (Supplemental Fig S4B). The anti-iFGF12 antibody was validated using protein lysates prepared from WT and Fgf12KO adult mouse left and right atria. A prominent band at ~20 kDa was detected with the anti-iFGF12 antibody in the WT, but not in the Fgf12KO, atrial protein samples (Supplemental Fig S4C), consistent with the elimination of iFGF12 proteins and validating the anti-iFGF12 antibody for Western blot analyses of iFGF12 protein expression."

Related to the Lesage et al. manuscript: both measure Na currents in the new cardiac-specific Fgf13 constitutive knockout model, yet there are inconsistencies in results between the two manuscripts. Here, as noted above, Fig S3A appears to show differences in current density when comparing Na currents from the floxed control and the knockout mice. In Lesage et al., Fig 5D appears to show no differences between the same two groups. Would the authors please explain these apparently conflicting results?
Computer Vision-Control-Based CNN-PID for Mobile Robot

With the development of artificial intelligence technology, various sectors of industry have developed. Among them, the autonomous vehicle industry has advanced considerably, and research on self-driving control systems using artificial intelligence has been extensively conducted. Studies on the use of image-based deep learning to monitor autonomous driving systems have recently been performed. In this paper, we propose an advanced control scheme for a serving robot. A serving robot acts as an autonomous line-follower vehicle that can detect and follow a line drawn on the floor and move in specified directions. The robot should be able to follow the trajectory with speed control. Two controllers were used simultaneously to achieve this: a convolutional neural network (CNN) is used for target tracking and trajectory prediction, and a proportional-integral-derivative (PID) controller is designed for automatic steering and speed control. This study makes use of a Raspberry Pi, which is responsible for controlling the robot car and performing inference using the CNN, based on its current image input.

Introduction

From the automobile to the pharmaceutical industry, traditional robotic manipulators have been a popular manufacturing tool for at least two decades. Additionally, scientists have created many forms of robots other than traditional manipulators to expand the use of robotics. The new types of robots have more freedom of movement, and they can be classified into two groups: redundant manipulators and mobile (ground, marine, and aerial) robots. Engineers have had to invent techniques to allow robots to deal automatically with a range of constraints. Robots that are equipped with these methods are called autonomous robots [1]. Mobile robots can move from one location to another to perform desired tasks that can be complex [2]. A mobile robot is a machine controlled by software and integrated sensors, including infrared, ultrasonic, webcam, GPS, and magnetic sensors. Wheels and DC motors are used to drive robots through space [3]. Mobile robots are often used for agricultural, industrial, military, firefighting, and search and rescue applications (Fig. 1), allowing humans to accomplish complicated tasks [4].

Figure 1: Autonomous robots

Line-follower robots can be used in many industrial logistics applications, such as transporting heavy and dangerous materials, in the agriculture sector, and in library inventory management systems (Fig. 2). These robots are also capable of monitoring patients in hospitals and warning doctors of dangerous symptoms [5]. A substantial number of researchers have focused on smart-vehicle navigation because traditional tracking techniques are limited by the environmental instability under which vehicles move. Therefore, intelligent control mechanisms, such as neural networks, are needed, as they provide an effective solution to the problem of vehicle navigation due to their ability to learn the non-linear relationship between input and sensor variables. A combination of computer vision techniques and machine learning algorithms is necessary for autonomous robots to develop "true consciousness" [6]. Several attempts have been made to improve low-cost autonomous cars using different neural-network configurations, including the use of a convolutional neural network (CNN) for self-driving vehicles [7].
A collision prediction system was constructed using a CNN, and a method was proposed for stopping a robot in the vicinity of the target point while avoiding a moving obstacle [8]. A CNN has also been proposed for keeping an autonomous driving control system in the same lane [9], and a multilayer perceptron network has been used for mobile-robot motion planning [10]; it was implemented on a PC with an Intel Pentium 350 MHz processor. Additionally, the problem of navigating a mobile robot has been solved by using a local neural-network model [11]. A variety of motion control methods have been proposed for autonomous robots: proportional-integral-derivative (PID) control, fuzzy control, neural network control, and combinations of these control algorithms [12]. The PID control algorithm is used by most motion control applications, and PID control methods have been extended with deep learning techniques to achieve better performance and higher adaptability. Highly dynamic, high-end robotics with reasonably high accuracy of movement almost always requires these control algorithms for operation. For example, a fuzzy PID controller for the electric drives of a differential-drive autonomous mobile robot trajectory application has been developed [13]. Additionally, a PID controller for a laser sensor-based mobile robot has been designed to detect and avoid obstacles [14]. In this paper, the low-cost implementation of a PID controller combined with a CNN is realized on the Raspberry Pi platform for the smooth control of an autonomous line-follower robot.

Autonomous Line-Follower Robot Architecture

In this section, the architecture and system block are described. First, a suitable configuration was selected to develop a line-follower robot using a Pi camera connected through a Raspberry Pi 3 B+ to the motor driver IC. This configuration is illustrated in the block diagram shown in Fig. 3. Implementing the system ensured the following:

• moving the robot in the desired direction;
• collecting data from the Pi camera and feeding it into a CNN;
• predicting the error between the path line and the robot position;
• determining the speed of the left and right motors using a PID controller and the predicted error;
• operating the line-follower robot by controlling the four DC motors; and
• avoiding obstacles using an ultrasonic sensor.

Mobile-Robot Construction

The proposed mobile-robot design can easily be adapted to new and future research studies. The physical appearance of the robot was evaluated, and its design was based on several criteria, including functionality, material availability, and mobility. During the analysis of different guided robots of reduced size and simple structure (as shown in Fig. 4), the work experience of the authors with mechanical structures for robots was also considered. Seven types of parts were used in the construction of the robot: (1) four wheels, (2) four DC motors, (3) two base structures, (4) a control board formed using the Raspberry Pi 3 B+ board, (5) an LN298 IC circuit for DC-motor driving, (6) an expansion board, and (7) an HC-SR04 ultrasonic sensor.

Obstacle Detector

The HC-SR04 ultrasonic sensor uses sonar to determine the distance from it to an object (Fig. 6). It offers excellent non-contact range detection with high accuracy and stable readings in an easy-to-use package. It comes complete with ultrasonic transmitter and receiver modules [16].
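In software, reading this sensor reduces to timing the echo pulse. The following minimal Python sketch illustrates the idea on the Raspberry Pi; the TRIG/ECHO pin numbers are assumptions chosen for illustration, not taken from the robot's actual wiring:

    # Hedged sketch: HC-SR04 distance measurement with RPi.GPIO.
    # TRIG and ECHO pin numbers are hypothetical (BCM numbering).
    import time
    import RPi.GPIO as GPIO

    TRIG, ECHO = 23, 24

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def read_distance_cm():
        # A 10 us pulse on TRIG starts one ultrasonic measurement.
        GPIO.output(TRIG, True)
        time.sleep(10e-6)
        GPIO.output(TRIG, False)
        # ECHO stays high for the sound round-trip time (busy-wait here;
        # a production version would add timeouts).
        start = stop = time.time()
        while GPIO.input(ECHO) == 0:
            start = time.time()
        while GPIO.input(ECHO) == 1:
            stop = time.time()
        # Speed of sound is about 343 m/s; halve for the round trip.
        return (stop - start) * 34300.0 / 2.0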
Raspberry Pi 3 B+

The Raspberry Pi is an inexpensive, credit-card-sized single-board computer developed in the United Kingdom by the educational charity the Raspberry Pi Foundation (Fig. 7). The Raspberry Pi 3 Model B is the 3rd-generation Raspberry Pi minicomputer with a 64-bit 1.2 GHz quad-core processor, 1 GB RAM, and WiFi and Bluetooth 4.1 controllers. It also has 4× USB 2.0 ports, 10/100 Ethernet, 40 GPIO pins, a full-size HDMI 1.3a port, a camera interface (CSI), a combined 3.5 mm analog audio and composite video jack, a display interface (DSI), a MicroSD slot, and a VideoCore IV multimedia/3D graphics core at 400 MHz/300 MHz [17]. GPIO27 (physical pin 13) and GPIO22 (physical pin 15) are connected to IN1 and IN2 of the L298N module, respectively, to drive the left motors. GPIO20 (physical pin 38) and GPIO21 (physical pin 40) are connected to IN3 and IN4 of the L298N module, respectively, to drive the right motors.

L298N Motor Driver

The L298N motor driver (Fig. 8) consists of two complete H-bridge circuits; thus, it can drive a pair of DC motors. This feature makes it ideal for robotic applications, because most robots have either two or four powered wheels operating at a voltage between 5 and 35 V DC with a peak current of up to 2 A. This module incorporates two screw-terminal blocks for motors A and B, and another screw-terminal block for the ground pin, the VCC for the motor, and a 5-V pin, which can be either an input or an output. The pin assignments for the L298N dual H-bridge module are shown in Tab. 1. The IN1 and IN2 digital pins are driven HIGH or LOW to set the rotation direction of one motor channel, and the controller's PWM output signal is sent to ENA or ENB. Forward and reverse speed or position control of the motor is achieved using a PWM signal [18]. Then, using the analogWrite() function, the PWM signal is sent to the Enable pin of the L298N board, which drives the motor.
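Putting this wiring into software, a minimal sketch such as the one below could drive both motor channels from Python with RPi.GPIO. The IN1-IN4 pins follow the GPIO assignments above, but the ENA/ENB pins (12 and 13) and the 1 kHz PWM frequency are assumptions for illustration, as the paper does not list them:

    # Hedged sketch: driving the L298N from the Raspberry Pi with RPi.GPIO.
    # ENA/ENB pin numbers and the PWM frequency are hypothetical values.
    import RPi.GPIO as GPIO

    IN1, IN2, IN3, IN4 = 27, 22, 20, 21   # direction pins (BCM numbering)
    ENA, ENB = 12, 13                     # assumed enable/PWM pins

    GPIO.setmode(GPIO.BCM)
    for pin in (IN1, IN2, IN3, IN4, ENA, ENB):
        GPIO.setup(pin, GPIO.OUT)

    pwm_left = GPIO.PWM(ENA, 1000)        # 1 kHz PWM carrier
    pwm_right = GPIO.PWM(ENB, 1000)
    pwm_left.start(0)
    pwm_right.start(0)

    def set_speeds(left, right):
        # Both motors forward; left/right are PWM duty cycles in [0, 100].
        GPIO.output(IN1, GPIO.HIGH); GPIO.output(IN2, GPIO.LOW)
        GPIO.output(IN3, GPIO.HIGH); GPIO.output(IN4, GPIO.LOW)
        pwm_left.ChangeDutyCycle(max(0.0, min(100.0, left)))
        pwm_right.ChangeDutyCycle(max(0.0, min(100.0, right)))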
Convolutional-Neural-Network-Based Proportional-Integral-Derivative Controller Design for Robot Motion Control

A CNN-based PID controller was designed to control the robot. An error value of zero corresponds to the robot being precisely in the middle of the image frame. An error is assigned to each line position in the camera frame image, as explained in Tab. 1. A positive error value means that the robot has deviated to the left, and a negative error value means that the robot has deviated to the right. The error has a maximum value of ±4, which corresponds to the maximum deviation. Tab. 2 presents the assignment of errors to real images taken with the Raspberry Pi camera. The primary advantage of this approach is extracting the error from the image given by the Raspberry Pi camera, which can be fed to the controller to compute the motor speeds such that the error term becomes zero. The estimated error input is used by the PID controller to generate new output values (the control signal). The output value is used to determine the left speed and right speed for the two motors on the line-follower robot.

Proportional-Integral-Derivative Control

A PID controller aims to keep an input variable close to the desired set-point by adjusting an output. This process can be 'tuned' by adjusting three parameters: Kp, Ki, and Kd. The well-known PID controller equation in continuous form is

    u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt,

where Kp, Ki, and Kd refer to the proportional, integral, and derivative gain constants, respectively. For implementation in a discrete form, the controller equation is modified by using the backward Euler method for numerical integration. The PID controls both the left and right motor speeds according to the predicted error. The PID controller generates a control signal (PID value), which is used to determine the left and right speeds of the robot wheels. It is a differential drive system: a left turn is achieved if the speed of the left motor is reduced, and a right turn is achieved if the speed of the right motor is reduced. The right speed and left speed are used to control the duty cycle of the PWM applied at the input pins of the motor driver IC. The PID constants (Kp, Ki, Kd) were obtained using the Ziegler-Nichols tuning method [19]. To start, Ki and Kd were set to 0. Then, Kp was increased from 0 until it reached the ultimate gain Ku, at which point the robot oscillated continuously. Ku and Tu (the oscillation period) were then used to tune the PID (see Tab. 3). Substantial testing showed that a classical PID controller was unsuitable for the line-follower robot because of the changes in line curvature. Therefore, to solve this problem, only the last three error values were summed instead of adding all the previous values. The modified controller equation is given as

    u[n] = Kp e[n] + Ki (e[n] + e[n-1] + e[n-2]) + Kd (e[n] - e[n-1]).

This technique provides satisfactory performance for path tracking.
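As a concrete illustration, a minimal Python sketch of this modified discrete PID follows. The gain values and base speed are placeholders to be replaced by the Ziegler-Nichols results of Tab. 3, and the mapping of the PID value onto the two wheel speeds follows the differential-drive rule described above:

    # Hedged sketch of the modified discrete PID: the integral term sums
    # only the last three errors. Gains and base speed are placeholders.
    from collections import deque

    KP, KI, KD = 10.0, 1.0, 5.0      # hypothetical gains (tune per Tab. 3)
    BASE_SPEED = 60.0                # hypothetical base PWM duty cycle

    errors = deque([0.0, 0.0, 0.0], maxlen=3)  # last three error values

    def pid_step(error):
        # One controller update per CNN prediction; returns wheel speeds.
        derivative = error - errors[-1]
        errors.append(error)
        pid_value = KP * error + KI * sum(errors) + KD * derivative
        # Positive error (drift to the left) slows the right wheel, which
        # steers the robot back to the right, and vice versa.
        left = BASE_SPEED + pid_value
        right = BASE_SPEED - pid_value
        return left, right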
LeNet Convolutional Neural Networks

CNNs are a powerful artificial neural network technique. They are multilayer neural networks built especially for 2D data, such as video and images, and they have become ubiquitous in the area of computer vision. These networks maintain the spatial structure of the problem, make efficient use of patterns and structural information in an image, and were developed for object recognition tasks [20]. CNNs are so named due to the presence of convolutional layers in their construction. The detection of certain local features at every location of the input image is the primary work of the convolutional layers. A convolutional layer comprises a set of independent filters. Each filter is slid over the complete image, the dot product is taken between the filter and chunks of the input image, and each filter is independently convolved with the image to end up with feature maps. Deriving a feature map has several uses, one of which is reducing the size of the image while preserving its semantic information.

In this paper, we propose using the LeNet CNN (Fig. 10) to take advantage of its ability to classify small single-channel (black-and-white) images. LeNet was proposed in 1989 by LeCun et al. [21], who trained the CNN using backpropagation algorithms; it was later widely used in the recognition of handwritten characters. LeNet has the essential basic units of a CNN, such as a convolutional layer, a pooling layer, and a fully connected layer. The convolution kernels of the convolutional layers are all 5 × 5, and the activation function uses the sigmoid function. The input to the LeNet CNN is a 28 × 28 grayscale image, which passes through the first convolutional layer with four feature maps or filters with a size of 5 × 5 and a stride of one; the image dimensions change from 28 × 28 × 1 to 24 × 24 × 4. Then, LeNet applies an average pooling (sub-sampling) layer with a filter size of 2 × 2 and a stride of two, and the resulting image dimensions are reduced to 12 × 12 × 4. Next, there is a second convolutional layer producing 12 feature maps of size 8 × 8 with a stride of 1. In this layer, only 8 of the 12 feature maps are connected to the four feature maps of the previous layer. The fourth layer (S4) is again an average pooling layer with a filter size of 2 × 2 and a stride of two; this layer is the same as the second layer (S2) except that it has 12 feature maps, so the output is reduced to 4 × 4 × 12. The fifth layer (F5) is a fully connected softmax layer with 10 feature maps, each of size 1 × 1, with 10 possible values corresponding to the digits from 0 to 9.

Implementation on Raspberry Pi

A substantial amount of data is required to create a neural network model. These data are gathered during the training process. Initially, the robot has to be operated wirelessly using the VNC Viewer, which helps monitor the Raspberry Pi over Wi-Fi. As the robot runs on the path line, it collects image data through the Raspberry Pi camera. These data are used to train the LeNet neural network model. The script for the CNN algorithm was developed in Python [22]. This code was transported to the Raspberry Pi via a virtual development environment provided by the Python language, composed of the interpreter, libraries, and scripts. Each project implemented on either the computer or the Raspberry Pi depends on its own Python virtual development environment, without affecting the dependencies of other projects. The dependencies in the Raspberry Pi virtual environment for running the training and test applications include Python 3, OpenCV, NumPy, Keras, and TensorFlow. Images were captured at an interval of 0.5 s with a short capture script. Once the images were captured for many positions of the robot on the track, they were placed in different folders. We trained our LeNet system on the captured images using the code below; first, the necessary libraries are imported. The folder name is extracted from the image path of each image: a label of 0 is assigned if the folder name is "error = 0," and similarly, labels of −4, −3, −2, −1, 1, 2, 3, and 4 are assigned to images in the folders "error = −4" through "error = 4."

    import os
    import random
    import cv2
    from imutils import paths

    dataset = "/home/pi/Desktop/tutorials/raspberry/trainImages/"
    data = []
    labels = []
    imagePaths = sorted(list(paths.list_images(dataset)))
    random.seed(42)
    random.shuffle(imagePaths)
    for imagePath in imagePaths:
        folder = imagePath.split(os.path.sep)[-2]   # e.g. "error = 2"
        labels.append(folder)                       # folder name is the label
        data.append(cv2.imread(imagePath, cv2.IMREAD_GRAYSCALE))

Once the model is trained, it can be deployed on the Raspberry Pi. The robot is then controlled based on the prediction from the LeNet CNN and the PID controller. A PID controller continuously calculates an error and applies a corrective action to resolve it. In this case, the error is the position prediction delivered by the CNN, and the corrective action is changing the power to the motors. We defined a function, PID_control_robot, that controls the direction of the robot based on the prediction from the model.

Conclusions

We presented a vision-based, self-driving robot, in which an image input is taken through the camera and a CNN model is used to make a decision accordingly. We developed a combined PID and CNN deep-learning controller to guarantee smooth line tracking. The results indicated that the proposed method is well suited to mobile robots, as it is capable of operating with imprecise information. More advanced CNN-based controllers could be developed in the future.
Classroom Activities to Engage Students and Promote Critical Thinking about Genetic Regulation of Bacterial Quorum Sensing†

We developed an interactive activity to mimic bacterial quorum sensing, and a classroom worksheet to promote critical thinking about genetic regulation of the lux operon. The interactive quorum sensing activity engages students and provides a direct visualization of how population density functions to influence light production in bacteria. The worksheet activity consists of practice problems that require students to apply basic knowledge of the lux operon in order to make predictions about genetic complementation experiments, and students must evaluate how genetic mutations in the lux operon affect gene expression and overall phenotype. The worksheet promotes critical thinking and problem-solving skills, and emphasizes the roles of diffusible signaling molecules, regulatory proteins, and structural proteins in quorum sensing.

INTRODUCTION

Regulation of bacterial processes by population density (quorum sensing) is often a difficult concept for students to grasp, and understanding how cells communicate is a core concept in introductory microbiology (2). Quorum sensing is typically a new concept for introductory microbiology students, involving complex genetic feedback mechanisms (4). Light production in bacteria, specifically the lux operon of Vibrio fischeri, is a good model system for teaching students about quorum sensing. V. fischeri is a marine bacterium that can form a symbiotic relationship with a host squid, Euprymna scolopes. Light production by V. fischeri only occurs at high cell densities, when this bacterium grows in the nutrient-rich light organ of the squid (6). Active learning has been shown to improve student learning and engagement with material (for example, see 1, 5), so we developed an interactive classroom activity where students enact the dynamics of bacterial chemical communication at low and high cell densities. We also developed a worksheet where students collaborate and use what they know about V. fischeri lux operon genetics to predict outcomes of genetic complementation experiments between mutant strains. Approximately 40 college junior and senior students (ages 19 to 21) enrolled in a Principles of Microbiology course comprised our test audience for this activity. These activities were designed for an introductory microbiology course with a genetics course as a prerequisite, and are best conducted toward the end of the semester as a way to integrate concepts of bacterial genetics and communication. The interactive activity and worksheet were part of a one-class lecture (75 minutes) on bioluminescence and its applications, symbiotic relationships of bioluminescent bacteria, and the lux operon of Vibrio fischeri (lecture material available upon request).

PROCEDURE

Prior to implementing these classroom activities, we registered our project with the SUNY Geneseo Institutional Review Board. During the lecture, we introduced the lux operon (Fig. 1). The operon is composed of four main parts: luxI, luxR, luxAB, and luxCDE (reviewed in (3)). luxR encodes a transcription factor that, when bound to an autoinducer molecule, upregulates lux operon gene expression. luxI encodes a synthase for the autoinducer molecule (acyl-homoserine lactone; AHL), and the rest of the operon encodes the components necessary to make light.
After the lecture, a short quiz was given to assess student understanding of these concepts (sample questions in Appendix 1). Students were then moved into a large space, and we ran the following activity twice, first with a small group (five students) and next with the entire class (40 students). Each student was given a small packet of labeling stickers and one index card and instructed to move randomly within the space. Stickers represented autoinducer (AHL) molecules, and students exchanged an "autoinducer" (sticker) whenever they passed each other, placing it onto their index card. Exchanging stickers is not a perfect analogy for movement of the autoinducers, and it was emphasized that bacteria do not physically exchange these molecules (they diffuse in and out of the cell and are picked up from the environment). Initially, students hand out only one sticker at a time. A student who has collected two stickers on an index card then hands out two stickers. As students accumulate stickers, they hand out the same number of stickers as are on their index card (i.e., if they have three stickers, they now hand out three stickers to each student they pass). This is analogous to upregulation of the lux operon. When students collected five stickers on their index card, they were instructed to make a beeping sound, representing emission of light. We chose to represent light production with an auditory cue, rather than a visual one, as this allowed students to assess the level of "bioluminescence" without interrupting student interactions. The activity is conducted in exactly the same manner for small and large groups, and the small-group activity clearly showed that at low population density, quorum sensing was not sufficient to bring about much, if any, beeping (bioluminescence). When repeated with the larger group (high population density), almost all students were beeping within approximately one to two minutes.

We then shifted to gaining a more in-depth understanding of the lux operon. Students were given a worksheet that presented four different strains of V. fischeri, including one wild type and three mutants (Appendix 2). Students were given four practice problems that asked them to predict bioluminescence phenotypes when various strains were streaked opposite each other on the same plate. Before students got started, we showed a video of a similar complementation experiment from the Howard Hughes Medical Institute BioInteractive website (www.hhmi.org/biointeractive/bacterial-quorum-sensing), and we explained how two strains streaked next to each other on a plate can share diffusible molecules, such as AHLs. Students were given approximately 10 to 15 minutes to complete the worksheet in groups, and we then discussed it as a class.

CONCLUSION

Overall, both activities were successful at increasing student engagement with the topic and their understanding of the material. For the interactive demonstration, the difference between small and large groups was striking. A potential issue for this activity is making sure that students fully understand the analogy and are able to extrapolate back to what actually occurs in a bacterial population. To add a quantitative aspect, students could record the amount of time it takes for everyone to be beeping in smaller groups compared with progressively larger groups, or record the level of beeping using a decibel meter (many smartphones are capable of measuring decibels).
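For instructors who want to preview the expected timing data, the exchange rules above are simple enough to simulate. In the sketch below, the per-pair meeting probability is an assumption standing in for students walking around a fixed space, and the sketch itself is an illustration rather than part of the published activity:

    # Rough simulation of the sticker-exchange rules; p_meet is an assumed
    # per-pair encounter probability per time step.
    import random

    def steps_until_all_beep(n_students, p_meet=0.02, beep_at=5, max_steps=100000):
        cards = [0] * n_students                # index cards start empty
        for step in range(1, max_steps + 1):
            for i in range(n_students):
                for j in range(i + 1, n_students):
                    if random.random() < p_meet:
                        # Each student hands out as many stickers as are on
                        # their card (at least one), mimicking upregulation.
                        gi, gj = max(1, cards[i]), max(1, cards[j])
                        cards[i] += gj
                        cards[j] += gi
            if all(c >= beep_at for c in cards):
                return step                     # everyone is "bioluminescent"
        return None

    for n in (5, 20, 40):
        print(n, steps_until_all_beep(n))

Because the number of encounters per student grows with group size, the simulated time to full "bioluminescence" drops sharply with increasing density, mirroring the classroom result.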
In larger classes, the interactive activity could be done as a demonstration with a subset of the students, or (if facilitators are available) students could be broken up into groups of 20 to 40 to complete this activity. The activity is straightforward to run, and instructors could even have students serve as facilitators for each group after a brief overview of the activity rules. The genetic complementation worksheet would work well in small or large classes. If time is limited, the interactive activity and the worksheet can be completed separately. Instructors can choose to complete either one of the activities alone based on the level of detail desired (the interactive activity emphasizes the basics of quorum sensing, while the worksheet emphasizes quorum sensing genetics), or the genetic complementation worksheet can be assigned as homework instead of being completed during class.

SUPPLEMENTAL MATERIALS

Appendix 1: Sample quiz or exam questions for instructors
Appendix 2: Genetic complementation worksheet

ACKNOWLEDGMENTS

The development of these two activities was supported by the SUNY Geneseo Edgar Fellows College Honors program. The authors would like to thank the SUNY Geneseo Spring 2015 Principles of Microbiology class for their enthusiasm and participation. The authors declare that there are no conflicts of interest.

FIGURE 1. Diagram of the lux operon and its regulation. The direction of transcription is indicated by arrowheads, and the promoter region is indicated by the lighter green color. luxI encodes an acyl-homoserine lactone (AHL) synthase, which converts S-adenosyl methionine (SAM) into AHLs (blue circles). These AHLs can diffuse out of the cell, and AHLs from the surrounding environment can diffuse inward. AHLs bind to and activate LuxR, which upregulates lux operon transcription.
Structure/property relationship of semi-crystalline polymer during tensile deformation: A molecular dynamics approach

A coarse-grained molecular dynamics model of a linear polyethylene-like polymer chain system was built to investigate the response of the structure and mechanical properties during uniaxial deformation. The influences of chain length, temperature, and strain rate were studied. The molecular dynamics tests showed that yielding may be governed by different mechanisms at temperatures above and below Tg: melt-recrystallization was observed at higher temperature, and destruction of crystal structures was observed at lower temperatures beyond the yield point. Higher temperature and lower strain rate have similar effects on the mechanical properties, while the correlated influences of time and temperature on the microscopic structures are more complicated. The evolution of microscopic characteristics, such as the orientation parameter, the bond length, and the content of trans-trans conformations, was calculated from the simulation. The results showed that temperature has a twofold effect on the polymer chains: on one hand, higher temperature makes the chains more flexible; on the other hand, it shortens the relaxation time of the polymers. It is the interplay of these two aspects that determines the orientation parameter. During deformation, the trans conformation content first drops and then rises. These microscopic structure parameters exhibit critical transitions that are closely related to the yield point. A hypothetical model was thus proposed to describe the micro-structure/property relations based on the investigations of this study.

Introduction

Semi-crystalline polymer materials pose long-standing puzzles in their structure/property relations, mainly due to the hierarchical structures of polymer crystallites and the coexistence of amorphous and crystalline domains. Moreover, various thermal processes [1-4], in practices such as extrusion, injection, compression, or annealing, may introduce significant differences in the morphology evolution of this amorphous-crystalline binary system. The structure and property relations of semi-crystalline polymers are heavily affected by the characteristics of the polymer chains: the high molecular weight and long relaxation time of macromolecules bring complexity into the structure/property relations. Interest in the structure/property relation of semi-crystalline polymers has barely faded since their discovery in the 1960s, mostly because of their wide usage in industry and excellent cost-performance ratio. [5], [6] Experimental methods such as synchrotron X-ray scattering, infrared spectroscopy, differential scanning calorimetry, and scanning electron microscopy are widely used to investigate the structural evolution of polymer materials. The classical Peterlin model [7] proposed the orientation and fracture of spherulites and lamellae and the formation of microfibril structures during deformation of semi-crystalline polymers. Juska and Harrison hypothesized a melt-recrystallization procedure. [8] More and more recent studies have suggested that, besides the structural evolution of the crystalline domain, the amorphous part plays an important role during the deformation process.
[9][10][11][12] Experimental techniques such as infrared spectroscopy (IR), small-angle X-ray scattering (SAXS), wide-angle X-ray scattering (WAXS), atomic force microscopy (AFM), and differential scanning calorimetry have been employed, and new discoveries and theories have been put forward in recent decades. Feng Zuo, Benjamin S. Hsiao, et al. [13] investigated isotactic polypropylene (iPP) deformation with in situ SAXS and WAXD. It was observed that at room temperature the destruction of lamellar crystals is dominant, while at higher temperature (>60 °C) the formation of oriented folded-chain crystal lamellae dominates. This phenomenon is attributed to the chain entanglements and tie chains between crystal lamellae and to the relative strength of the amorphous part with respect to the crystalline domain at different temperatures. Yongfeng Men, Gert Strobl, et al. [14] investigated the interplay of the amorphous and crystal blocks in semi-crystalline polymers during deformation, and found that the state of the amorphous part and the stability of the crystal blocks act together to determine the critical strain (yield strain); accordingly, the tie chains are of lesser importance compared to the state of the amorphous domain as a whole. Experimental methods have played an irreplaceable role in scientific research on these issues, and such measurements can provide partial or statistical structural information about the material; however, this is insufficient for revealing the complex micro-/meso-structure property relations, and some important information is still missing. It is difficult to investigate the micro-/meso-structures closely with conventional experimental measurements, and studying their influences on macroscopic properties in real time has always been a challenge. Molecular dynamics (MD) simulation provides a new route to reveal the details of structure/property relations at small scales. With MD simulation, in situ studies can readily be conducted on chain configuration/conformation as well as mesoscale structures. The results of simulation and experimental investigations can be compared to help understand how the synthesis and processing of polymer materials determine their mechanical properties. Recent developments in hardware and algorithms make it possible to simulate large-scale polymer systems within reasonable CPU time, and significant progress has been made in exploring the microstructure/property relationships of polymer materials through MD simulation [15-17]. Takashi Yamamoto [18] studied polyethylene with a united-atom model in fiber formation and large deformation by MD simulations. The study compared the structural transformation along the fiber axis with that in the transverse direction: the deformation along the fiber axis was almost linear and elastic before yielding and caused large reorientation of the tilted chains in the crystals, and after yielding, cavitation occurred in the amorphous regions. Along the transverse direction, the molecular chains gave rise to a 90° reorientation toward the uniaxial deformation direction, and breaking and reformation of the crystalline texture also emerged simultaneously. Recently, In-Chul Yeh, Gregory C. Rutledge, et al. [19] used MD models to investigate the deformation of semi-crystalline polyethylene at different strain rates and temperatures. It was learned that cavitation emerged at low temperature or high strain rate, while at higher temperature or lower strain rate the melt/recrystallization phenomenon was observed.
The results show that the interaction of the crystalline and noncrystalline domains is a crucial factor in determining the mechanical properties during tensile deformation. The interface Monte Carlo (IMC) method was previously proposed and employed to prepare PE models with coexisting amorphous and crystalline domains. The purpose of this paper is to reveal the structure/property relations of semi-crystalline polymers with MD simulation of the uniaxial tensile deformation process under various strain rates and temperatures. A model with coexisting amorphous and crystalline domains was established through an isothermal crystallization process. This model is thermally stable and more realistic compared to other related approaches, such as the IMC method. The micro-/meso-structure evolution during deformation was investigated in detail, and a hypothetical mechanism is proposed to describe the influence of the determining structural characteristics on the mechanical properties.

Coarse graining

The simulation tests ran on an in-house workstation, SP2EHIEQ, in parallel mode with 14 CPUs and 28 threads. In this paper, a linear polyethylene-like molecule was chosen as the study object. Through coarse-graining, the carbon-hydrogen bonds are ignored. A coarse-grained bead represents one monomeric unit, which is connected to neighboring beads by harmonic springs. With this simplification, the angle in this system actually represents the torsional angle of the atomic backbone. This coarse-graining method was proposed by Meyer and Müller, [20], [21] and has been applied widely. [22], [23] The mass of one bead equals the total mass of the monomeric unit.

Force field and related parameters

The force field is composed of a bond stretching potential, an angular bending potential, and a non-bonded interaction potential. No charges or torsional potentials were considered in this model. Thus, the total potential of the system can be described as

    E_total = E_bond + E_angle + E_pair.

A harmonic form of the bond potential is adopted,

    E_bond = K_bond (b - b_0)^2,

where b_0 is the equilibrium bond length and K_bond is the force constant, which determines the bond stiffness under extension. The angle bending potential, which contains information on the torsional states of the atomistic backbone, was derived directly from the Boltzmann-inverted angle distribution of the atomistic trajectories. The angle potential is exhibited in Fig 1. Three minima at 95°, 126°, and 180° are displayed, corresponding to the gauche-gauche, trans-gauche, and trans-trans conformations of the atomic backbone chain, respectively. The pair potential acts between pairs of beads within a cutoff distance, and the set of active pair interactions typically changes over time. In this simulation, a Lennard-Jones 9-6 form was adopted for the non-bonded interaction potential,

    E_pair(r) = ε_0 [2 (σ_0/r)^9 - 3 (σ_0/r)^6],

where ε_0 is a parameter that determines the depth of the potential well at the equilibrium position and σ_0 represents the equilibrium distance of a pair of beads. The cutoff distance r_cut is set in proportion to the equilibrium distance σ_0 of the potential. The values of the miscellaneous modeling coefficients and deformation conditions are listed in Table 1. In this simulation, a nondimensionalized unit system is used; the nondimensionalization factors are listed in Table 2, and the other unitless parameter values are derived accordingly. In this nondimensionalized system, T = 1 corresponds to a temperature of about 550 K. Time steps of 0.005τ and 0.01τ were used during the crystallization and deformation processes, respectively.
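To make the energy terms above concrete, a minimal Python sketch of the bond and pair potentials is given below. The numerical parameter values are placeholders standing in for the reduced-unit values of Table 1, and the cutoff multiple is an assumption:

    # Minimal sketch of the coarse-grained energy terms, in reduced units.
    # K_BOND, B0, EPS0, SIG0, and R_CUT are placeholder values.
    import numpy as np

    K_BOND, B0 = 100.0, 1.0      # placeholder bond force constant and length
    EPS0, SIG0 = 1.0, 1.0        # LJ 9-6 well depth and equilibrium distance
    R_CUT = 2.5 * SIG0           # assumed cutoff multiple

    def e_bond(b):
        """Harmonic bond stretching energy."""
        return K_BOND * (b - B0) ** 2

    def e_pair(r):
        """Lennard-Jones 9-6 non-bonded energy, truncated at the cutoff."""
        r = np.asarray(r, dtype=float)
        e = EPS0 * (2.0 * (SIG0 / r) ** 9 - 3.0 * (SIG0 / r) ** 6)
        return np.where(r < R_CUT, e, 0.0)

    # Sanity check: the pair energy has its minimum -EPS0 at r = SIG0,
    # matching the definitions of the well depth and equilibrium distance.
    print(e_pair(SIG0), e_bond(B0))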
An external pressure P = 8 was applied, corresponding to the value of atmospheric pressure. For the deformation tests, different temperatures and strain rates were applied. Large differences in the structural transitions and mechanical properties were found and are discussed in the next section.

Results and Discussion

Crystallization

A molecular dynamics simulation of semicrystalline PE was performed in a rectangular box with a side-length ratio of x:y:z = 2:1:1. The ensemble consists of 200 coarse-grained chains with 500 repeating beads per chain, initially generated via a self-avoiding random walk algorithm. A periodic boundary condition is applied in three dimensions to eliminate boundary effects. During the simulation process, the temperature is controlled by a Nose-Hoover thermostat, and a Berendsen barostat is applied to control the pressure. The initial conformation of the system is a non-equilibrium thermodynamic state, and the conjugate gradient (CG) algorithm was performed to minimize the energy. The optimized conformation was then relaxed in an NPT ensemble at a temperature of 0.1 and a pressure of 16. Afterward, the temperature of the system was increased to 1.0, followed by a relaxation process in an NPT ensemble at atmospheric pressure. After sufficient relaxation, a well-equilibrated melt state with a disordered distribution of the molecular chains was generated. Subsequently, the isothermal crystallization process was performed after a sudden drop of the temperature from 1.0 to 0.7. In Fig 2, snapshots of the morphology in the melt state and after crystallization are exhibited. In the melt state, the polymer chains are randomly coiled and no ordered structure is observed, while after isothermal crystallization the system is composed of amorphous and crystalline phases, with the crystalline blocks distributed randomly.

To assess the structural transformation during crystallization, the order parameter, S(t), and the entanglement parameter are applied here. The order parameter is the most intuitive way to characterize the degree of order of the system; it is indispensable for characterizing the emergence of crystal nuclei and the transition of the conformations from the coiled state to the extended state. The total order parameter of the system is calculated as [24], [25]

    S(t) = < (3 cos^2 θ(t) - 1) / 2 >,

where θ(t) is the angle between the chord vector along the chain and the reference (director) axis of the system. The evolution of the order parameter during crystallization is represented in Fig 3: the order parameter exhibits a dramatic increase before the step count reaches 5×10^6, after which the rate of increase slows down.

The entanglement parameter is calculated following the atom steric methodology proposed by Yashiro et al. [26] The entanglement status of each atom is evaluated by measuring the relative positions of the kth adjacent atoms in both directions along the polymer chain. If the angle between these two vectors is smaller than 90°, the atom in the center is designated as an entangled position. The entanglement density during the crystallization process is shown in Fig 4. The entanglement density first increased with the simulation steps, which may be due to the emergence of crystal nuclei. Subsequently, the entanglement parameter decreased due to the rearrangement of the polymer chains, which transformed the chains from the coiled conformation to the extended conformation. Later, the entanglement parameter remains constant due to the completion of crystallization.
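A sketch of how these two metrics can be computed from bead coordinates follows. The chord definition (beads i to i+2) and the neighbor offset k are assumptions about details not spelled out in the text, and in practice the director would be taken as the average chain axis of the system:

    # Sketches of the order parameter and the steric entanglement criterion.
    import numpy as np

    def order_parameter(chains, director):
        """S = <(3 cos^2(theta) - 1)/2> over all chord vectors."""
        d = np.asarray(director, dtype=float)
        d /= np.linalg.norm(d)
        cos2 = []
        for xyz in chains:                  # xyz: (n_beads, 3) coordinates
            chords = xyz[2:] - xyz[:-2]     # chord vector from bead i to i+2
            chords /= np.linalg.norm(chords, axis=1, keepdims=True)
            cos2.extend((chords @ d) ** 2)
        return 0.5 * (3.0 * np.mean(cos2) - 1.0)

    def entangled_flags(xyz, k=10):
        """Steric criterion of Yashiro et al.: bead i is entangled when the
        vectors to beads i-k and i+k make an angle smaller than 90 degrees
        (equivalently, their dot product is positive)."""
        flags = np.zeros(len(xyz), dtype=bool)
        for i in range(k, len(xyz) - k):
            v_back = xyz[i - k] - xyz[i]
            v_fwd = xyz[i + k] - xyz[i]
            flags[i] = float(np.dot(v_back, v_fwd)) > 0.0
        return flags

The entanglement density of a chain is then simply the mean of these flags.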
The angle distribution is shown in Fig 5 to clarify the evolution of the conformations. Three peaks emerge in the figure, corresponding to the three conformations in the bending potential. However, the peaks are not located exactly at the potential minima; for example, the trans-trans peak does not sit exactly at 180°. That is because the molecular chains in the system bear not only the bending potential but also the other force-field terms of the simulation system, for example, the pairwise potential and the bond stretching potential. It is the combined effect of all these force-field terms that determines the peak positions. There is an obvious decrease in the gauche-gauche and trans-gauche conformations and an increase in the trans-trans conformation, which indicates crystallization from the melt state. To explore this process in depth, a trans bond was defined as an angle larger than a threshold of 170°. The evolution of the trans-trans conformation during crystallization is represented in Fig 6. The figure shows a tendency similar to that of the order parameter, which indirectly supports the consistency of these two measures.

Glass Transition Temperature, Tg

The glass transition temperature (Tg) is an important turning point in the mobility of the chain segments. To obtain Tg, the melt state of the system was first quenched to a sufficiently low temperature of 0.1 to make sure that no crystal lamellae were generated. Subsequently, a heating process was applied at a certain heating rate, and the evolution of volume with temperature was recorded.

Influence of chain length

To investigate the influence of the molecular chain length on the mechanical properties, two distinct ensembles were constructed, both with 10^5 coarse-grained particles but with different chain lengths (200 chains with 500 DP and 100 chains with 1000 DP). Both ensembles experienced the same isothermal crystallization process. The stress-strain curves and entanglement parameters at different temperatures are illustrated in Fig 9. Comparing the stress-strain curves, larger fluctuations are found in Fig 9(a); that is because a higher temperature, T = 0.7, was applied during deformation, which provided high kinetic energy and high chain mobility. At the high temperature of 0.7, the yield stress of the system with short chains was larger than that of the system with long chains; however, a crossover emerges at the late stage of the strain-hardening region. Strangely, at the low temperature of 0.2, the stress of the system with long chains is larger than that of the system with short chains throughout the whole deformation process. A key prerequisite should be understood before clarifying this phenomenon: because of the influence of chain length, the system with long chains has a lower crystallinity and a higher entanglement density. At the high temperature of 0.7, both systems have enough kinetic energy and the chains have strong mobility regardless of chain length; in this condition, the amorphous phase is in a rubbery state. In the elastic region, the influence of chain length and entanglement density is low, and crystallinity is the dominant factor. However, in the strain-hardening regime, more and more chain segments stretch along the deformation direction, and the influence of the entanglements becomes more and more severe, resisting further straining. This is because the more entanglement points there are, the more difficult the mobility of the chain becomes.
At the low temperature of 0.2, all the molecular chains are frozen. The friction in the system with long chains is large, due to the interaction between the long chains and the high entanglement density. It is worth noting that the influence of crystallinity should not be neglected. Taken together, chain length and entanglement density are the dominant factors in the deformation mechanism at low temperature. Summarizing these two deformation mechanisms, there appears to be a critical temperature that separates them, depending on the crystallinity and entanglement density, which are in turn determined by the crystallization process and the chain length.

At the high temperature of 0.7 and slow strain rates, the chain mobility is strong and the amorphous phase is in a rubbery state. Due to the strong chain mobility, the polymer chains can easily orient toward the deformation direction. After the yield point, crystal tilting and crystal lamellae slipping toward the stretching direction occur, which orients the crystal stems toward the deformation direction. Due to the high chain mobility, this transition progresses very quickly. In this stage, unfolding of the chains was not observed before the strain reached 0.5, and strain-induced recrystallization toward the deformation direction was also observed later at the interface between the crystalline and amorphous domains. All in all, it is the orientation of the crystal stems that initiates the strain-hardening behavior after the yield point.

At the low temperature of 0.2, the friction between the polymer chains is large. After the yield point, crystal tilting and slipping toward the deformation direction also occur, but at a low transition rate. The crystal stems orienting toward the deformation direction may increase the stress. In the stress-plateau stage, no crystal breakage was observed; the unfolding process occurred partially, which may decrease the stress. Taken together, a stress plateau may occur after the yield point at the slow strain rate.

The yield stress increases with increasing strain rate and decreasing temperature. This phenomenon can be clarified in terms of chain mobility. When deformed at a high strain rate, the mobility of the molecular chains cannot keep up with the change of strain, and the friction between the molecular chains becomes much larger; thus, when the yield point is reached, the corresponding yield stress is larger than at a small strain rate. The influence of temperature on chain mobility is similar to that of strain rate: at low temperature, the intermolecular and intramolecular motions are resisted, and a larger stress must be applied to make the system yield. From the figures, we can conclude that temperature and strain rate play an important role in determining the yield stress.

The evolution of the density shows two distinct behaviors: at the high temperature the density recovers after the yield point, whereas at the lower temperatures it decreases during the whole deformation process, but with a more slowly declining rate after the initial drop. These two different behaviors may be due to the influence of temperature. In the elastic deformation regime, the crystal blocks remain intact and the deformation of the system is generated almost entirely in the amorphous regions. Along the extension direction, the deformation increases linearly with time, but the lateral contraction is very small in the elastic regime, which leads to an increase of the system volume; based on this, the density decreases with strain. After the yield point, strain-induced crystal tilting and slipping toward the stretching direction occur.
At the high temperature of 0.7, the chain mobility is strong, so the crystal blocks deform quickly toward the extension direction, which allows further lateral contraction. In the later part of the strain-hardening region, strain-induced crystallization along the extension direction may also occur, which likewise increases the density; consequently, the density increases after the yield point. At the temperatures of 0.2 and 0.35, however, the chain mobility is confined, leading to a slow response of the crystal blocks. In this case, crystal breakage and unfolding of the chains are the main structural transition mechanisms, and it is difficult for recrystallization from the amorphous regions to occur through the thermal motion of the molecular chains. Taken together, the density still declines, but at a slower rate. From these two phenomena, it appears that there must be a critical temperature at which the density remains constant after the yield point. Another observation is that the density of the system decreases with increasing strain rate. This is because the slower the strain rate, the smaller the resistance arising from the interactions between the chains; at slow strain rates the molecular chains therefore retain a strong mobility, which makes lateral contraction easier. The system volume is then smaller than at a higher strain rate, so the density is larger at the relatively slow strain rate.

Effects of strain rate and temperature on structure parameters
The relaxation of the bonds also depends on temperature: at high temperature the mobility of the molecular chains is high, which leads to a short bond relaxation time, whereas at low temperature the friction between the molecular chains is high, which leads to a long relaxation time. Another manifestation is the broader bond length distribution with increasing temperature, represented in Fig 11(c), which is consistent with the increase of chain mobility with temperature.

Orientation parameter and entanglement parameter
To characterize the extent of chain stretching along the deformation direction, the orientation parameter was used. It was calculated using Hermans' orientation function,

$$P_{\mathrm{orientation}} = \frac{3\langle \cos^2\theta_i \rangle - 1}{2},$$

where $\theta_i$ is the angle between the bond vector $e_i$ and the reference direction $n$. Since the uniaxial deformation was performed along the x-axis, the unit vector is $n = (1, 0, 0)$. The value of $P_{\mathrm{orientation}}$ varies between -0.5 and 1.0, denoting molecular chains perpendicular or parallel to the stretching direction, respectively. The orientation parameters and entanglement parameters during uniaxial deformation at different temperatures and strain rates are represented in Fig 12. In these figures, the chain orientation parameters all increase with increasing strain, while the entanglement parameters display the opposite trend. During uniaxial deformation, more and more chains align toward the extension direction, which increases the orientation parameter; a disentanglement process occurs simultaneously due to the extension of the polymer chains, which decreases the entanglement parameter. The figures also show that the orientation parameters increase with decreasing strain rate, while the entanglement parameters show a declining tendency.
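Numerically, the orientation parameter reduces to a few lines of analysis code. A minimal numpy sketch follows; the (n_chains, DP, 3) array layout and the function name are assumed conventions rather than the authors' actual script, and unwrapped bead coordinates are assumed.

```python
import numpy as np

def hermans_orientation(coords, axis=(1.0, 0.0, 0.0)):
    """Hermans' orientation parameter of bond vectors with respect to
    a reference axis: (3 <cos^2 theta> - 1) / 2.

    coords : array of shape (n_chains, DP, 3), unwrapped bead positions.
    Returns 1.0 for bonds parallel to the axis, -0.5 for perpendicular.
    """
    axis = np.asarray(axis) / np.linalg.norm(axis)
    bonds = coords[:, 1:, :] - coords[:, :-1, :]        # bond vectors e_i
    bonds /= np.linalg.norm(bonds, axis=-1, keepdims=True)
    cos2 = np.dot(bonds, axis) ** 2                     # cos^2(theta_i)
    return 0.5 * (3.0 * np.mean(cos2) - 1.0)
```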
The increase of the orientation parameter at slow strain rates arises because the chains behave more flexibly there, so the molecular chains can more easily align toward the stretching direction. This is also conducive to the disentanglement process; therefore, the entanglement parameter declines with decreasing strain rate. However, at the temperatures of 0.2 and 0.35, as shown in Fig 12(d) and (f), the entanglement parameter does not decrease with increasing strain rate; the curves are all wound together. It is believed that these temperatures are too low, so the disentanglement speeds at different strain rates become hardly distinguishable from one another. The influence of temperature was also considered in this simulation: to investigate its effect on the evolution of the microstructure during the tensile test, three different temperatures were applied. In Fig 12(h), the entanglement parameter decreases monotonically with increasing temperature, consistent with the increase of chain motility with temperature. At high temperature the relaxation process proceeds easily, which further promotes the disentanglement process. The influence of temperature on the orientation parameter shows a more complicated relation: the orientation parameter does not increase monotonically with temperature, as represented in Fig 12(g); the orientation parameter at T = 0.35 is larger than the value at T = 0.7. This is not the first time this anomalous phenomenon has been found; it already appeared in our previous paper investigating the deformation mechanisms of amorphous polymers [27]. As is well known, the higher the temperature, the more flexible the chain, that is, the more easily the chain aligns toward the stretching direction during deformation. However, another influential factor cannot be neglected: at high temperature the kinetic energy is also high and the relaxation time of the chains is small, which leads to an increase of trans-gauche and gauche-gauche transitions that randomize the segment orientation and lower the orientation parameter.

The evolution of the trans-trans conformation during deformation also depends strongly on temperature. At the high temperature of 0.7, the molecular chains have a strong mobility and the stability of the crystallites is low, which may lead to a partial unfolding process in the crystal domains, with molecular chains moving into the amorphous phase; this explains the initial decrease of the trans-trans conformation. At this temperature, the molecular chains can easily align toward the stretching direction, and strain-induced recrystallization from the amorphous phase into a fibrillar structure can also occur, which in turn makes the trans-trans conformation increase rapidly after a certain transition point. Snapshots of the system during deformation at different temperatures and strain rates are shown in Fig 15. At the lower temperature of 0.35, the friction between the molecular chains is very high and the stability of the crystallites is high. In the initial deformation the crystalline domains remain intact, and it is the orientation of the molecular chains in the amorphous regions that increases the trans-trans conformation. Crystal breakage occurs after the yield point, releasing molecular chains into the amorphous regions, which slows the rate of increase of the trans-trans conformation of the simulation system.
At the temperature of 0.2, the chain mobility is even slower; thus the initial rate of increase of the trans-trans conformation is smaller than at the temperature of 0.35. At the faster strain rates of 5 × 10^-5 and 1 × 10^-4, the trans-trans conformation shows a slow decline after the initial increase. Two aspects are at play. On the one hand, the temperature is very low and the strain rates are high, so the large friction between the molecular chains makes the transition toward the trans-trans conformation very slow. On the other hand, the crystal domains break easily under these conditions, which reduces the trans-trans conformation. It is the interplay of these two aspects that produces the decline of the trans-trans conformation after the initial increase. Subsequently, the trans-trans conformation increases again due to the further orientation of the chains toward the deformation direction. Another phenomenon found in Fig 14 is that the trans-trans conformation is larger at the slow strain rate than at the high strain rate throughout the deformation process. This is because the higher the strain rate, the slower the chain mobility relative to the imposed deformation, and hence the smaller the proportion of the trans-trans conformation during deformation.

Conclusions
Different strain rates and temperatures were applied to the simulation system to investigate their effects on the mechanical properties. The yield stress increases with increasing strain rate and decreasing temperature. During deformation, the density of all systems decreases initially, regardless of the temperature and strain rate; however, at high temperature the density then increases after the minimum, while at low temperature it keeps decreasing, but at a slower rate. The orientation parameter of the system increases with decreasing strain rate, while the entanglement parameter shows the opposite trend. A striking phenomenon is that the orientation parameter does not increase monotonically with temperature. Temperature has a double effect on the polymer chains during deformation: it makes them more flexible and more easily stretched, but it also reduces the relaxation time. The orientation parameter is determined by the interplay of these two effects at a given strain rate. During deformation, the trans-trans conformation shows different evolution processes at different temperatures; it is the interplay of the chain mobility and the stability of the crystal domains that determines the mechanism of the conformational evolution, and the evolution of the trans-trans conformation is closely related to the evolution of the crystal blocks and the formation of fibril structures. It can be concluded that temperature has a great effect on the behavior of semicrystalline polymers during deformation, and that increasing the strain rate has effects on the structural changes similar to those of lowering the temperature. A deep understanding of the effects of temperature and strain rate on the chain mobility and on the stability of the crystalline regions is an important bridge toward revealing the structure/property relations of semi-crystalline polymers during tensile deformation.
2020-05-19T01:01:07.622Z
2020-05-18T00:00:00.000
{ "year": 2020, "sha1": "cfef3cb9384bef7683f667a04e3a0113d825fcc8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cfef3cb9384bef7683f667a04e3a0113d825fcc8", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
52001861
pes2o/s2orc
v3-fos-license
Accurate Titration of Infectious AAV Particles Requires Measurement of Biologically Active Vector Genomes and Suitable Controls

Although the clinical use of recombinant adeno-associated virus (rAAV) vectors is constantly increasing, the development of suitable quality control methods is still needed for accurate vector characterization. Among the quality criteria, the titration of infectious particles is critical to determine vector efficacy. Different methods have been developed for the measurement of rAAV infectivity in vitro, based on detection of vector genome replication in trans-complementing cells infected with adenovirus, detection of transgene expression in permissive cells, or simply detection of intracellular vector genomes following the infection of indicator cells. In the present study, we have compared these methods for the titration of infectious rAAV8 vector particles, and, to assess their ability to discriminate infectious and non-infectious rAAV serotype 8 particles, we have generated a VP1-defective AAV8-GFP vector. Since VP1 is required to enter the cell nucleus, the lack of VP1 should drastically reduce the infectivity of rAAV particles. The AAV8 reference standard material was used as a positive control. Our results demonstrated that methods based on measurement of rAAV biological activity (i.e., vector genome replication or transgene expression) were able to accurately discriminate infectious versus non-infectious particles, whereas methods simply measuring intracellular vector genomes were not. Several cell fractionation protocols were tested in an attempt to specifically measure vector genomes that had reached the nucleus, but genomes from wild-type and VP1-defective AAV8 particles were equally detected in the nuclear fraction by qPCR. These data highlight the importance of using suitable controls, including a negative control, for the development of biological assays such as infectious unit titration.

INTRODUCTION
Adeno-associated viruses (AAVs) were discovered in electron micrographs as contaminants of adenovirus preparations, and soon became a subject of interest for scientists around the world. 1 Over 50 years later, the interest in AAV as a vector for gene therapy continues to grow. AAV belongs to the parvovirus family, specifically the dependoparvovirus subfamily; the members of this subfamily require a helper virus, such as adenovirus (Ad) or herpes simplex virus (HSV), to achieve productive infection and replication. Although it is estimated that 90% of the human population is AAV seropositive, 2,3 these viruses do not cause any known disease in humans, an important safety criterion for their use in gene therapy approaches. Clinical trials using recombinant AAV (rAAV) vectors have shown impressive results for Leber congenital amaurosis, 4 hemophilia B, 5 spinal muscular atrophy (ClinicalTrials.gov: NCT02122952), and other diseases. The first commercial product based on rAAV was approved in 2012 by the European Medicines Agency for the treatment of lipoprotein lipase deficiency, and a second drug could be approved soon according to the positive results of a phase III trial. 4 Nonetheless, a major bottleneck to commercializing these products is the manufacturing of rAAV in accordance with current good manufacturing practices (cGMPs) on a large scale.
Production of rAAV in human cells (HEK293) transiently transfected with plasmids is probably the most common approach, 6 but the use of insect cells and baculoviruses is highly convenient for industrial manufacturing. 7 Other viable approaches consist of using a recombinant HSV complementation system in suspension-cultured mammalian cells (BHK21 or HEK293) 8,9 or mammalian-derived producer cell lines containing the rep and cap genes and the AAV vector integrated into the genome. In the latter case, the amplification of the rAAVs is initiated upon infection by a helper virus, such as Ad. 10,11 Given that the quality attributes of rAAV stocks may differ depending on the manufacturing platform, it is highly important to have accurate analytical methods for their characterization, as emphasized by the FDA. 12 Among the quality attributes, the infectious titer is critical to ensure the efficacy of the product. AAV infection does not result in a cytopathic effect, and, therefore, plaque assays cannot be used to determine infectious titers; but, in the presence of a helper virus, it is possible to induce the replication of AAV genomes and measure infectious events. One of the most widely used methods to titer infectious units is the median tissue culture infective dose (TCID50) assay; it utilizes a HeLa-derived AAV2 rep- and cap-expressing cell line, grown in 96-well plates and infected with replicate 10-fold serial dilutions of AAV vector in the presence of Ad type 5. After infection, vector genome replication is determined by qPCR. 13 Similarly, the infectious center assay (ICA) uses HeLa rep-cap cells and Ad, but, after incubation, cells are transferred to a membrane and infectious centers (representing individual infected cells) are detected by hybridization with a labeled probe complementary to a portion of the recombinant genome. 14-16 In this study, we compared these titration methods using rAAV serotype 8 vectors. In particular, we produced and characterized a VP1-defective AAV8-GFP vector that was used to mimic a non-infectious rAAV vector. 17-19 This non-infectious vector lot allowed us to assess the ability of the different methods to discriminate between infectious and non-infectious rAAV serotype 8 vectors. In addition, another objective of our study was to develop a new protocol for the titration of infectious AAV vector particles using sensitive qPCR-based quantification of intracellular or intranuclear vector genomes following the transduction of a permissive cell line, without helper virus co-infection. Such a procedure could be very useful for the titration of any AAV serotype, including those that do not infect standard cell lines such as HeLa rep-cap cells. Ideally, the protocol could be adapted to any type of cultured cells, including differentiated cells mimicking a targeted tissue, and it could yield infectious titers more predictive of in vivo vector efficiency. Our results demonstrated that the ICA was the most selective method to discriminate between infectious AAV8 particles and the AAV8ΔVP1 negative control, and that it correlated with vector-encoded transgene expression. Moreover, all methods tested for cytoplasm and nucleus fractionation of infected cells and measurement of AAV genomes failed to distinguish infectious AAV8 and VP1-deficient particles. These data highlight the need for appropriate biological assays to accurately measure the infectivity of rAAV stocks and the importance of including relevant controls in testing protocols.
RESULTS
Production and Characterization of a VP1-Defective AAV8 Vector
The aim of the present study was to evaluate the accuracy of different methods for the titration of rAAV infectious particles; thus, we decided to generate a non-infectious AAV vector for use as a negative control. To this end, the ATG initiation codon of VP1 was changed to a stop (TGA) codon in the pKO-R2C8 packaging plasmid encoding the AAV2 Rep and AAV8 capsid proteins. This mutated (pKO-R2C8ΔVP1) plasmid was co-transfected in HEK293 cells with the pAdDF6 helper and pTR-UF11 vector plasmids to produce an AAV8-GFP vector lacking VP1. The AAV8-GFP control vector was produced in parallel using the original pKO-R2C8 plasmid, to obtain an infectious vector produced by the same method (i.e., three-plasmid transfection). Preliminary testing of AAV8ΔVP1 production demonstrated not only that vector genome packaging actually occurred into VP2 and VP3 particles but also that vector genome (VG) titers were reduced compared to a vector with the wild-type AAV8 capsid composed of the VP1, VP2, and VP3 polypeptides (data not shown). Thus, the AAV8ΔVP1-GFP vector stock was produced through transfection of three CellStack-5 chambers (CS5), whereas a single CS5 was used for the control AAV8-GFP vector with wild-type capsid, but both vectors were then processed identically. This resulted in an AAV8ΔVP1-GFP vector stock with a higher VG titer (3.3 × 10^13 and 2.4 × 10^13 VG/mL based on the bGH and SV40 polyA sequences, respectively) than the AAV8-GFP control vector stock (8.8 × 10^12 and 8.8 × 10^12 VG/mL based on the bGH and SV40 polyA sequences, respectively), following purification through CsCl gradients (Table 1). Total AAV8 capsid titers were determined by ELISA for the calculation of the total:full particle ratio of the CsCl-purified preparations, indicative of vector quality (Table 1). For the vectors with the wild-type AAV8 capsid, the ELISA (total particles) and qPCR (full, recombinant genome-containing particles) titers were essentially the same, indicating that they contained almost exclusively particles with encapsidated VG, similar to the rAAV8RSM. 20 In contrast, the AAV8ΔVP1-GFP vector stock contained 3-fold more total particles than VG-containing particles, i.e., two-thirds of the particles had no DNA (empty) or illegitimate (non-vector) DNA encapsidated. Thus, although rAAV-UF11 VGs were actually encapsidated into VP2 and VP3 particles, the absence of VP1 apparently resulted in a lower packaging efficiency. However, we cannot exclude that this result was due to an ELISA quantification bias. Indeed, the ADK8 antibody used in the ELISA has its conformational epitope localized in VP3, 21 and this epitope may be more accessible in the VP1-deleted capsid than in the wild-type AAV8 capsid, thus resulting in a higher ELISA signal. SDS-PAGE analysis of the vector preparations showed that all AAV8-GFP vectors had similar purity, ranging from 91% for AAV8ΔVP1-GFP to 96% for internal control 1 (IC1), as determined by optical density scanning of Coomassie blue-stained gels (Figure 1A). It also showed the complete absence of VP1 in the AAV8ΔVP1-GFP preparation, which was further confirmed by western blot analysis with the anti-VP1/2/3 antibody B1 (Figure 1B, lane 4). 22 As an additional quality control, particle size was measured in the AAV8-GFP control and AAV8ΔVP1-GFP vector preparations by dynamic light scattering (DLS) (Figure S2).
The results showed that both vector preparations had a very similar particle size distribution (24.57 ± 7.02 and 24.33 ± 7.38 nm, respectively) and were quite homogeneous, with no detectable particle aggregates (Table S2). The AAV8-GFP control and AAV8ΔVP1-GFP vectors were also analyzed by differential scanning fluorimetry (DSF) and shown to have a similar thermal stability, with a melting temperature (Tm) of ∼70.5 °C. 23 These data indicated that the absence of VP1 had no major impact on AAV8 particle size and capsid stability in Dulbecco's phosphate-buffered saline (DPBS).

Analysis of AAV8 Vector Intracellular Trafficking
To further characterize the AAV8ΔVP1 vector, we analyzed AAV8 particle intracellular trafficking in HeLa cells infected with AAV8ΔVP1-GFP or the AAV8-GFP control vector. To this end, HeLa cells were infected at an MOI of 20,000 VG/cell for 1, 5, or 16 hr; intracellular AAV8 particles were labeled with the ADK8 antibody, which specifically recognizes assembled AAV8 capsids, 24 and an Alexa Fluor 555 secondary antibody; and nuclei were stained with the DraQ5 fluorescent dye. Confocal microscopy images (Figure 2A) were then analyzed, using the compartmentalization task of the Volocity software, to count viral capsids according to their localization. Intranuclear particles were counted using the "inside" class (full colocalization with DraQ5 staining), perinuclear particles were calculated as the difference between the "overlapping" (full or partial colocalization with DraQ5) and "inside" classes, and cytoplasmic particles were the difference between the "nearest by edge" (no colocalization with DraQ5) and "overlapping" classes. The results indicated that assembled AAV8 particles entered the cytoplasm and then accumulated in the nucleus over time, the proportion of perinuclearly localized particles remaining constant (Figure 2B), as already described for AAV2. 19,25 In contrast, AAV8ΔVP1 particles were not actively entering the nucleus, as expected, and rather accumulated in the perinuclear region (Figure 2B). About 10% of intracellular AAV8ΔVP1 particles were detected within the nucleus, which may represent the background of the method, possibly due to a technical artifact, but not to non-specific antibody binding, since no signal was detected in non-infected HeLa cells. Since the HeLa cells were dividing, we hypothesized that intracellular AAV8ΔVP1 particles were able to enter the nucleus during cell division, when the nuclear membrane is disrupted. When comparing the AAV8 and AAV8ΔVP1 particle distributions by a two-tailed Mann-Whitney test, a significant difference was found at 5 hr post-infection (p = 0.0001) for intranuclear capsids, and at 16 hr post-infection for both intranuclear (p < 0.0001) and perinuclear (p = 0.0028) capsids. Overall, this analysis clearly confirmed that the intracellular trafficking of AAV8 particles is altered in the absence of VP1, resulting in poor or absent nuclear translocation.

Figure 2. (A) Cells were non-infected (no AAV) or infected with the AAV8 control or AAV8ΔVP1 at a multiplicity of 20,000 VG/cell, and then fixed after 1, 5, or 16 hr. Cell nuclei stained with DraQ5 appear in red, and assembled AAV8 particles stained with Alexa Fluor 555 appear in blue, green, or cyan, depending on their localization (cytoplasmic, intranuclear, or perinuclear, respectively). (B) Quantitative analysis of the immunofluorescence pictures. AAV8-assembled particles were quantified in the intranuclear, perinuclear, and cytoplasmic cellular compartments at 1, 5, and 16 hr post-infection with AAV8-GFP (upper panel) or AAV8ΔVP1-GFP (lower panel). Results obtained with AAV8ΔVP1 and AAV8 were compared by a two-tailed Mann-Whitney test for each cell compartment. *p < 0.05, **p < 0.005, ***p ≤ 0.0001; N = total number of AAV8 particles counted at each time point. Data are presented as mean ± SD.
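The compartment counts described above are simple differences of the colocalization classes reported by the software. A sketch of that bookkeeping follows (the class names follow the text; the function itself is illustrative):

```python
def compartment_counts(inside, overlapping, nearest_by_edge):
    """Convert Volocity colocalization classes into compartment counts.

    inside          : particles fully colocalized with DraQ5
    overlapping     : particles fully or partially colocalized with DraQ5
    nearest_by_edge : all detected particles, colocalized or not
    """
    return {
        "intranuclear": inside,
        "perinuclear": overlapping - inside,
        "cytoplasmic": nearest_by_edge - overlapping,
    }

# e.g., compartment_counts(inside=120, overlapping=310, nearest_by_edge=900)
# -> {'intranuclear': 120, 'perinuclear': 190, 'cytoplasmic': 590}
```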
Titration of Infectious AAV8 Particles through the Detection of VG Replication
One method that is widely used to quantify infectious AAV vectors consists of infecting trans-complementing cells that have stably integrated the AAV2 rep and cap genes, 26 such as HeRC32 cells. 27 When infected with Ad, these so-called packaging cells express both the AAV Rep and Ad helper proteins, allowing replication of the recombinant AAV genomes that have reached the nucleus, which correspond to infectious vector particles. Here we compared two methods based on this principle for the titration of infectious units (IUs) in AAV8 vector lots, which differ in particular in the way VG replication is detected. The TCID50 uses qPCR as the detection method, and the titer is calculated by the Spearman-Kärber method. 13,28 In contrast, the ICA uses whole-cell DNA hybridization to detect the cells in which VG replication happened. 14 Another major difference is that infected cells are harvested 72 hr post-infection in the standard TCID50 assay but only 24 hr post-infection in the ICA currently performed in our laboratory. The results obtained with the AAV8 control and AAV8RSM vectors indicated that both vectors have a similar infectivity when comparing the VG:IU ratios obtained with each method (Table 2; Figure S3A). For both vectors, the VG:IU ratios calculated from the TCID50 titers (4.6 × 10^2 and 3.5 × 10^2) were 40- to 60-fold lower than those calculated from the ICA titers (2.5 × 10^4 and 1.5 × 10^4), indicating either that the TCID50 is more sensitive and can detect infectious particles not detected by the ICA or that it overestimates the infectious titer due to a higher background. For the AAV8ΔVP1 vector, the VG:IU ratio obtained with both assays was higher than for the controls, confirming an altered infectivity of VP1-defective particles (Table 2; Figure S3A), but, surprisingly, it was not null. The detection of positive cells in the ICA was not due to non-specific hybridization of the GFP probe, as shown by the absence of background on non-infected (Figure S3A) and AAV8-LacZ-infected cells (Figure S3B). We hypothesized that the apparently infectious AAV8ΔVP1 particles could be explained by the relatively high AAV multiplicity used and the presence of Ad, known to be an intracellular carrier for various biological molecules, especially through its endosomolytic activity. 29,30 Thus, the high Ad multiplicity (500 IUs/cell) used in the TCID50 and ICA may help non-infectious AAV particles to escape the endosomes and reach the cell nucleus. Importantly, in the case of AAV8ΔVP1, the VG:IU ratio calculated with the ICA (3.1 × 10^7) was about 4 log10 higher than that calculated with the TCID50 (2.9 × 10^3). In addition, the mean of the TCID50 results from 3 independent assays indicated only a 6-fold difference in VG:IU ratio between the AAV8 control and AAV8ΔVP1 vectors (p = 0.1), while the VG:IU ratio calculated by the ICA method resulted in a 1,240-fold difference between the AAV8 control and AAV8ΔVP1, that difference being statistically significant (p = 0.0002).
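For readers unfamiliar with the endpoint calculation used by the TCID50, the sketch below implements the Spearman-Kärber formula on qPCR-scored wells. It assumes a complete 10-fold series that brackets the endpoint (from an all-positive to an all-negative dilution) and an inoculum volume supplied by the caller; it is a didactic sketch, not the validated AAVRSMWG calculation sheet.

```python
def tcid50_per_ml(positive, replicates, first_dilution_log10=1.0,
                  step_log10=1.0, inoculum_ml=0.05):
    """Spearman-Karber 50% endpoint from replicate 10-fold dilutions.

    positive : positive wells per dilution, most concentrated first,
               starting at a fully positive and ending at a fully
               negative dilution, e.g., [8, 8, 4, 0] for 8 replicates.
    first_dilution_log10 : log10 of the reciprocal of the first dilution.
    """
    p = [x / replicates for x in positive]
    assert p[0] == 1.0 and p[-1] == 0.0, "series must bracket the endpoint"
    # log10 of the reciprocal 50% endpoint dilution
    m = first_dilution_log10 + step_log10 * (sum(p) - 0.5)
    return 10.0 ** m / inoculum_ml

# e.g., tcid50_per_ml([8, 8, 4, 0], replicates=8)  # endpoint at the 10^-3 dilution
```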
One additional observation that confirmed the infectivity defect of AAV8ΔVP1 was the absence of AAV replication in the control ICA performed on HeLa cells co-infected with Ad5 (Figures S3A and S4A). In contrast, some AAV replication was detected with the AAV8 control vector, which was correlated with the presence of infectious rep-positive particles that were clearly detected by probing the ICA membranes with a rep probe (Figure S4B). This vector lot was found to contain 8.5 × 10^3 rep-positive infectious particles per milliliter (i.e., 1 rep-positive per 4 × 10^4 AAV8-GFP infectious particles). Such rep-positive particles are known to be generated during vector production when using three-plasmid transfection with a rep-cap plasmid containing a full-length p5 promoter. 31 Importantly, no replication was detected in HeLa cells with the AAV8RSM (Figure S3A) or the AAV8 internal control 2 vector (Figure S4), both produced by double transfection using the large helper plasmid pDP8, in which the MMTV LTR replaces the p5 promoter. 20,32 The high background observed with the humanized green fluorescent protein (hGFP) probe on HeLa cells infected with the AAV8-GFP IC2 and AAV8-GFP ΔVP1 vectors was likely due to the very high vector input (Figure S4A). The signal observed with the rep probe on HeRC32 cells was due to Ad-induced amplification of the integrated rep-cap sequences (Figure S4B). The discrepancy between the TCID50 and ICA results obtained with the AAV8ΔVP1 vector may indicate that the current TCID50 assay indeed has a higher background and overestimates vector infectivity compared to the ICA. In addition to the sensitivity of the detection method (qPCR versus blotting), this difference could also be explained by the different infection durations used in the assays (i.e., 3 days for the TCID50 versus 1 day for the ICA), which may allow non-infectious AAV genomes to enter the nucleus during cell divisions. To assess the impact of the incubation time on TCID50 titers, additional assays were performed using a 24-hr incubation, similar to the ICA. The infectious titers obtained with this modified TCID50 assay were reduced 7-fold and 11-fold for the AAV8 control and AAV8ΔVP1 vectors, respectively, and they resulted in an increased difference (11-fold versus 6-fold) in VG:IU ratio between the two vectors (Table 2). Thus, by reducing the infection time of the protocol, it is possible to improve the accuracy of the TCID50 assay. Indeed, the VG:IU ratio calculated for the AAV8 control vector using this modified TCID50 was closer (less than 10-fold higher) to that calculated with the ICA. However, this ratio was still 1,000-fold higher for the AAV8ΔVP1 vector. To further investigate the differences between the TCID50 and ICA, additional experiments were conducted using a different AAV serotype, i.e., AAV2. When the TCID50 was performed with the AAVRSMWG 28 vector and an internal control AAV2 vector (AAV2 IC) following the standard adeno-associated virus reference standard material working group (AAVRSMWG) method, i.e., using a 72-hr incubation, the calculated VG:IU ratios were 1.5 and 1.3, respectively (Table S4), indicating almost 100% infectious particles in both vector lots, which seems rather unlikely and supports the assumption that the standard TCID50 may overestimate the infectious titer in some cases. Indeed, the VG:IU ratio published by the adeno-associated virus type 2 reference standard materials working group (AAVRSMWG) was 7.5. 28
By reducing the incubation time to 24 hr, the TCID50 infectious titers were reduced 7.8- and 25-fold for the AAVRSMWG and AAV2 IC vectors, respectively, which resulted in VG:IU ratios of 11.7 and 33.1, respectively. The infectious titers determined by this modified TCID50 assay were closer to those obtained by the ICA method (Table S4), i.e., almost identical for the AAV2 IC vector and only 6.7-fold higher for the AAVRSMWG. Based on these results and those obtained with the AAV8 control vector, it appears that a 24-hr incubation is sufficient to determine consistent infectious titers using both the TCID50 and ICA methods, and that it decreases the discrepancy between the two assays. In addition, reducing the incubation time is likely to decrease the variability of the TCID50, as already shown for the ICA. Indeed, in the original ICA protocol published by our laboratory, 14 the infection time was fixed at 42-44 hr using an Ad5 multiplicity of 50 IUs/cell, but subsequent studies demonstrated that 24-26 hr of infection using a higher Ad5 multiplicity (500 IUs/cell) was sufficient to detect rAAV replication in infected cells. This shorter incubation also resulted in less background signal and more reproducible results, which was correlated with the appearance of the Ad-induced cytopathic effect around 36 hr post-infection. Since a high Ad5 multiplicity is also used in the TCID50 and the qPCR-based detection is highly sensitive, the use of a 24-hr incubation was considered an improvement for this assay.

Titration of Infectious AAV8 Particles by Transgene Expression Assay
Another biological assay that is commonly used to test the infectivity of AAV preparations consists of infecting permissive cells and measuring transgene expression to determine a titer in transducing units (TUs). Similar to the replication assays described above, transduction requires the entry of the rAAV vectors into the cells, the translocation of the VGs to the nucleus and their conversion into double-stranded DNA, and, in addition, transcription of the transgene. The readout is the detection of the transgene-encoded protein. To analyze GFP transgene expression from the AAV8 and AAV8ΔVP1 vectors, we used HeLa cells because they have a permissiveness similar to that of HeRC32 cells, thus allowing comparison of the infectious titers obtained by the TU, TCID50, and ICA methods. HeLa cells were infected at controlled AAV multiplicities in the presence or absence of Ad (Addl324) at 50 IUs/cell. Analysis of GFP expression by fluorescence-activated cell sorting (FACS) showed a defect in infectivity for the AAV8ΔVP1 vector (p = 0.0782), and the amount of GFP-positive cells was found above background only at the highest AAV multiplicity (Figure 3). In contrast, a significant amount of GFP-positive cells was found with AAV8 compared to AAV8ΔVP1, in particular at MOIs greater than or equal to 5 × 10^4 VG/cell. Transgene expression was enhanced by Ad co-infection, most likely by promoting second-strand DNA synthesis 33 and possibly endosomal escape of AAV, as discussed above. According to the infectious titers calculated by FACS analysis, the difference in VG:IU ratio between the AAV8 and AAV8ΔVP1 vectors was 3 log10 without Ad and 2.3 log10 when Addl324 was added, which is consistent with the results obtained using the ICA method.
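A sketch of the TU computation from the FACS readout follows (the underlying formula is given in the Materials and Methods; the function and variable names are illustrative):

```python
def tu_per_ml(pct_gfp, pct_gfp_bg, cells_per_well, vector_volume_ul):
    """Transducing units per mL from flow cytometry data.

    pct_gfp          : % GFP-positive cells in the infected well
    pct_gfp_bg       : % GFP-positive cells in non-infected control wells
    cells_per_well   : total number of cells per well
    vector_volume_ul : volume of vector used to infect, in microliters
    """
    n_gfp = (pct_gfp - pct_gfp_bg) / 100.0 * cells_per_well  # background-corrected count
    return n_gfp / vector_volume_ul * 1000.0                 # per uL -> per mL

# e.g., tu_per_ml(12.4, 0.1, cells_per_well=1e5, vector_volume_ul=10)
```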
Evaluation of Sample Preparation Methods for the Isolation of Intracellular AAV VGs
In an attempt to develop a new, more sensitive and accurate assay, we evaluated the possibility of quantifying infectious AAV particles through qPCR-based detection of intracellular or intranuclear VGs. Indeed, such a procedure would be of interest for vector serotypes that do not infect HeLa rep-cap cells, such as HeRC32, and it would be virtually adaptable to any type of permissive cells. In addition, the procedure would not require Ad co-infection, thereby avoiding a possible bias caused by Ad-induced endosome disruption. To this end, we tested several protocols to isolate vector DNA from AAV8-infected cells. For the development of this so-called infectious genome (IG) assay, we used HeLa cells to allow comparison of the titers with the other methods. Cells were seeded in 12-well plates, infected (or not) for 16 hr at different MOIs (2,000 or 10,000 VG/cell) with the AAV8 control or AAV8ΔVP1 vectors, and harvested by trypsin-EDTA treatment. Next, we tested four sample preparation procedures prior to DNA isolation. After washing with PBS, harvested cells were either kept intact and washed again with PBS to remove extracellular AAV particles (procedure 1) or submitted to cytoplasm/nucleus fractionation (procedures 2, 3, and 4). Fractionation was performed in order to remove cytoplasmic AAV particles and to isolate the VGs of infectious particles from the nucleus. To achieve this, three different protocols were tested and compared. The first one (procedure 2) used cell lysis and nuclei-stabilizing solutions, allowing the preparation of a single-cell nuclei suspension for cell counting and viability analysis (http://shop.chemometec.com/product/reagent-a100-500-ml/). The second one (procedure 3) is based on a commercial cell fractionation kit (NE-PER Nuclear and Cytoplasmic Extraction, Thermo Scientific) that had already been used for the analysis of AAV2 intracellular trafficking in a recent study. 34 The last one (procedure 4) was adapted from a protocol that was used for the analysis of the subcellular localization of factors of the RNAi pathway. 35 All procedures are described in detail in the Materials and Methods. The purity of both the cytoplasmic and nuclear fractions was controlled by western blot using antibodies against proteins localized exclusively in the cytosol (α-tubulin), the mitochondrion (ATP-synthase β subunit), or the nucleus (lamin B). Since the protein samples were precipitated with acetone to obtain equal sample volumes prior to SDS-PAGE, purified His-tagged Rep68 protein was added as a loading control. The signal obtained with the spiked Rep68 protein was equivalent in all lanes, showing that protein precipitation by acetone was equally efficient for all samples (Figure 4A). The results with procedure 2 showed that fractionation was not efficient, since the cytoplasmic (lane 3) and nuclear (lane 7) fractions both contained the 3 indicator proteins, similar to the whole-cell lysate prepared by procedure 1 (lane 8). Procedures 3 and 4 both resulted in apparently pure cytoplasmic fractions in which lamin B was not detected (lanes 1 and 2), but a pure nuclear fraction was obtained only with procedure 4 (lane 6), showing no α-tubulin signal and no detection of ATP-synthase. Indeed, a weak but clear α-tubulin signal was detected in the nuclear fraction obtained with procedure 3 (lane 5).
Since the detection of intracellular VGs is based on a DNA method and not a protein method, we decided to also control the cell fractionation by qPCR. To this end, the cytoplasmic and nuclear fractions as well as whole-cell samples were analyzed with albumin and cytochrome B primers, as markers of nuclear genomic DNA (gDNA) and cytoplasmic mitochondrial DNA (mtDNA), respectively. The qPCR analysis showed similar results for procedures 2 and 4 (Figure 4B), for which the nuclear fractions were enriched in gDNA but still contained mtDNA. Although this contaminating mtDNA represented less than 0.5% of the total mtDNA detected in control cells, it still amounted to more than 10^6 copies per sample. Similarly, more than 10^4 copies of gDNA were detected in the cytoplasmic fractions from both procedures, representing around 1.5% of the total gDNA detected in control cells. Regarding procedure 3, the distribution of gDNA and mtDNA between the nuclear and cytoplasmic fractions was very similar, but the copy numbers in both fractions were lower for both albumin and cytochrome B compared to the other methods. In particular, the albumin copy number in the nuclear fraction represented only 16.5% of that detected in control cells, which may reflect a problem of DNA recovery when using the NE-PER kit, which is primarily intended to extract proteins. Strikingly, the amount of mtDNA was higher in the nuclear fractions than in the cytoplasmic fractions for all three fractionation procedures, demonstrating that nuclei are not efficiently separated from cytoplasmic organelles. The results obtained by qPCR were in contradiction with the western blot analyses, which showed efficient nuclei isolation when using procedure 4, maybe due to the higher sensitivity of the qPCR compared to immunoblotting. Although the method published by Gagnon et al. 35 (procedure 4) appeared to be the most efficient in terms of purity of the subcellular fractions, our results showed that none of the cell fractionation methods tested can completely separate nuclei from cytoplasmic components.

Titration of AAV8 Particles in Cellular Fractions by qPCR
DNA samples prepared by the four different procedures were also analyzed using a qPCR assay targeting the SV40 poly(A) sequence for quantification of intracellular, cytoplasmic, and intranuclear AAV VGs. The qPCR results obtained with the different MOIs were normalized to an input of 1,000 VG/cell (Figure 5) to facilitate comparison. Interestingly, the amount of intracellular VGs was higher with AAV8ΔVP1 compared to AAV8 (p = 0.015), showing that washing the cells with PBS does not remove defective particles lacking VP1. This is consistent with the fact that AAV interaction with cellular receptors and cell entry do not depend on the presence of VP1. 19 A more striking observation was the higher amount of VGs detected in the nuclear fractions with AAV8ΔVP1 compared to AAV8 (p = 0.011), which is consistent with the confocal microscopy results showing perinuclear accumulation of VP1-defective particles (Figure 2) and with the inefficient separation of nuclei (Figure 4B). Finally, similar results were observed in the cytoplasmic fractions (p = 0.047), suggesting that the VP1 defect resulted in an overall increase of intracellular VGs at 16 hr post-infection.
One possible explanation could be that AAV8 particles undergo a pH-induced conformational change during trafficking through the late endosome, 36,37 leading in particular to the externalization of the VP1 N-terminal domain and capsid destabilization, 38 which may increase their sensitivity to proteolytic degradation. In the absence of VP1, the effect of low pH on capsid stability may be less significant, leading to an increased intracellular persistence of VP1-defective particles. Infectious titers of AAV were calculated using the values obtained for intracellular (procedure 1) or intranuclear (procedures 2-4) VGs (Table 2). The formula used for titer calculation considered both the number of cells at infection and the amount of gDNA (based on the albumin gene copy number), in order to normalize the VG copy values to the efficiency of nuclear DNA recovery. The infectious titer for the AAV8 control was in the same range as those obtained with the standard methods, TCID50 and ICA (Table 2). However, lower VG:IU ratios were consistently obtained with the AAV8ΔVP1 vector with all four sample preparation procedures, wrongly indicating a higher infectivity for AAV8ΔVP1. This result is in conflict with the VG:IU ratios calculated using the TCID50, ICA, and TU methods, and it does not reflect the current knowledge of VP1 function, recognized as critical for AAV transduction. In summary, none of these methods was able to discriminate between a fully functional (i.e., infectious) AAV8 vector and a non-infectious counterpart lacking the VP1 capsid protein.

DISCUSSION
The aim of the present study was to compare different methods for the titration of infectious AAV vector particles and to evaluate their ability to accurately discriminate infectious and non-infectious vectors. The study was focused on AAV serotype 8 vectors, a capsid serotype that allows highly efficient gene transfer in vivo in different tissues and animal models 39 and has been used successfully in clinical trials. 5 In addition, a fully characterized reference standard material (AAV8RSM) is available to the community for this serotype. 20 To challenge the titration methods, we generated an AAV8 vector lacking the large capsid protein VP1, known to be essential for AAV endosomal escape and nuclear translocation following cell entry. 17,19 In particular, virus infectivity is largely dependent on the phospholipase A2 17 and basic domains 18 located in the VP1 N-terminal sequence. To our knowledge, this is the first time that such a VP1-negative control has been used in infectious titer assays. Discriminating the infectivity of AAV particles depending on their VP1 content is currently of high importance, because it has been shown that the baculovirus and insect cell system in particular can generate rAAV with variable VP1 content, depending on the cap sequence design. 40-42 Moreover, during gene therapy drug development, a change in the upstream or downstream manufacturing process may be required to become more amenable to commercial scale. In this case, it will be mandatory to conduct equivalence studies, and accurate characterization of the infectivity of the rAAV products will then be critical for regulatory approval. Here we show that AAV8 nuclear targeting following cell entry is clearly altered in the absence of VP1, as shown by the immunofluorescence analysis of the intracellular trafficking of AAV8ΔVP1 particles.
In the same line, the ICA titration method showed that AAV8ΔVP1 was 1,000-fold less infectious than AAV8, consistent with the almost complete absence of GFP transgene expression. In contrast, with the TCID50 titration method, the difference in infectivity (VG:IU ratio) between AAV8 and AAV8ΔVP1 was only 6-fold, suggesting that a partial loss of infectivity of an AAV8 lot might be more difficult to detect by TCID50 than by the ICA method. Thus, the ICA appears to be a more discriminating method to distinguish between an infectious and a non-infectious AAV8 vector, and it is applicable independently of the transgene cassette, in contrast to transgene expression assays (not suitable for, e.g., coding sequences controlled by tissue-specific promoters and non-coding sequences such as small hairpin RNA [shRNA]). We acknowledge that there could be differences between serotypes, and it would thus be interesting to implement a similar method evaluation using VP1-defective control vectors for other AAV serotypes. It is worth mentioning that the TCID50 method was originally developed with AAV2, a serotype that is much more infectious than AAV8 on cultured cell lines. 43 Indeed, standard TCID50 assays performed using the AAVRSMWG and an internal AAV2 vector resulted in VG:IU ratios close to 1, whereas this ratio was between 350 and 460 for the AAV8RSM and AAV8 control vectors. Hence, some adjustments of the TCID50 method could be implemented to improve the accuracy of the method for AAV8 or other serotypes with low infectivity in vitro. Among the possible adjustments, we investigated a reduction of the infection time to 24 hr (instead of 72 hr in the standard method). This modification resulted in an almost 2-fold larger difference in infectivity (VG:IU ratio) between AAV8 and AAV8ΔVP1 (i.e., 11-fold versus 6-fold), and thus it could be considered an improvement of the method. Reducing the incubation time in the TCID50 assay also resulted in about 10-fold lower infectious titers for both the AAV8 and AAV2 vectors, thus mitigating the discrepancy between the TCID50 and ICA methods. However, the infection-defective AAV8ΔVP1 vector was still found to be 1,000-fold more infectious with the modified TCID50 than with the ICA. On the other hand, we found that none of the four physicochemical methods tested to fractionate cytoplasm versus nuclei was selective enough to specifically detect AAV8 infectious genomes by qPCR. Nonetheless, cell fractionation could be suitable for AAV serotypes with high infectivity in vitro, as shown by Salganik et al. 34 using AAV2 on HeLa cells. Indeed, these authors were able to distinguish the VG distribution of wild-type and different capsid mutants using the NE-PER kit for cell fractionation (similar to procedure 3 in this study) and qPCR. Since none of the cell fractionation methods tested here was suitable for AAV8, further evaluation of these methods using AAV2 was not considered. Moreover, we did not include cell fractionation methods based on ultracentrifugation in sucrose gradients to separate cell compartments, 18,19 since they were considered very difficult to implement in quality control laboratories following good laboratory practices (GLPs), in particular because (1) they are laborious, (2) they need ultracentrifugation equipment, (3) they require large amounts of cells (and thus large amounts of vector preparation), and (4) they would be difficult to standardize.
In conclusion, our data demonstrate that transgene expression and the ICA were the most sensitive methods to detect changes in the infectivity of AAV8 stocks. Moreover, the lessons learned during the development of the cell fractionation protocols were key to understanding the limitations of the current tools (e.g., cell lines) and the need for developing more accurate protocols. This study also highlights the importance of including suitable positive and negative controls for the evaluation of analytical methods, and it suggests implementing VP1-defective reference vectors as negative controls for the validation of AAV infectivity assays.

MATERIALS AND METHODS
Plasmids
The pAdDF6 helper plasmid, which contains the E2A, VA RNA, and E4 Ad helper functions, was obtained from the Penn Vector Core (Philadelphia, PA, USA). The pDP8-KanR plasmid contains the same elements as pDP8 used for manufacturing of the rAAV8RSM (i.e., E2A, VA RNA, and E4 Ad helper functions, as well as the AAV2 rep and AAV8 cap sequences), except that the ampicillin resistance was replaced by kanamycin resistance for amplification in E. coli. The pCQAAV1 plasmid is a derivative of pSub201 containing all the qPCR target sequences used in this study, i.e., the human albumin gene, the human cytochrome B gene, and the SV40 and bGH polyA signals.

AAV8 Vector Preparations
AAV8 vectors were manufactured as described for the rAAV8RSM. 20 Briefly, HEK293 cells in CS5 were transfected with two or three plasmids by the calcium phosphate precipitation method, and the AAV8 particles in the culture supernatant were polyethylene glycol (PEG)-precipitated, purified by double CsCl density gradient ultracentrifugation, and finally formulated in 1× DPBS containing Ca2+ and Mg2+ through dialysis in Slide-A-Lyzer 10K cassettes (Thermo Scientific, Illkirch, France). The AAV8 internal control vectors (IC1 and IC2), used as quality control standards in our laboratory, were produced by co-transfection of the pTR-UF11 vector plasmid and the pDP8-KanR helper plasmid. The AAV8 control and AAV8ΔVP1 vectors were produced by co-transfection of pTR-UF11, the pAdDF6 helper plasmid, and either pKO-R2C8 or pKO-R2C8ΔVP1. The AAV8 reference standard material (rAAV8RSM, ATCC VR-1816) was used as an additional control.

VG Titration by qPCR
Purified AAV vectors (3 µL) were treated with 4 U DNase I (Sigma-Aldrich) in DNase buffer (13 mM Tris [pH 7.5], 0.12 mM CaCl2, and 5 mM MgCl2) for 45 min at 37 °C. Then, the DNase I-resistant nucleic acids were purified with the NucleoSpin RNA Virus kit (Macherey-Nagel, Hoerdt, France), and VGs were quantified by TaqMan qPCR in Premix Ex Taq probe qPCR master mix (TaKaRa Bio, Saint-Germain-en-Laye, France). Primers were targeted to either the SV40 or the bGH polyA signal (Table S1). Standard curves were obtained using 10^2-10^8 copies of pTR-UF11 plasmid linearized with ScaI (New England Biolabs, Evry, France). VG titers were calculated from at least 3 independent titrations, and the mean values are reported in Table 1.

SDS-PAGE and Western Blot Analysis of AAV Vector Preparations
For SDS-PAGE analysis of the rAAV preparations, vectors were denatured for 5 min at 95 °C in Laemmli buffer and loaded on 10% Tris-Glycine polyacrylamide gels (Life Technologies). Precision Plus Protein All Blue Standards (Bio-Rad, Marnes-la-Coquette, France) was used as a molecular weight marker. Following electrophoresis, gels were either submitted to Coomassie blue staining (Imperial Protein Stain, Thermo Fisher Scientific) or transferred onto nitrocellulose membranes for western blot analysis.
Membranes were probed with the monoclonal antibody B1, which recognizes all three AAV capsid proteins. The amounts of VGs loaded on the gels were 1.5 × 10^11 and 1.5-3.0 × 10^9 for Coomassie blue staining and western blotting, respectively. For the determination of vector purity, images of the Coomassie blue-stained gels were analyzed with the Gene Tools software (Syngene, Cambridge, UK) to determine the signal intensity of each protein band. Purity was then calculated as the relative amount of the VP1, VP2, and VP3 proteins over the total amount of protein present in the sample, all extra bands being counted as non-expected contaminant proteins.

Fluorescence Microscopy Analysis
HeLa cells were seeded in 8-well µ-Slide ibiTreat chambers (Biovalley, Nanterre, France) at 2 × 10^5 (for 1- or 5-hr infections) or 8 × 10^4 (for 16-hr infections) cells/well. The day after, cells were infected or not with the rAAV vectors at a multiplicity of 20,000 VG/cell in duplicate. After 1, 5, or 16 hr, cells were washed in PBS, fixed with 4% paraformaldehyde in PBS, and permeabilized with 0.2% Triton X-100 in PBS. Permeabilized cells were blocked with PBS containing 10% goat serum (Sigma-Aldrich), incubated with the ADK8 antibody (undiluted hybridoma culture supernatant) recognizing the intact AAV8 capsid, 21,47 and washed with PBS. Cells were then incubated with an anti-mouse Alexa Fluor 555 secondary antibody (1:200 in PBS), washed with PBS, and incubated with DraQ5 (1:1,000 in PBS) for nuclei staining (Interchim, Montluçon, France). Slides were finally mounted with Prolong Gold antifade reagent (Life Technologies). Images were captured on a Nikon A1 confocal microscope (Nikon Instruments, Amsterdam, Netherlands), and the quantification of AAV8 particles according to their localization was performed using the compartmentalization task of the Volocity software (PerkinElmer, Courtaboeuf, France). Images were processed with the ImageJ software (https://imagej.nih.gov/ij/) for picture presentation.

Infectious AAV8 Particle Titration by the TCID50 Assay
Titration of infectious AAV8 particles by qPCR was performed following the procedure for the rAAV8RSM infectious titer of the Adeno-Associated Virus Reference Standard Working Group, available on the ATCC website (https://www.lgcstandards-atcc.org/~/media/AAV8_Information/AAV8%20RSM%20Infectious%20titer%20assay.ashx). A modified assay was also tested in which the analysis was performed at 24 hr post-infection (instead of 72 hr in the standard procedure). For titration of the AAV8 control and AAV8ΔVP1, the vector preparations were diluted in 1× DPBS to obtain a VG titer approaching that of the AAV8RSM, in order to perform the serial dilutions in exactly the same way. TCID50 titers were calculated from two independent assays.

Infectious AAV Particle Titration by the ICA
ICAs were performed as previously described, 14 except that the incubation time and Ad concentration were modified. Briefly, HeRC32 cells were seeded in 48-well plates at 7.0 × 10^4 cells/well, and they were infected the next day with duplicate 10-fold dilutions of the AAV vector preparations, in the presence of wild-type Ad5 at a multiplicity of 500 IUs per cell. Negative controls were HeRC32 cells without Ad5 and HeLa cells with Ad5, both infected with the AAV vectors. Cells were harvested 24-26 hr post-infection and filtered through Zeta-Probe nylon membranes (Bio-Rad) using a vacuum device.
Membrane filters were hybridized overnight at 65 °C with vector-specific probes generated with the PCR Fluorescein Labeling Mix (Sigma-Aldrich), and detection was performed using the CDP-Star ready-to-use labeling kit (Sigma-Aldrich). Titers were determined by counting the dots (i.e., AAV-infected cells) on the membrane autoradiography, which was done blind by two independent operators. ICA titers were calculated from two independent assays.

GFP Expression by Flow Cytometry
HeLa cells were seeded in 24-well plates at 1.0 × 10^5 cells/well, and they were infected the next day with the AAV vectors at different MOIs in triplicate, in the presence or absence of the Addl324 virus 48 at a multiplicity of 50 infectious particles per cell. Cells were harvested 48 hr post-infection, fixed with 4% paraformaldehyde in PBS, and resuspended in FACS buffer (1× PBS, 0.5% BSA, and 2 mM EDTA). Analysis of GFP expression was performed on 30,000-31,000 cells from each well in an LSRII cell analyzer (Becton Dickinson, Le Pont de Claix, France). The TU titer was calculated using the following formula:

$$\mathrm{TU/mL} = \frac{N_{\mathrm{GFP}}}{V} \times 1{,}000,$$

where N_GFP is the mean number of GFP-positive cells (% GFP-positive cells × total number of cells) per well minus background (% GFP-positive cells × total number of cells in non-infected wells), and V is the volume of vector used to infect the cells, in microliters.

Cell Processing for the Quantification of Intracellular AAV Genomes
HeLa cells were seeded in 12-well plates at 3.5 × 10^5 cells/well. The day after, cells (4-7 × 10^5 per well) were infected or not with the rAAV vectors at a multiplicity of 2,000 or 10,000 VG/cell in duplicate. At 16 hr post-infection, cells were washed with PBS, detached with trypsin-EDTA (Sigma-Aldrich), pelleted at 2,000 × g at 4 °C, then washed again with 500 µL PBS and pelleted at 2,000 × g at 4 °C. Cells were then either left unfractionated (procedure 1) or fractionated into nuclear and cytoplasmic fractions (procedures 2, 3, and 4) as described below, all centrifugation steps being performed at 4 °C.

Procedure 1
Cell pellets were resuspended by repeated pipetting in 800 µL PBS; cells were then pelleted at 200 × g for 5 min and 700 µL of supernatant was discarded.

Procedure 2
Cell pellets were resuspended by repeated pipetting in 400 µL Lysis Buffer (Reagent A100, Chemometec, Villeneuve-Loubet, France) before the addition of 400 µL Stabilizing Buffer (Reagent B, Chemometec). After centrifugation at 200 × g for 5 min, 700 µL of supernatant was collected as the cytoplasmic fraction and the remaining 100 µL was kept as the nuclear fraction.

Procedure 3
This protocol makes use of the NE-PER Nuclear and Cytoplasmic Extraction Reagents (Thermo Scientific). Cells were resuspended in 100 µL ice-cold CER I solution by vortexing and incubated on ice for 10 min; then 5.5 µL CER II solution was added, and the samples were mixed by vortexing and left on ice for 1 min. The nuclear fractions were pelleted at 16,000 × g for 5 min and resuspended in 100 µL PBS, and the supernatants were collected as the cytoplasmic fractions.

Procedure 4
This protocol was described by Gagnon et al. 35 Briefly, cell pellets were resuspended by repeated pipetting in 380 µL ice-cold hypotonic lysis buffer (HLB), containing 10 mM Tris (pH 7.5), 10 mM NaCl, 3 mM MgCl2, 0.3% (v:v) NP-40, and 10% (v:v) glycerol, and incubated for 10 min on ice. After brief vortexing, the nuclear fractions were pelleted at 1,000 × g for 3 min, and the supernatants were collected as the cytoplasmic fractions.
Nuclear pellets were then washed 3 times using 1 mL ice-cold HLB and centrifugation at 200 × g for 2 min, and nuclei were kept in 100 µL HLB after the last wash.

Analysis of Cellular Fractions and Quantification of Intracellular AAV Genomes by qPCR

For whole-cell (procedure 1) and nuclear fraction samples (procedures 2, 3, and 4), DNA was extracted and purified using the Gentra Puregene Blood kit (QIAGEN, Courtaboeuf, France), following the protocol for body fluids with proteinase K digestion, adapted to the different sample volumes. DNA pellets were resuspended in a final volume of 50 µL. DNA concentrations were measured by OD260 and adjusted to 20 ng/µL. For cytoplasmic fraction samples, DNA was extracted using the High Pure Viral Nucleic Acid kit (Roche, Meylan, France). TaqMan qPCR was performed using 5 µL sample DNA, in Premix Ex Taq 2× RT-PCR reagent (TaKaRa Bio) with primers targeted to the following: (1) the SV40 polyA signal for quantification of rAAV genomes, (2) the human albumin gene for quantification of cellular genomic (nuclear) DNA, and (3) the human cytochrome B gene for quantification of cellular mitochondrial (cytoplasmic) DNA (Table S1). Standard curves were obtained using 10^2-10^8 copies of ClaI-linearized pCQAAV1 plasmid. IG titers were calculated from whole-cell (procedure 1) and nuclear fraction samples (procedures 2, 3, and 4) using the following formula:

IG titer (IG/µL) = 2 × 2 × (N_vector / N_Alb) × n / V,

where N_vector is the SV40 copy number per sample, N_Alb is the albumin gene copy number per sample, V is the volume of vector used to infect cells in microliters, and n is the number of cells at the time of infection. The first multiplication factor of 2 is because there are two albumin gene copies per cell genome. The second multiplication factor of 2 is because the AAV genome is single stranded and the plasmid used for the standard curve is double stranded.

Analysis of Cellular Fractions by Western Blot

For western blot analysis, we used a procedure allowing efficient recovery of proteins from all samples, i.e., whole cells or subcellular fractions. To this end, for all samples prepared by procedures 1-4, protease inhibitors (Complete Protease Inhibitor Cocktail, Roche) were added and samples underwent 3 freeze/thaw cycles to disrupt membranes. Purified His-tagged Rep68 protein (500 ng)30 was then spiked into each sample, before the addition of 4 sample volumes of ice-cold acetone and overnight precipitation at −20 °C. Proteins were then pelleted at 16,000 × g for 15 min at 4 °C and resuspended in 150 µL of 1× Laemmli buffer. SDS-PAGE was conducted using 25 µL of each protein sample, which was denatured at 95 °C for 5 min before loading on 10% Tris-glycine gels (Life Technologies). Gels were transferred onto nitrocellulose membranes and probed with antibodies against lamin B, α-tubulin, ATP synthase, or Rep proteins. Membranes were then incubated with HRP-conjugated secondary antibodies followed by chemiluminescence detection with ECL substrate (Thermo Scientific).

Statistical Analysis

Data are presented as mean ± SD. Statistical analyses were performed using PRISM 5 software (GraphPad). Differences between AAV8 control and AAV8ΔVP1 were assessed by Mann-Whitney tests. p values lower than 0.05 were considered statistically significant.
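For illustration, the TU and IG titer formulas above reduce to a few lines of arithmetic. The Python sketch below is ours, not part of the original protocol; the function names and example numbers are invented, and the per-microliter normalization follows the variable definitions given in the text.

```python
# Minimal sketch of the TU and IG titer arithmetic described above.
# Names and example values are illustrative, not from the original paper.

def tu_titer(frac_gfp_infected: float, n_cells_infected: float,
             frac_gfp_mock: float, n_cells_mock: float,
             vector_volume_ul: float) -> float:
    """Transducing-unit titer (TU/uL) from flow-cytometry counts.

    N_GFP = GFP-positive cells in the infected well minus the background
    from non-infected wells; titer = N_GFP / V.
    """
    n_gfp = frac_gfp_infected * n_cells_infected - frac_gfp_mock * n_cells_mock
    return n_gfp / vector_volume_ul


def ig_titer(n_vector: float, n_alb: float,
             n_cells_at_infection: float, vector_volume_ul: float) -> float:
    """Intracellular-genome titer (IG/uL) from qPCR copy numbers.

    Factor 2 (albumin): two albumin gene copies per diploid genome.
    Factor 2 (strandedness): the ssDNA AAV genome is quantified against
    a dsDNA plasmid standard curve.
    """
    genomes_per_cell = 2 * 2 * n_vector / n_alb
    return genomes_per_cell * n_cells_at_infection / vector_volume_ul


# Example: 12% GFP+ of 30,500 cells vs. 0.1% background of 30,000 cells,
# infected with 2 uL of vector preparation.
print(tu_titer(0.12, 30_500, 0.001, 30_000, 2.0))
```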
Dual origin of relapses in retinoic-acid resistant acute promyelocytic leukemia

Retinoic acid (RA) and arsenic target the t(15;17)(q24;q21) PML/RARA driver of acute promyelocytic leukemia (APL), their combination now curing over 95% of patients. We report exome sequencing of 64 matched samples collected from patients at initial diagnosis, during remission, and following relapse after historical combined RA-chemotherapy treatments. A first subgroup presents a high incidence of additional oncogenic mutations disrupting key epigenetic or transcriptional regulators (primarily WT1) or activating MAPK signaling at diagnosis. Relapses retain these cooperating oncogenes and exhibit additional oncogenic alterations and/or mutations impeding therapy response (RARA, NT5C2). The second group primarily exhibits FLT3 activation at diagnosis, which is lost upon relapse together with most other passenger mutations, implying that these relapses derive from ancestral pre-leukemic PML/RARA-expressing cells that survived RA/chemotherapy. Accordingly, clonogenic activity of PML/RARA-immortalized progenitors ex vivo is only transiently affected by RA, but selectively abrogated by arsenic. Our studies stress the role of cooperating oncogenes in direct relapses and suggest that targeting pre-leukemic cells by arsenic contributes to its clinical efficacy.

Most acute promyelocytic leukemias (APLs) are driven by the t(15;17) translocation that yields the PML/RARA fusion. PML/RARA deregulates transcription, blocking myeloid differentiation and enhancing progenitor self-renewal 1,2. PML/RARA also disrupts PML nuclear bodies, blunting p53 signaling, impeding senescence, and promoting clonogenic growth 3,4. Epidemiological studies have supported the view that APL development requires a single rate-limiting step 5. Yet, in PML/RARA transgenic mouse models, leukemia development requires secondary cooperating changes [6][7][8]. WT1, KRAS, NRAS mutations, FLT3 activation, or Myc trisomy, which are common genetic events in many other subsets of acute myeloid leukemia (AML), may be observed in APL patients [9][10][11][12][13][14]. These progression events, which occur late in APL or AML development, sharply accelerate PML/RARA-driven transformation in murine models [15][16][17]. APL is a model for targeted leukemia cure, as two non-chemotherapeutic agents, retinoic acid (RA) and arsenic trioxide (hereafter referred to as arsenic), have extraordinary clinical potency and cooperate to eradicate the disease without the need for DNA-damaging chemotherapy 1,[18][19][20][21][22]. Retinoic acid and arsenic initiate the degradation of PML/RARA by directly binding to its RARA and PML moieties, respectively 18,23. Importantly, arsenic also targets normal PML, the effector of APL cure 24-26, likely explaining its extremely potent anti-leukemic effects as a single agent 1,27. In historical patients whose frontline treatment did not include arsenic, relapse rates were up to 30% (ref. 28). Some situations of RA resistance may be caused by mutations in the RARA moiety of PML/RARA 29, but the natural history of APL development and resistance to the RA/chemotherapy regimen remains imperfectly understood. Here we show that relapses are associated with the presence of potent PML/RARA-cooperating oncogenes at diagnosis, or re-emergence of an ancestral pre-leukemic clone that survived targeted therapy with RA.

Results

Exome sequencing of diagnosis and relapse APL pairs.
To define the pre-existing or acquired mutations associated with RA/chemotherapy resistance, we performed whole-exome sequencing of diagnosis and relapse pairs from 23 patients recruited through the French Swiss Belgian APL group (GTLAP) trials. Complete remission samples were available for 18 patients, allowing identification of somatic variants at diagnosis and relapse; the other 5 diagnosis and relapse pairs were used to identify mutations acquired at relapse (patients' features in Supplementary Table 2). Most of these changes are non-synonymous mutations in genes never implicated in cancer, likely representing passenger mutations acquired before oncogenic activation or early during expansion of PML/RARA clones 12. At relapse, we only observed a median of three additional genetic lesions, very unevenly distributed among patients (range 0-61, Fig. 1a). These data are in line with previous studies and are suggestive of a reliable estimation of the mutation burden in APL. We estimated the cancer cell fraction (CCF), i.e. the proportion of tumor cells harboring each somatic mutation at diagnosis and relapse (Supplementary Figure 3 and Table 2). Only 31 mutations identified in diagnosis samples were considered subclonal, including 4 FLT3 mutations, 3 of which were lost at relapse (patients P2, P16, and P22) and one became clonal at relapse (P31). Thus, in line with other AML studies, FLT3 mutations tend to occur late in sub-clones of diagnosis samples and may or may not be present at relapse.

Recurrent alterations acquired at relapse. As expected, RARA mutations were frequently acquired at relapse (four patients), but we also identified recurrent mutations of NT5C2 specifically acquired at relapse in three patients (Fig. 1b). This gene, implicated in cytarabine or 6-Mercaptopurine responses, was identified as a driver of relapse in childhood acute lymphoblastic leukemias 35. Among the 18 patients with complete remission, diagnosis, and relapse samples (Fig. 2), 10 displayed acquisition of new driver mutations at relapse (RARA, n = 4; WT1, n = 3; NT5C2, n = 2; KRAS, NRAS, TFE3, MED12, CDK12, SALL4, NSD1, KMT2C, MYB, TET2, n = 1). Four patients acquired no additional driver alteration at relapse but displayed one or more potent drivers already present in the last common ancestor (FLT3, n = 2; ETV6, STK11, HYDIN, MAP2K1, H3F3A, NIN, NSD1, DCTN1, KDM6A, n = 1). The four remaining patients had no identified driver alteration in the relapse sample. These relapses may be driven by alterations undetectable by whole-exome sequencing or insufficiently covered in our data.

Clonal evolution models define different relapse patterns. These genetic markers allowed us to reconstruct unambiguous clonal evolution models (Fig. 2). Nine patients presented with a simple linear evolution where all oncogenic or passenger alterations present at diagnosis were similarly found at relapse (Fig. 2a). Similar to chemotherapy-treated AMLs 36,37, four cases had evidence of subclonal evolution at relapse, with loss of at least one mutation present at diagnosis and acquisition of additional relapse-specific changes (Fig. 2b). Critically, 11 of these 13 patients harbored, at both diagnosis and relapse, potent cooperating oncogenic mutations that likely precluded efficient clearance of the diagnosis clone.
In 8 of these 13 cases, relapses were accompanied by acquisition of new oncogenic mutations (WT1, NRAS, NSD1, MED12, KRAS, ETV6) or inactivation of genes directly implicated in RA (RARA) or cytarabine (NT5C2) responses 35. In contrast, in the five remaining APL trios, mutations present at diagnosis, notably FLT3 activation in three of these cases, were no longer detected at relapse (Fig. 2c). While patient 19 presented with a TET2 mutation and patient 16 with MYB and NRAS mutations, the three others did not exhibit any relapse-specific genomic alterations. Diagnosis and relapse samples only shared rare passenger mutations. Yet, breakpoints, assessed at the mRNA level, were identical between diagnosis and relapse. The distinct relapse patterns in these patients are thus suggestive of the existence of a pre-leukemic PML/RARA-expressing clone that survived RA/chemotherapy and reinitiated APL (see below). Analysis of the five diagnosis-relapse only pairs (in which somatic mutations at diagnosis could not be assessed) confirmed the low number of additional mutations at relapse observed in the trios (Fig. 1a). One pair had evidence for a linear evolution (FLT3/MYC activation at both diagnosis and relapse, with mutation of NT5C2 at relapse). The three others did not exhibit new oncogenic changes upon relapse. Collectively, diagnosis WT1, epigenetic, or kinase mutants were generally retained in the relapse APL clones and favored the emergence of additional driver or resistance mutations. In contrast, in cases where FLT3 was the only cooperating mutation, it was often lost at relapse and the disease reinitiated from a pre-leukemic clone.

[Figure 2 caption, panels b-c: b Four patients display branched evolution with many alterations shared by the primary and relapse samples but also specific to one or the other, suggesting that the relapse evolved from a sub-clone of the primary tumor. c Five patients displayed no or very few alterations apart from the PML-RARA fusion in the relapse samples, suggesting that they emerged from pre-leukemic PML/RARA-expressing clones.]

Arsenic, but not retinoic acid, targets self-renewal in APL. That some relapses have very distinct mutational profiles from the diagnosis clone implies that they derive from ancestral pre-leukemic PML/RARA-only expressing cells. However, prolonged RA therapy, by triggering PML/RARA destruction, should have precipitated their loss. Persistence of pre-leukemic AML clones in remissions after chemotherapy was reported 38. Yet, PML/RARA-expressing cells were repeatedly undetectable in remission. We therefore explored ex vivo RARA- or PML/RARA-transformed mouse progenitors in methylcellulose cultures, examining their clonogenic potential upon RA exposure and subsequent drug withdrawal. In RARA-transformed cells, RA definitively abolished clonogenic growth (Fig. 3a). In contrast, in PML/RARA-transformed progenitors, RA only transiently affected growth, as differentiated RA-treated progenitors could reinitiate colony formation upon drug withdrawal 39 (Fig. 3b). Arsenic modestly decreased growth and did not affect self-renewal of RARA-transformed progenitors, but strikingly abolished clonogenic activity of PML/RARA-transformed ones (Fig. 3a, b), further demonstrating that the arsenic response is mediated through PML 40,41. We assessed the relevance of these observations in vivo, by engrafting bone marrow from pre-leukemic PML/RARA transgenics into irradiated syngeneic recipients. When bone marrow chimerism was established, animals were treated for 3 weeks with RA and bone marrow collected.
In keeping with the ex vivo studies, only a small decrease in PML/RARA burden, as detected by quantitative PCR, was observed (Fig. 3c).

Discussion

Our studies of relapsing patients collected over 20 years bear several important conclusions. First, they highlight the genetic simplicity of APLs, since in these stringently explored patients, some APLs did not exhibit any PML/RARA-cooperating events, although we cannot exclude the existence of non-coding mutations or epigenetic changes. Second, we found that many APLs that will undergo relapse are associated with the presence at diagnosis of a high prevalence of WT1 alterations as well as uncommon mutations affecting activators of the MAP kinase pathway and/or other epigenetic regulators (Fig. 2), suggesting that these are responsible for therapy resistance 31. In that respect, MAP kinase activation blunts the p53 response by enhancing HDM2 expression 42, likely opposing the PML/p53-driven senescence program implicated in APL eradication 1,24. The high incidence of WT1 inactivation, often bi-allelic, stresses the importance of this complex pathway 43, which promotes growth, affects epigenetic regulation 44,45, but also directly influences RA signaling 46. These and other strong survival/proliferation signals are expected to favor the subsequent selection of PML/RARA mutations associated with RA resistance or of other oncogenic mutations, as was indeed observed (Fig. 2). Third, we observed recurrent mutations in key regulators of RA signaling: NSD1, an epigenetic regulator of RA response 47, translocated in rare AMLs and mutated in some solid tumors 48, and SALL4, a key RA target in germ cell development 49. MED12 and RARA (altered in some of our APLs) may also be mutated in phyllodes breast cancers 50. Identification of relapse-specific NT5C2 mutations, previously reported in acute lymphoblastic leukemia 35 but never in APL or non-APL AMLs, genetically demonstrates that cytarabine and/or 6-Mercaptopurine have therapeutic efficacy in APL 28,51. A previous study explored 8 trios and subsequently investigated 400 loci (genes from their discovery set and others known from the literature) in a large cohort of 200 APLs. In contrast with the current study, these patients had been treated with very heterogeneous regimens (RA, As and/or chemotherapy), and germline DNA and/or relapse samples were unavailable for many patients. While their conclusions are generally in line with our findings, they did not observe a higher rate of WT1 alterations at relapse, but found a high frequency of ARID1A/B or RUNX1 mutations 11 (see Supplementary Table 2 for comparison). This may reflect the greater heterogeneity of their patients and the comparatively smaller number of patients in our study. The most unexpected observation was that some relapses were completely distinct from the diagnostic APL clone. This was previously demonstrated in chemotherapy-treated core-binding factor leukemias 37,52,53. These APL relapses likely derive from long-lasting pre-leukemic PML/RARA-expressing clones, undetectable in remission bone marrow samples, which resisted prolonged RA therapy 54. Such retinoic-acid resistance of the clonogenic activity of pre-leukemic cells is directly supported by ex vivo studies (Fig. 3), highlighting the uncoupling between differentiation and loss of self-renewal 2,39. PML/RARA opposes senescence 55,56, explaining maintained self-renewal upon RA withdrawal.
In sharp contrast, arsenic abolishes self-renewal of PML/RARA-, but not RARA-driven, pre-leukemic cells, perhaps contributing to its clinical superiority in precluding the occurrence of late relapses 21,[57][58][59]. In that respect, most patients received arsenic at relapse and reached complete remissions, except for patient 5, who presented with a deletion in the normal allele of PML at relapse, predicted to impede therapy response [24][25][26]. Critically, this implies that arsenic therapy can override the survival signals enforced by cooperating oncogenes of type I relapses (Fig. 2a, b). Clinical data (white blood cell count and time to relapse) were not statistically linked to the different types of relapses (Supplementary Table 1), likely reflecting low statistical power within this small population. Collectively, these findings highlight two very distinct modes of APL relapse after the historical RA/chemotherapy regimen. They suggest a novel mechanism explaining the clinical activity of arsenic and have broad implications for our understanding of targeted therapies.

Methods

Patients. We identified patients treated in trials of the French Swiss Belgian APL group (APL93 (2 patients), APL2000 (20 patients), and APL2006 (1 patient)) and who had experienced at least one relapse. All patients had received first-line treatment with ATRA + chemotherapy according to the protocols. Patients did not receive any arsenic as induction or consolidation therapy. We retrospectively collected samples from patients at diagnosis, at complete remission, and at first relapse, except for P22 who was analyzed at second relapse. The ethical review board (CPPRB 2016-04-01) approved the study and informed consent for genomic analyses was obtained from patients, considering absence of opposition after a month as approval. Genomic DNA was retrieved from frozen cells or samples for cytogenetic analyses. DNA was extracted by conventional techniques. Genomic DNA quantity and purity were assessed by a Qubit 2.0 Fluorometer (Invitrogen) and a NanoDrop ND-1000 (Thermo Scientific) as well as by visual inspection of agarose gel electrophoresis. For whole-exome sequencing, native genomic DNA was fragmented with the Covaris S2 system. Sequencing adaptor ligation was performed using the Agilent SureSelect XT (Agilent Technologies) preparation kit. Subsequently, the libraries were captured using Agilent SureSelect XT Human All Exon v.5 probes (Agilent Technologies) and amplified. After quantification and qualification on a Caliper LabChip GX (Caliper Lifescience), the libraries were sequenced on an Illumina HiSeq 1000 platform (Illumina), 2 × 100 cycles, with TruSeq SBS v3 chemistry. Five trios were discarded on the basis of insufficient quality of one of the samples. The relapse of patient 3, which comprised 10% blasts, did not show evidence for the FLT3 indels in any of the 125 reads of the relapse sample, allowing assignment to the second group of relapses. All alterations of driver genes were controlled by stringent visual inspection of the primary sequencing data. Whenever DNA remained available for analysis, several clonal oncogenic alterations were confirmed by Sanger sequencing (n = 5) or allelic discrimination (n = 1). In all cases, the method was adapted to the sensitivity needed to detect the alteration.

Sequence alignment and variant calling. Raw sequence alignment and variant calling were carried out using Illumina CASAVA 1.8 software.
CASAVA performs the alignment of reads to the human reference genome (hg19) using the alignment algorithm ELANDv2, and then calls single-nucleotide variants and short insertions and deletions (indels) based on allele calls and read depth. We used an Integragen in-house pipeline to annotate each variant according to its presence in the 1000 Genomes 60, Exome Variant Server (EVS) 61, or Integragen databases, and according to its functional category (synonymous, missense, nonsense, splice variant, frameshift, or in-frame indels). To detect the common FLT3 internal duplications that may be missed by classical variant calling algorithms, we used bam-readcount (https://github.com/genome/bam-readcount) to determine the number of mutated bases in an extended region around the known duplication site (chr13: 28608150-28608349). Samples with mutated bases across the region were then screened visually using the Integrative Genomics Viewer 62.

Somatic coding variants at diagnosis and relapse. We considered only variants located within the exome capture baits and we applied stringent filters to keep only reliable variants sequenced in ≥10 reads, with ≥5 variant calls and a QPHRED score ≥20 for both SNP detection and genotype calling (≥30 for indels). Somatic status was first defined for each leukemic sample (diagnosis and relapse) individually: we considered a mutation to be somatic if the variant allele fraction (VAF) was ≥0.15 in the tumor and <0.05 in the remission sample. To identify variants that may be missed in one of the two tumor samples of a same patient due to clonality or technical differences, we then recovered variants that were detected as somatic in one tumor and displayed a VAF ≥0.05 or at least two mutated reads in the second tumor sample. We excluded known germline variants with a minor allele frequency >1% in 1000 Genomes, EVS, or the Integragen proprietary database. All mutations in recurrently mutated genes (≥5 overall variants or ≥2 relapse-specific variants) or critical to reconstruct evolutionary trees were validated by stringent visual control using the Integrative Genomics Viewer 62. We used the Palimpsest R package (https://github.com/FunGeST/Palimpsest) to estimate the CCF of each somatic mutation, i.e., the proportion of tumor cells harboring the mutation, taking into account the VAF and local copy-number estimates, as previously described 63,64. A mutation was considered subclonal if the upper boundary of the 95% confidence interval of the CCF was smaller than 0.95.

Copy-number analysis. To identify CNAs in diagnostic and relapse samples, we identified germline single-nucleotide polymorphisms (SNPs) in each sample and we calculated the coverage log ratio (LRR) and B allele frequency (BAF) at each SNP site. Genomic profiles were divided into homogeneous segments by applying the circular binary segmentation algorithm, as implemented in the Bioconductor package DNAcopy 65, to both LRR and BAF values. We then used the Genome Alteration Print (GAP) method 66 to determine the ploidy of each sample, the level of contamination with normal cells, and the allele-specific copy number of each segment. Chromosome aberrations were defined using empirically determined thresholds as follows: gain, copy number ≥ ploidy + 1; loss, copy number ≤ ploidy − 1; high-level amplification, copy number > ploidy + 2; homozygous deletion, copy number = 0. Finally, we considered a segment to have undergone LOH when the copy number of the minor allele was equal to 0.
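For illustration, the somatic-status rules and copy-number thresholds described above can be written as a short Python sketch. All names are ours and the values are the thresholds stated in the text; this is not the authors' pipeline code.

```python
# Illustrative re-implementation of the filtering rules described above.

def somatic_status(vaf_tumor, vaf_remission,
                   vaf_other_tumor=None, mutated_reads_other=0):
    """Classify a variant with the VAF rules from the Methods.

    Somatic: VAF >= 0.15 in the tumor and < 0.05 in the remission sample.
    Rescue: a variant called somatic in one tumor is also retained in the
    paired tumor if its VAF there is >= 0.05 or >= 2 reads support it.
    """
    is_somatic = vaf_tumor >= 0.15 and vaf_remission < 0.05
    rescued_in_pair = is_somatic and vaf_other_tumor is not None and (
        vaf_other_tumor >= 0.05 or mutated_reads_other >= 2)
    return is_somatic, rescued_in_pair


def cna_class(copy_number, ploidy, minor_allele_cn):
    """Classify a segment with the empirical thresholds from the text;
    a segment has undergone LOH when the minor-allele copy number is 0."""
    if copy_number == 0:
        label = "homozygous deletion"
    elif copy_number > ploidy + 2:
        label = "high-level amplification"
    elif copy_number >= ploidy + 1:
        label = "gain"
    elif copy_number <= ploidy - 1:
        label = "loss"
    else:
        label = "copy neutral"
    return label, minor_allele_cn == 0


print(somatic_status(0.42, 0.01, vaf_other_tumor=0.03, mutated_reads_other=3))
print(cna_class(copy_number=1, ploidy=2, minor_allele_cn=0))
```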
All aberrations were validated by visual inspection of the LRR and BAF profiles.

Reconstructing tumor progression trees. Tumor progression trees could be reconstructed for the 18 patients for which complete remission, diagnosis, and relapse samples were available. We first discarded mutations that were identified in only one of the tumor samples but covered by <10 reads in the other, and may thus be undetected for technical reasons. We then classified somatic mutations and CNAs into three categories: events common to the diagnosis and relapse samples (that were thus acquired early in a common ancestor of the two clones), events specific to the diagnosis, and events specific to the relapse that were acquired late after the separation of the two clones. For nine patients, all mutations present at diagnosis were also present at relapse, so the last common ancestor was the diagnosis clone itself. Of note, PML-RARA fusions were common to diagnosis and relapse in all patients and were thus early events occurring in the last common ancestor.

Mouse studies. RARA- and PML/RARA-immortalization of primary hematopoietic progenitors was performed as previously described 39,67. Briefly, lineage-depleted mouse bone marrow hematopoietic cells collected from 5-fluorouracil-treated mice (3 mg) were infected with retroviruses obtained by transient transfection of Plat-E cells with pMSCV-PML-RARA. After spinoculation, we cultured transduced cells in methylcellulose medium (Stem Cell Technologies, M3231) supplemented with 100 ng/ml stem cell factor and 10 ng/ml each of interleukin IL-3, IL-6, and granulocyte/macrophage colony-stimulating factor (Stem Cell Technologies). After a week, we recovered neomycin-selected cells from methylcellulose and replated them at 10,000 cells per well. Treatment with 10^-7 M RA or 3 × 10^-7 M arsenic trioxide was performed by mixing the drug with the methylcellulose media. After a week, colonies were counted and cells were regrown in fresh media without treatment. Differentiation was morphologically assessed on MGG-stained cells. Frozen bone marrow from MRP8-PML/RARA transgenics 68 was injected into lethally irradiated FVB-strain male syngeneic mice of 8 weeks of age (n = 7). After a month and complete hematopoietic restoration, bone marrow was taken by femoral puncture and chimerism was assessed by qPCR comparing the abundance of the PML/RARA transgene and CEBPA, a gene common to transgenic and host cells. Mice were then randomized to be treated or not with slow-release 10 mg RA tablets (Innovative Research of America), and bone marrow was drawn after 3 weeks for a new determination of PML/RARA-positive cells. Animals were handled according to the guidelines of institutional animal care committees, using protocols approved by the "Comité d'Ethique Experimentation Animal Paris-Nord" (no. 121).

Data availability. Exome data from the 64 samples used in this study have been deposited at the European Genome-phenome Archive (EGA), which is hosted at the EBI and the CRG, under accession number EGAS00001002893. The authors declare that all the other data supporting the findings of this study are available within the article and its supplementary information files and from the corresponding author upon reasonable request.
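The event partitioning behind the progression trees (see "Reconstructing tumor progression trees" above) amounts to set operations on the per-sample mutation lists. A minimal Python sketch follows; the event names are toy placeholders, not patient data.

```python
# Sketch of the three-way event classification used for progression trees.
# Each sample is represented as a set of somatic events (mutations + CNAs).

def partition_events(diagnosis: set, relapse: set):
    """Split events into the trunk (last common ancestor),
    diagnosis-specific events, and relapse-specific events."""
    trunk = diagnosis & relapse      # early events, incl. the PML/RARA fusion
    dx_only = diagnosis - relapse    # lost at relapse
    rel_only = relapse - diagnosis   # acquired after the clones diverged
    return trunk, dx_only, rel_only


# Linear evolution (Fig. 2a pattern): nothing is diagnosis-specific,
# so the last common ancestor is the diagnosis clone itself.
trunk, dx_only, rel_only = partition_events(
    {"PML/RARA", "WT1", "passenger_1"},
    {"PML/RARA", "WT1", "passenger_1", "NT5C2"})
assert not dx_only
print(trunk, rel_only)
```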
Hydrogen Sulfide-Releasing Indomethacin-Derivative (ATB-344) Prevents the Development of Oxidative Gastric Mucosal Injuries

Hydrogen sulfide (H2S) emerged recently as an anti-oxidative signaling molecule that contributes to gastrointestinal (GI) mucosal defense and repair. Indomethacin belongs to the class of non-steroidal anti-inflammatory drugs (NSAIDs) and is used as an effective intervention in the treatment of gout- or osteoarthritis-related inflammation. However, its clinical use is strongly limited since indomethacin inhibits gastric mucosal prostaglandin (PG) biosynthesis, predisposing to or even inducing ulcerogenesis. The H2S moiety was shown to decrease the GI toxicity of some NSAIDs. However, the GI safety and anti-oxidative effect of a novel H2S-releasing indomethacin derivative (ATB-344) remain unexplored. Thus, we aimed here to compare the impact of ATB-344 and classic indomethacin on gastric mucosal integrity and their ability to counteract the development of oxidative gastric mucosal injuries. Wistar rats were pretreated intragastrically (i.g.) with vehicle, ATB-344 (7-28 mg/kg i.g.), or indomethacin (5-20 mg/kg i.g.). Next, animals were exposed to microsurgical gastric ischemia-reperfusion (I/R). Gastric damage was assessed micro- and macroscopically. The volatile H2S level was assessed in the gastric mucosa using the modified methylene blue method. Serum and gastric mucosal PGE2 and 8-hydroxyguanosine (8-OHG) concentrations were evaluated by ELISA. Molecular alterations for gastric mucosal barrier-specific targets such as cyclooxygenase (COX)-1, COX-2, heme oxygenase (HMOX)-1, HMOX-2, superoxide dismutase (SOD)-1, SOD-2, hypoxia inducible factor (HIF)-1α, xanthine oxidase (XDH), suppressor of cytokine signaling 3 (SOCS3), CCAAT enhancer binding protein (C/EBP), annexin A1 (ANXA1), interleukin 1 beta (IL-1β), interleukin 1 receptor type I (IL-1R1), interleukin 1 receptor type II (IL-1R2), inducible nitric oxide synthase (iNOS), tumor necrosis factor receptor 2 (TNFR2), or the H2S-producing enzymes, cystathionine γ-lyase (CTH), cystathionine β-synthase (CBS), and 3-mercaptopyruvate sulfur transferase (MPST), were assessed at the mRNA level by real-time PCR. ATB-344 (7 mg/kg i.g.) reduced the area of gastric I/R injuries in contrast to an equimolar dose of indomethacin. ATB-344 increased gastric H2S production, did not affect gastric mucosal PGE2 content, prevented RNA oxidation, and maintained or enhanced the expression of oxidation-sensitive HMOX-1 and SOD-2 in line with decreased IL-1β and XDH. We conclude that, due to its H2S-releasing ability, i.g. treatment with ATB-344 not only exerts dose-dependent GI safety but even enhances the capacity of the gastric mucosal barrier to counteract acute oxidative injury development when applied at a low dose of 7 mg/kg, in contrast to classic indomethacin. ATB-344 (7 mg/kg) inhibited COX activity on a systemic level but did not affect cytoprotective PGE2 content in the gastric mucosa and, as a result, evoked gastroprotection against oxidative damage.
Introduction

Indomethacin (indo) is a well-known non-steroidal anti-inflammatory drug (NSAID), used as an antipyretic, anti-inflammatory, and analgesic pharmacological intervention [1]. Indo is prescribed to relieve pain and inflammation related to osteoarthritis, rheumatoid and gouty arthritis, ankylosing spondylitis, or an acutely painful shoulder [2]. However, indo is considered to have the greatest ability to cause gastric injury compared to other NSAIDs [3,4]. Indo causes gastric mucosal damage by inhibiting the activity of cyclooxygenase 1 (COX-1), which produces gastroprotective prostaglandin E2 (PGE2), decreasing bicarbonate and mucus secretion, stimulating gastric acid secretion, increasing reactive oxygen species (ROS) generation, and decreasing the level of the physiological anti-oxidative molecular response [3]. NSAIDs were reported to impair gastric mucosal biosynthesis of cytoprotective hydrogen sulfide (H2S). H2S, next to nitric oxide (NO) and carbon monoxide (CO), is an endogenous gaseous mediator with anti-inflammatory, anti-oxidative, and cytoprotective properties [5,6]. H2S is biosynthesized mainly by three enzymes, cystathionine γ-lyase (CTH), cystathionine β-synthase (CBS), and 3-mercaptopyruvate sulfur transferase (MPST), of which CBS and CTH are considered to be cytosolic enzymes, while MPST may be localized in both mitochondria and the cytosol [7,8]. H2S plays an important role in the maintenance of the integrity of the gastric mucosa [9,10]. Importantly, oxidative stress and gastric mucosal injury are evoked by ischemia-reperfusion (I/R), which is characterized by a sudden fall in blood supply to tissues and organs, followed by immediate restoration of blood flow and reoxygenation [11].

Under clinical conditions, I/R damage of the stomach occurs as a result of bleeding from a peptic ulcer, rupture of a vessel, surgery, ischemic disease of the GI tract, or hemorrhagic shock [12]. The mechanism of I/R damage is complex and associated with many factors, including inflammation, excessive production of ROS in the mucosa, leukocyte infiltration, and reduced NO release. However, oxidative stress seems to be predominant [13]. ROS excess causes lipid peroxidation of cell membranes and ribonucleic acid (RNA) or deoxyribonucleic acid (DNA) oxidation, and contributes to the production of toxic products such as malondialdehyde (MDA) [14,15]. On the other hand, H2S exhibits anti-oxidative effects due to the inhibition of ROS production, modulation of glutathione (GSH) activity, activation of the expression of antioxidant enzymes (AOE) [16,17], and enhancement of mitochondrial integrity [11]. Indeed, we reported recently that the mitochondria-targeted H2S donor AP39 protected the gastric mucosa against gastric I/R damage [18].

To counteract the gastrointestinal (GI) toxicity of NSAIDs, H2S-releasing derivatives of these drugs were developed. Some of them were shown in clinical and/or preclinical studies to be GI-safe compared to the parent drugs [19][20][21]. Additionally, ATB-346, an H2S-releasing naproxen derivative (Otenaproxesul; Antibe Therapeutics Inc., Toronto, ON, Canada), was shown to exert chemo-preventive effects against colorectal cancer [22]. We reported that the H2S-releasing ketoprofen derivative (ATB-352), unlike classic ketoprofen, is GI-safe and does not significantly affect the intestinal microbiome profile [23].

Thus, we aimed to investigate here for the first time the impact of the new hybrid NSAID, H2S-releasing ATB-344 vs.
classic indomethacin, on gastric mucosal integrity and the capacity of gastric mucosal defense to cope with acute oxidative injury induced by I/R. We focused on the pharmacological impact of these drugs on redox balance and gastric mucosal integrity based on macro- and microscopic evaluation and the assessment of the molecular pattern of gastric mucosal barrier components.

Experimental Design, Chemicals and Drugs

Male Wistar rats (n = 40) with an average weight of 220-300 g were deprived of food for 12-16 h with free access to tap water before the treatments and exposure to I/R. Regular compounds and chemicals were purchased from Sigma Aldrich (Schnelldorf, Germany) unless otherwise stated. The principles of the 3 Rs (Replacement, Reduction, and Refinement) were incorporated into the research design. Differences between male and female rats occur, but they are not clearly evidenced in terms of the integrity of the gastric mucosal barrier and its resistance to NSAIDs [24]. Therefore, to reduce the number of animals, we included only male rats in this study.

I/R-Induced Gastric Lesions, Macro- and Microscopic Assessment of Gastric Damage, Tissue Collection and Storage

I/R gastric lesions were induced 30 min after the treatments, as described previously [10,25]. Briefly, under isoflurane anesthesia, the abdomen was opened, the celiac artery was clamped for 30 min (hypoxia), and then the clamp was removed (reperfusion). After 3 h of reperfusion, rats were sacrificed by i.p. administration of a lethal dose of pentobarbital (Biowet, Pulawy, Poland), and the gastric damage was measured planimetrically (mm²). Gastric mucosa from each rat was collected, immediately frozen in liquid nitrogen, and stored at −80 °C for further analysis. For microscopic analysis, the gastric tissue sections were excised and fixed in 10% buffered formalin, pH = 7.4. Samples were stained with haematoxylin/eosin (H&E) as described previously [26]. Digital documentation of histological slides was obtained using a light microscope (AxioVert A1, Carl Zeiss, Oberkochen, Germany) and the ZEN Pro 2.3 software (Carl Zeiss, Oberkochen, Germany) [27].

Assessment of H2S Release in Gastric Mucosa by Modified Zinc Trapping Assay and Methylene Blue Method

H2S release in the gastric mucosa was determined by the modified methylene blue method, allowing for the assessment of the level of volatile sulfide released from the gastric mucosa, as previously described [10,23,[28][29][30][31]. Briefly, gastric mucosa was homogenized in an ice-cold 50 mM potassium phosphate buffer, pH = 8.0. Then, L-cysteine (10 mM) and pyridoxal-5′-phosphate (P5P; 2 mM) were added to the homogenate, and the vials, including inner tubes with zinc acetate (to avoid direct contact with the tissue and reaction mixture), were then incubated in a shaking water bath (37 °C) for 90 min. Next, trichloroacetic acid (TCA; 50%; 0.5 mL) was injected into the reaction mixture through a septum plug. The mixture was left to stand for 60 min at 50 °C to allow H2S trapping by zinc acetate. N,N-Dimethyl-p-phenylenediamine sulfate (20 mM; 50 µL) in 7.2 M HCl and FeCl3 (30 mM; 50 µL) in 1.2 M HCl were added to the internal tubes once separated out of the reaction mixture flask. After 20 min, absorbance at 670 nm was measured with a microplate reader (Tecan Sunrise, Mannedorf, Switzerland). The calibration curve of the absorbance as a function of H2S concentration was obtained using NaHS solutions of various concentrations.
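The final step of the assay, reading a sample's H2S level off the NaHS calibration curve, is a simple linear regression. A minimal Python sketch is given below; the standard concentrations and absorbance values are invented placeholders, not data from this study.

```python
# Sketch of converting A670 readings to sulfide concentration via a
# NaHS standard curve, as described above. Values are hypothetical.
import numpy as np

# Absorbance at 670 nm for NaHS standards of known concentration (uM).
std_conc = np.array([0.0, 12.5, 25.0, 50.0, 100.0])  # placeholder standards
std_abs = np.array([0.02, 0.09, 0.17, 0.33, 0.64])   # placeholder readings

# Least-squares linear fit of the calibration curve: A = slope * C + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)

def h2s_concentration(a670: float) -> float:
    """Interpolate sample H2S concentration (uM) from its A670 reading."""
    return (a670 - intercept) / slope

print(round(h2s_concentration(0.25), 1))
```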
Determination of PGE2 Concentration in Gastric Mucosa and Serum by ELISA

PGE2 concentrations in gastric mucosa and serum were determined according to the manufacturer's protocol (EHPGE2, PGE2 ELISA Kit, Invitrogen, Thermo Fisher Scientific, Vilnius, Lithuania) and as described in detail elsewhere [27]. Results were expressed in pg/mL of gastric tissue homogenate or serum.

Statistical Analysis

Results were analyzed using GraphPad Prism 9.0 software (GraphPad Software Inc., La Jolla, CA, USA). Statistical analysis was conducted using Student's t-test, or ANOVA with Dunnett's multiple comparison test if more than two experimental groups were compared. The Mann-Whitney test was used for the data shown in Figure 5D. The size of each experimental group was n = 5, and p < 0.05 was considered statistically significant.

Results

3.1. Dose-Dependent Impact of H2S-Releasing ATB-344 and Indomethacin on Gastric Mucosal Integrity and H2S Production in Gastric Mucosa under Oxidative Stress

Figure 1A shows the mean lesion area of I/R-induced gastric lesions in rats pretreated with vehicle, ATB-344 (7-28 mg/kg i.g.), or indo (5-20 mg/kg i.g.). ATB-344 applied in a dose of 7 mg/kg, but not 14 or 28 mg/kg, significantly reduced the I/R-induced gastric lesion area compared with vehicle (p < 0.05). Indo (5 mg/kg i.g.) significantly increased the I/R-damage area compared with the equimolar dose of ATB-344 (p < 0.05). Therefore, ATB-344 (7 mg/kg i.g.) and indo (5 mg/kg i.g.) were further evaluated on a molecular level. Figure 1B shows the macroscopic appearance of representative gastric mucosa, exposed or not (intact) to I/R. In rats pretreated with ATB-344 (7 mg/kg), but not with vehicle or indo (5 mg/kg), gastric erosions were limited to a few hemorrhagic dot-like lesions. Figure 1C shows the microscopic appearance of gastric mucosa exposed to I/R in rats pretreated with vehicle, ATB-344 (7 mg/kg), or indo (5 mg/kg). I/R caused disruption of the mucus layer, deep epithelial damage with leukocyte infiltration, and bleeding. In ATB-344-pretreated gastric mucosa, the I/R injury was superficial and without bleeding, whereas I/R-exposed gastric mucosa pretreated with indo was microscopically similar to vehicle.

Figure 2A shows that the level of released volatile H2S was significantly increased in gastric mucosa treated with ATB-344 (7 and 28 mg/kg i.g.) compared to vehicle (p < 0.05). Indo (5 mg/kg i.g.) significantly decreased H2S release compared with the equimolar dose of ATB-344 (p < 0.05) but not with vehicle. We reported previously that there is no significant difference in H2S release from healthy (intact) gastric mucosa vs. gastric mucosa exposed to 3.5 h of I/R [10]. Figure 2B demonstrates that ATB-344 administered in a dose of 7 mg/kg (i.g.) significantly decreased gastric mucosal mRNA expression of CBS, but not CTH or MPST, compared with vehicle (p < 0.05). We reported previously that CTH expression was elevated, while CBS and MPST expression were downregulated, in gastric mucosa exposed to 3.5 h of I/R vs.
healthy (intact) gastric mucosa [10].

Impact of H2S-Releasing ATB-344 and Indomethacin on Gastric Mucosal and Serum PGE2 Concentration and Gastric Mucosal mRNA Expression of COX-1 and COX-2

Figure 3A shows that ATB-344 applied in doses of 14 and 28 mg/kg i.g. and indomethacin (5 mg/kg i.g.) reduced PGE2 concentration in gastric mucosa versus vehicle (p < 0.05). ATB-344 applied in a dose of 7 mg/kg i.g. significantly reduced PGE2 concentration in serum but not in gastric mucosa compared to vehicle (p < 0.05) (Figure 3A,B). Indo (5 mg/kg i.g.) significantly decreased gastric mucosal PGE2 concentration compared with an equimolar dose of ATB-344 (p < 0.05) (Figure 3A). We showed previously that gastric mucosal levels of PGE2 were decreased in gastric mucosa exposed to 3.5 h of I/R vs. healthy (intact) gastric mucosa [32]. Indo (5 mg/kg i.g.) significantly reduced serum concentrations of PGE2 compared with vehicle (p < 0.05) (Figure 3B). Exposure to I/R significantly elevated gastric mucosal COX-2 but not COX-1 mRNA expression vs. intact (p < 0.05) (Figure 4A,B). Pretreatment with ATB-344 and indo did not alter these markers compared to the vehicle.

[Figure 4 caption excerpt: Results are mean ± SEM of five values per group. Asterisk (*) indicates significant changes compared to intact (p < 0.05); cross (+) indicates significant changes compared to vehicle (p < 0.05).]
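The group comparisons reported in these Results (one-way ANOVA followed by Dunnett's many-to-one test against the vehicle group, per the Statistical Analysis section) can be sketched in a few lines of Python. The lesion-area values below are invented placeholders, not data from the paper; scipy.stats.dunnett requires SciPy 1.11 or later.

```python
# Sketch of the ANOVA + Dunnett's test workflow used for group comparisons.
# All numbers are hypothetical placeholders (lesion areas in mm^2, n = 5).
from scipy import stats

vehicle = [21.4, 19.8, 23.1, 20.5, 22.0]
atb344_7mg = [9.1, 11.3, 8.7, 10.2, 9.8]
indo_5mg = [28.4, 31.0, 27.2, 29.9, 30.3]

# Omnibus one-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(vehicle, atb344_7mg, indo_5mg)

# Dunnett's multiple-comparison test of each treatment vs. the vehicle control.
res = stats.dunnett(atb344_7mg, indo_5mg, control=vehicle)

print(p_anova, res.pvalue)  # p < 0.05 taken as significant, as in the paper
```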
Discussion

We demonstrated here for the first time that H2S-releasing ATB-344, a hybrid derivative of indo (which belongs to the NSAIDs), dose-dependently enhanced the gastric mucosal ability to cope with oxidative injuries [38]. We observed that i.g. pretreatment with ATB-344 (7 mg/kg), but not an equimolar dose of classic indo, reduced the gastric damage induced by the exposure to I/R. This observation is in complete opposition to the widely observed gastrotoxicity of classic indo and other NSAIDs in clinical pharmacology [39,40]. On the other hand, H2S signaling is known to contribute to the maintenance of gastric mucosal integrity, regeneration, and oxidative balance [13,18,41]. H2S, as an endogenous molecule produced by the enzymatic activity of CTH, CBS, or MPST, is the main regulator of post-translational S-sulfhydration (persulfidation) of proteins, which has been reported, e.g., in aging, Alzheimer's disease, or the cardiovascular system [33-37,42]. Importantly, due to the development of new methodological approaches, sulfide signaling and its anti-oxidative capacity were shown to involve the generation of reactive sulfur species and persulfide or polysulfide formation, which could also be considered an H2S storage system [34,[43][44][45][46]. We have implemented here the well-known zinc trapping assay, but with a modified protocol allowing us to assess the level of volatile sulfide released from gastric mucosa [10,28,30,31,47]. Polysulfides are not generally volatile but are a direct product of sulfide oxidation and are very unstable in a reducing environment. Therefore, we could not exclude them as possible mediators of the H2S-triggered activity of ATB-344 in the gastric mucosa. In fact, our data revealed that the gastroprotective dose of ATB-344 (7 mg/kg i.g.) enhanced the levels of H2S released in gastric mucosa (by approx. 50%) and decreased PGE2 content in serum but not in gastric mucosa. In contrast, the equimolar dose of indomethacin (5 mg/kg i.g.) did not elevate gastric mucosal levels of H2S and decreased PGE2 content in both serum and gastric mucosa; as a result, no gastroprotection was observed. Of note, PGE2 is known to contribute to the maintenance of gastric mucosal integrity, e.g., by increasing bicarbonate and mucus secretion and by modulating gastric acid secretion [3].

We implemented here the starting dose of 5 mg/kg i.g. for indomethacin, a model dose in gastrointestinal pharmacology which has been shown previously to reverse the beneficial effects of possibly gastroprotective compounds when applied i.p. [32]. Additionally, 30 mg/kg i.g. of indomethacin is known to induce gastric mucosal damage itself, and we aimed to avoid this effect [38]. Therefore, in our study, we implemented for this NSAID a dose range of 5-20 mg/kg i.g. Interestingly, we observed that higher doses of ATB-344 (14 and 28 mg/kg i.g.) decreased serum and gastric mucosal levels of PGE2. A further increase in gastric mucosal H2S level due to the administration of ATB-344 (28 mg/kg i.g.)
did not counteract the indomethacin-triggered fall in gastric mucosal PGE2 content. The COX-inhibiting effect exceeded the H2S-mediated molecular benefits and led to the loss of gastroprotective capacity at higher doses of ATB-344. Therefore, we conclude that 7 mg/kg of ATB-344 is the maximal gastroprotective dose that, due to its H2S-releasing properties, did not alter gastric mucosal PGE2 content but still maintained its ability to inhibit COX on a systemic level. At this dose, the H2S-releasing moiety counteracted the pathogenic inhibition of COX in gastric mucosa induced by indomethacin, which evoked the gastroprotection of ATB-344 against I/R-induced gastric mucosal injury.

Our previous study revealed that H2S release due to the activity of the enzymes involved in endogenous H2S biosynthesis (CTH, CBS, or MPST) was not affected in gastric mucosa exposed to 3.5 h of I/R [10]. At the same time, gastric mucosal expression of CTH was upregulated, while that of CBS or MPST decreased. Elevated bioavailability of H2S due to i.g. pretreatment with NaHS (an H2S-releasing salt) attenuated I/R-damage development [10]. In this study, we observed that ATB-344-triggered H2S release did not affect the expression of CTH or MPST, similarly to classic indomethacin. However, gastric mucosal expression of CBS was downregulated by ATB-344. In fact, overexpression of CBS has been suggested to contribute to the pathogenesis of various pathologies [48,49]. This is in line with the study of Scheid et al., in which inhaled H2S prevented ischemia-reperfusion injury of neuronal tissue but also downregulated CBS expression [50]. We also previously observed the downregulation of gastrointestinal expression of CBS by the H2S-delivering derivative of ketoprofen (ATB-352), in parallel with elevated gastric mucosal H2S release, in opposition to the classic form of this NSAID [23]. Moreover, it was shown that protein expression of CTH, CBS, and MPST in gastric mucosa exposed to oxidative stress was not altered by ATB-346 (an H2S-releasing derivative of naproxen) that carries the same H2S-releasing moiety as ATB-344 [51]. Taken together, we conclude that the gastroprotective effect of ATB-344 does not depend on the modulation of enzymatic H2S production but is rather due to the increased level of H2S released from the appropriate chemical moiety (based on 4-hydroxythiobenzamide) of this derivative of indomethacin.

The H2S-releasing group combined with naproxen or ketoprofen (ATB-346 and ATB-352, respectively) was reported to enhance the GI safety of these drugs [21,23]. However, the implementation of this platform for indo remained unexplored in terms of its impact on gastric mucosal integrity under oxidative conditions. In fact, despite the very effective anti-inflammatory, anti-pyretic, or analgesic activity of NSAIDs, clinical use of these interventions is limited due to adverse effects on the gastric mucosa, especially in individuals with aging-related disrupted GI integrity who are predisposed to oxidative stress [52].

We evaluated here the pharmacological effect of ATB-344 vs. indo (applied i.g.)
on gastric mucosal integrity and defense against oxidative I/R injury. We implemented the experimental model of I/R-induced gastric damage that is based on 30 min of ischemia followed by 3 h of reperfusion. This scheme was previously shown to be optimal for testing possible therapeutic options [18]. The time point was selected based on previous studies investigating the impact of indomethacin on gastric I/R damage and, most importantly, is supported by our recent study on the impact of NaHS on the course of I/R-induced gastric mucosal damage in a time-dependent manner [10,38]. Decreased blood supply to the gastric tissues causes cell dysfunction and, during prolonged ischemia, leads to cell death, e.g., as a result of bleeding from a peptic ulcer or hemorrhagic shock [53]. Paradoxically, after reperfusion, pre-existing damage deepens. Excessive production of ROS is considered a critical factor in the development of reperfusion injury [54]. In ischemic tissues, the accumulation of adenosine and hypoxanthine, a substrate for xanthine oxidase (XDH), is well recognized as the major source of cellular ROS, predominantly raised by reperfusion [54]. Indeed, during reperfusion, hypoxanthine is metabolized to xanthine, forming ROS [55]. In animal studies of I/R injury, allopurinol (an XDH inhibitor) has been shown to reduce the damage, improve the functional response after I/R injury, and decrease the scale of oxidative stress [56,57].

We observed in this study that ATB-344-mediated gastroprotection was accompanied by changes in the levels of crucial molecular targets reflecting the status of gastric mucosal integrity. We showed that H2S-releasing ATB-344 (7 mg/kg i.g.), but not indo (5 mg/kg i.g.), inhibited the I/R-induced upregulation of gastric mucosal XDH expression and downregulation of antioxidative SOD-2. SOD activity is a key protective cellular response against ROS [58,59]. SOD-2 is the mitochondrial isoform of this antioxidative enzyme that efficiently converts superoxide to the less reactive hydrogen peroxide (H2O2) and scavenges superoxide radicals [60,61]. A deficiency of SOD-2 in the mitochondria may increase the production of ROS and interfere with mitochondrial metabolism and cellular redox balance [62].

The cellular response to hypoxia involves alterations in the expression profiles of various genes, including HIF [63]. The stability and activity of HIF-1α are regulated by a plethora of post-translational modifications, including hydroxylation, acetylation, and phosphorylation [64]. Numerous animal and in vitro studies indicated that activation of the HIF axis might protect against I/R damage, but this effect is time-dependent [41,42]. It has been suggested that controllable enhancement of HIF-1α expression could be used as a therapeutic strategy to treat or prevent ischemic damage [65]. In our study, we confirmed the previously observed downregulation of HIF-1α expression in gastric mucosa exposed to I/R. Indo, in contrast to ATB-344, enhanced this decline. Finally, our data revealed that ATB-344 (7 mg/kg i.g.), in contrast to indomethacin (5 mg/kg i.g.), decreased gastric mucosal RNA oxidation induced by exposure to ischemia/reperfusion. This confirms the antioxidative properties of ATB-344. Therefore, we conclude that H2S released from ATB-344 evoked gastroprotection followed by an enhanced defensive capacity of the gastric mucosa that prevented I/R-induced hypoxic and oxidative alterations, reflected by the expression of SOD-1, SOD-2, XDH, and HIF-1α and by decreased levels of RNA oxidation.
Gastric mucosal I/R injury triggers an inflammatory response reflected by the expression of inflammatory genes such as iNOS, COX-2, and IL-1. Additionally, COX inhibition is the pharmacological target of indo and other NSAIDs [66]. Gemici et al. found that gastric I/R increased neutrophil infiltration and iNOS protein expression [67]. Next to ROS, reactive nitrogen species (RNS) are also involved in the development of gastric I/R injury [68]. Moreover, NO can react with ROS to form toxic substances such as peroxynitrite and singlet oxygen [68,69]. Oxidative stress itself upregulates COX-2 and iNOS expression [55,70]. Arachidonic acid is a substrate for inflammation-sensitive prostaglandins via the enzymatic activity of COX and free oxygen radicals [55,70,71]. In this study, we showed that gastric I/R increased the gastric mucosal expression of COX-2, IL-1β, IL-1R1, IL-1R2, TNFR2, and iNOS. Both ATB-344 and indo reduced the expression of inflammation-sensitive markers, but only ATB-344 decreased the iNOS mRNA fold change in parallel with its gastroprotective effect. Indeed, iNOS inhibitors are considered useful agents to ameliorate the damage and dysfunction of various organs caused by I/R [71,72]. Interestingly, I/R injury activated the upregulation of anti-inflammatory SOCS3 and ANXA1 in a pathology-counteracting manner. H2S-releasing ATB-344, but not indo, maintained the elevated expression of SOCS3. We assume that the anti-inflammatory activity of both compounds was similar, but ATB-344 additionally reduced the expression of iNOS, a possible source of RNS, and enhanced anti-inflammatory SOCS3.

Heat shock proteins (HSPs), such as HMOX-1, are molecular chaperones produced in response to oxidative stress, including I/R [73,74]. HMOX-1 is considered a cytoprotective pathway that is activated by harmful factors, such as I/R, and plays a protective role in the cellular defensive response to ROS-induced injury [75]. Importantly, H2S gastroprotection was shown to be dependent on CO bioavailability [76]. Our previously published data revealed that the GI safety of ATB-346 (an H2S-releasing naproxen derivative) or ATB-352 (an H2S-releasing ketoprofen derivative) was accompanied by enhanced mRNA and/or protein expression of HMOX-1 [23,51]. We reported here that, in contrast to classic indo, H2S-releasing ATB-344 maintained the I/R-induced overexpression of HMOX-1, which was accompanied by decreased gastric I/R damage. We are aware that our observation is limited to the evaluation of gastric mucosal mRNA expression of HMOX-1/2. However, based on this and previously published data, we conclude that HMOX-1 activity could be the crucial mechanistic target determining the beneficial effects and GI safety of H2S-releasing NSAIDs.

In summary, we showed that the H2S-releasing ability evoked the beneficial effects and GI safety of ATB-344. Precisely, ATB-344 applied i.g.
In summary, we showed that the H2S-releasing ability evoked the beneficial effects and GI safety of ATB-344. Precisely, ATB-344 applied i.g. in a low dose of 7 mg/kg enhanced gastric mucosal defense against oxidative injury induced by exposure to gastric I/R. This effect was not observed for higher doses of ATB-344 (14 and 28 mg/kg) or for any of the equimolar doses of classic indo (5, 10, and 20 mg/kg). We assume that the effects of ATB-344 were due to H2S delivery rather than modulation of endogenous H2S production. The H2S-releasing moiety counteracted the pathogenic inhibition of COX activity and the fall in cytoprotective PGE2 generation in gastric mucosa induced by classic indomethacin and by higher doses of ATB-344. This phenomenon evoked the dose-dependent gastroprotection of ATB-344 against I/R-induced gastric mucosal injury and, importantly, maintained its capacity to inhibit COX at the systemic level. We also conclude that the predominant anti-inflammatory and antioxidative capacity of ATB-344 to cope with oxidative GI lesions and gastric mucosal RNA oxidation could involve the maintenance of HMOX-1 and mitochondrial SOD-2 mRNA expression. These effects are summarized in Figure 9. Taken together, we confirmed that H2S-releasing moieties conjugated with NSAIDs or other drugs remain promising targets for GI pharmacology and for the development of antioxidative therapeutic alternatives.

Figure 1. The area of gastric mucosal lesions induced by exposure to 3.5 h of I/R in rats pretreated with vehicle, ATB-344 (7, 14, and 28 mg/kg i.g.), or indomethacin (5, 10, and 20 mg/kg i.g.) (A). Intact refers to healthy gastric mucosa without exposure to I/R. Results are mean ± SEM of 4-5 rats per group. An asterisk (*) indicates a significant change compared to intact (p < 0.05). A cross (+) indicates a significant change compared to vehicle (p < 0.05). A hash (#) indicates a significant change between ATB-344 and indo (p < 0.05). Macroscopic (B) and microscopic (C) appearance of representative gastric mucosa of rats exposed or not (intact) to I/R and pretreated with vehicle, ATB-344 (7 mg/kg i.g.), or indo (5 mg/kg i.g.). Yellow arrows point out I/R-induced epithelial erosions. Histological slides were stained with hematoxylin and eosin (H/E).

Figure 9. Schematic comparative overview of the main molecular effects of H2S-releasing ATB-344 and classic indomethacin during the development of oxidative gastric mucosal injuries.

All procedures performed in the study were approved by the I Local Ethical Committee for Care and Use of Experimental Animals, held by the Faculty of Pharmacy, Jagiellonian University Medical College in Cracow (Decision No. 311/2019 of 17 July 2019 and Decision No. 661/2022 of 27 September 2022).
II. BILE CHOLESTEROL FLUCTUATIONS DUE TO DIET FACTORS, BILE SALT, LIVER INJURY AND HEMOLYSIS

(Received for publication, December 28, 1933)

The blood and body fluids are so crowded with "chemical messengers" and vitamines that to some readers it appears a miracle that these substances ever reach their destination. Cholesterol has been looked upon as an innocent bystander, inert and going along with the crowd. Some of the recent work with hormones and vitamines would seem to focus attention on cholesterol as a close relative of other sterols and perhaps of ergosterol and the group of fat soluble vitamines. Further work with hormones (estrin and the male hormone) indicates a chemical constitution relating these "messengers" to the sterols. The same four ring nucleus is common to all these substances (18). Therefore instead of an innocent bystander cholesterol may prove to be a messenger of importance and authority related to many vital body processes.

It can be seen from the tables below that cholesterol is influenced profoundly by bile salt metabolism and circulation. Bile salt feeding together with cholesterol may give maximal values for cholesterol in the bile. All evidence (15) points to the liver cell as the only source of bile salts but this does not necessarily mean that cholesterol is produced in the liver cell. However, it would be difficult indeed to prove that the liver is not concerned with cholesterol metabolism and its production in the body.

It is significant that the blood plasma of the dog contains 10 to 20 times as much cholesterol per 100 cc. as does the bile. Cholesterol in blood plasma averages 150-300 mg. per 100 cc. in contrast to bile which averages 10-15 mg. per 24 hour output in a total volume of 80-130 cc. This suggests a liver threshold of elimination but if such a threshold does exist it differs conspicuously from the renal threshold as it is understood today. It is possible to raise the blood cholesterol without a large increase in bile cholesterol and also to increase the cholesterol elimination in the bile without a change in blood cholesterol concentration. Cholesterol esters make up a large part of the blood cholesterol but the esters do not appear in the bile under the conditions of these experiments. The normal liver cell if it has a threshold for free cholesterol will not pass on into the bile any cholesterol esters. This question is receiving further study.

It may be argued that cholesterol as it appears in the bile is dependent upon the circulation of the bile salts. This may be in part a physical relationship as bile salts increase the solubility of cholesterol in the whole bile. It is also possible that the bile salts exert an influence upon the liver cell, modifying its physiological state and permitting the passage of cholesterol.
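A quick arithmetical check of the ten- to twenty-fold figure quoted above (my illustration, not the authors'): putting the daily bile output on the same per-100-cc. basis as the plasma values gives

```latex
\[
\text{bile cholesterol} \approx \frac{10\text{--}15\ \text{mg}}{80\text{--}130\ \text{cc}}
\approx 8\text{--}19\ \text{mg per }100\ \text{cc},
\qquad
\frac{150\text{--}300\ \text{mg per }100\ \text{cc (plasma)}}{8\text{--}19\ \text{mg per }100\ \text{cc (bile)}}
\approx 10\text{--}20\times \text{ for typical values.}
\]
```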
It is generally accepted that the bile salts modify definite body functions in the gastro-intestinal tract in the external sector of their cycle in the body. We believe that the internal sector of the bile salt cycle may be even more important for the body, and that the hepatic cells and other body cells may be modified in their activity by the presence of bile salts. Interesting types of intoxication which develop in the fistula dog after long periods of bile salt deprivation point in this direction.

There is no dearth of experimental observations dealing with bile cholesterol in humans and animals. McMaster (9) has reviewed the earlier work and points out that much of the recorded data was unsatisfactory because of the type of bile fistula used. He showed that cholesterol in the bile can be increased by diets rich in cholesterol. The bile fistula introduced by Rous and McMaster (10) enables the investigator to collect accurately the 24-hour sample of sterile bile and marked a distinct advance in this field of study. Methods for bile cholesterol analysis have been unsatisfactory and inaccurate until quite recently and the recorded data are therefore inaccurate and subject to review. In the method used by McMaster (9) the bile pigments introduce large errors and his base line for bile cholesterol output in the dog runs about double the amount recorded in the tables below. Biliary cholesterol has been studied by D'Amato (2), Stepp (17), Dostal and Andrews (3), Fox (5), Salomon and Silva (16), Gardner and Fox (6), Elman and Taussig (4), McClure (8) and many others. Some of these papers deal with human, others with animal bile. The objections noted above apply to these observations. The greatest diversity of opinion on all phases of the subject is revealed by these papers.

Methods

The methods used in the quantitative determination of bile cholesterol are described above (Paper I). The bile fistula dogs were prepared according to the method of Rous and McMaster (10). Meticulous attention to the details of aseptic technique is needful in the care and daily bile collection in these animals (12). Their general supervision requires the bulk of the time of one technician. This type of fistula is made with excision of the gall bladder and insertion of a cannula in the common bile duct so that the bile is collected in a sterile bag. A comfortable canvas binder retains the bag and enables the dog to live a quiet and comfortable life for many months. It is highly important that these dogs remain in excellent clinical condition with little or no loss of weight and freedom from gastro-intestinal disturbances. Little significance can be attached to observations on dogs showing clinical abnormalities, which are usually recorded in the published experiments from many laboratories.

The standard or control diet consists of a bread prepared in the laboratory and much used in the anemia colony. The bread contains wheat flour, starch, bran, sugar, cod liver oil, canned tomatoes, canned salmon, yeast and a salt mixture. Its preparation has been adequately described (20). This is a complete diet for the normal dog and will maintain anemic animals in health indefinitely. The control periods given in the tables below immediately precede the periods dealing with special diets, liver injury, bile feeding, etc. After operation there may be fluctuations in bile cholesterol which may be due to obscure factors. For this reason the dog was observed for a period of 7-10 days before the regular control periods were begun.
EXPERIMENTAL OBSERVATIONS

Brief clinical histories of the several bile fistula dogs are given in the following paragraphs. It will be noted that the weight at the end of the experimental observations is in no instance lower than the weight recorded at the beginning. This means excellent clinical condition, good food consumption and no gastro-intestinal disturbances. Fasting or intoxication will always reduce the normal output level of bile cholesterol.

Clinical Histories

Dog 32-161. Adult female white bull mongrel, operation Jan. 12, 1933. Weight at beginning of analyses 14.4 kg. Hemoglobin 144 per cent. This animal's weight remained constant except during a period of liver injury due to intravenous injection of hematin (Table 27) when the weight dropped to 13.0 kg. This loss was rapidly recovered and at present (Oct. 6, 1933), hemoglobin 111 per cent, the animal weighs 14.8 kg. The hemoglobin level has maintained a similar constant level with the exception of periods during which it has been lowered as a result of direct experimentation. Food consumption has always been good. The animal has been in excellent physical condition throughout the period of observation.

Table 21 gives characteristic control observations on two bile fistula dogs. Dog 32-161 shows a normal level in the first control period but a low normal in the second control period, which came 7 months subsequently and followed a period of liver injury (see Table 27). The fore periods on salmon bread immediately preceded the test periods on calves' brains. In the control periods the fluctuations in bile cholesterol output from day to day rarely exceed 1-2 mg. (see Table 25). Calves' brains in the older experiments reported in the literature were usually fed with egg yolk and assumed to be in part responsible for the bile cholesterol increase if any was observed. In our experiments the calves' brains alone (containing approximately 1.5 gm. cholesterol) have a negligible effect, as a 10 per cent increase is within physiological fluctuations related to uncontrollable factors. The significant rise in bile cholesterol output when bile salt is added will be discussed under Table 23. We know of no satisfactory explanation for the observations (Table 21) that the feeding of cholesterol in calves' brains gives no increase in biliary cholesterol, whereas egg yolk feeding will give a definite increase (Table 23). It has been suggested (11) that the presence of the phosphatides and cerebrosides may prevent the absorption of the brain cholesterol.

Table 22 shows some satisfactory and representative experiments with widely different food factors, all done on the same dog, which was in perfect physical condition and ate all the food as indicated. The sugar diet and zein (digested with trypsin) were given daily by stomach tube. The control salmon bread diet periods show a large bile elimination, an average of about 140 cc. daily, and a uniform output of bile salt, an average of about 1.1 gm. per day. Liver added to this control diet causes little or no change in bile volume, cholesterol or bile salt output. Lean beef feeding causes a distinct rise in bile salt but not in the cholesterol output. Sugar alone fed to a bile fistula dog always causes a sharp drop in bile volume and bile salt output. The drop in bile cholesterol is less conspicuous.

Dog 31-203. Zein is an incomplete protein which we have used in a study of bile salt metabolism.
It causes a sharp fall in bile volume and bile salt output and an even more conspicuous drop in bile cholesterol. This deserves further study. At any rate we see that it is possible to dissociate bile volume, bile salt and bile cholesterol concentration. In a general way the bile cholesterol-bile salt ratio is about 1 to 100 but this is not constant.

The gist of Table 23 is that egg yolk feeding without bile or bile salt will cause a 40-50 per cent increase of bile cholesterol. A single egg yolk contains 0.3-0.5 gm. cholesterol. Bile alone by mouth containing 1 gm. bile salt will cause about the same increase in bile cholesterol. When larger doses of bile salt (3 gm.) alone are fed we note an increase of over 100 per cent in bile cholesterol, and there is no further rise in bile cholesterol if we give this dose of bile salt plus egg yolks. This point has not been observed by other workers and gives less emphasis to heavy cholesterol feeding (egg yolks). It is of interest that blood cholesterol remains unchanged with bile salt feeding but rises to high levels when bile salt plus egg yolk is fed. The bile cholesterol elimination remains at the same level in both experiments (Dog 32-161, Table 23). Under normal physiological conditions with an intact bile circulation and no bile fistula it is probable that heavy cholesterol feeding would cause no reaction (Table 23) or at best a slight rise in bile cholesterol (see Table 24).

Table 24 indicates the maximum level to which we have been able to push cholesterol excretion in the bile by means of continued bile feeding plus egg yolk plus large supplementary bile salt additions. This dog was in perfect physical condition and consumed daily its salmon bread ration. The supplements added to this ration or given by stomach tube are shown (Table 24). For 13 days preceding the 1st day given in Table 24, the dog was refed daily the total bile output as collected, minus 10 cc. for routine analysis. It has been shown elsewhere (21) that refeeding of bile over considerable periods will raise the bile salt output to a level which is sustained at about 7-8 gm. bile salt output per 24 hours. This dog had not reached this plateau at the time the observations were begun in Table 24 and we note a bile salt output of 4.5 gm. per day. Meanwhile the bile cholesterol has increased slowly from the control level at the start of bile refeeding, 9.3 mg., to 21.5 mg. When bile salt (3 gm.) is added to the bile refeeding we note a great increase in bile cholesterol, 42.6 mg. per 24 hours. The peak of cholesterol production follows by 1 day the peak of bile salt output. Egg yolks added to the bile refeeding increase the bile cholesterol almost as much as does the bile salt, but meanwhile the bile salt output is on the decline. Maximum figures for bile cholesterol (61 mg. per 24 hours) are observed when we combine egg yolk and bile salt with the whole bile refeeding. This high level is more than 6 times the base line, but if we consider as normal the output due to bile refeeding then the output is doubled by egg yolk and bile salt supplementary feeding (Table 24). When bile refeeding is stopped the output falls promptly to the control level on salmon bread diet, 9.1 mg. cholesterol per 24 hours. The dog was then fasted for 2 days and the cholesterol fell to 4.3 mg.

Table 25 shows that isatin by mouth or decholin by vein or by mouth will give a definite cholagogue effect without any influence on cholesterol elimination by the bile.
In fact, as these substances cause some gastro-intestinal disturbance and occasional vomiting, we note more or less decrease in cholesterol elimination. Decholin by vein in one instance caused a good deal of clinical disturbance, very low food consumption and a very low cholesterol output (3.0 mg. per 24 hours). This is practically the fasting level. It is known that isatin (14) causes no increase in bile salts but decholin does cause a moderate increase in bile salt elimination. This does not compare with the reaction to bile salt by mouth, which is subsequently eliminated within 24 hours in amount practically 100 per cent of the intake. We cannot say whether the decholin may be eliminated as such in the bile as the method used would not detect it. Evidently some of the introduced decholin is linked in the body with taurin to yield taurocholic acid. The cholagogue reaction to decholin is more conspicuous when the drug is given by mouth as compared with intravenous administration. The last two figures for bile salts (1100 mg.) in control periods (Table 25) are general average values.

Table 26 shows a satisfactory experiment in which moderate liver injury was produced by small doses of chloroform by mouth. The repair took place promptly and was probably complete in 7-10 days. There was no clinical disturbance, the dog acting normally and eating all food. Bile volume shows a sharp fall to about 10 per cent of normal and the bile cholesterol falls even closer to zero. Bilirubinemia developed with an icterus index of 10 and bile was present in the urine. From published experiments (22) we know that the bile salts in the bile also fell very close to zero. The return toward normal in bile cholesterol is well shown (Table 26) and parallels closely the liver repair and bile salt excretion curve (22) as given elsewhere. From other experiments we know that in such animals the signs of liver injury are very slight as shown by histological study, a few cells about the central vein showing fat or hyaline necroses. The repair is prompt and completed usually within 7 days.

Table 27 shows a severe liver injury followed by slow return to normal over a period of 4-5 weeks. In connection with other experiments this dog was given hematin intravenously, which caused severe and almost fatal poisoning. From autopsy examinations in other dogs we have assurance that there resulted an extensive central liver necrosis, which healed slowly. This dog was severely intoxicated and appeared clinically very sick. Bilirubinemia was severe and the blood fibrinogen fell to 170 mg. per cent. There was bleeding from vein punctures. Clinical improvement began 4 days after the second hematin injection but recovery was slow. There was some loss of weight. For 3 days there was complete suppression of bile flow. The bile cholesterol values came back slowly. In these severe injuries the change in cholesterol output is less spectacular than with slight injuries when the bile flow is not suppressed. These data are in accord with those presented in Table 26.

Blood Destruction and Cholesterol Elimination in Bile

It has been claimed by some investigators and assumed by many others that red cell destruction sets free the cholesterol in the red cell matrix, which logically might well appear in the bile. Other materials coming from red cell destruction (pigments and iron) appear in the liver or bile, so why not cholesterol? But experiments indicate that this is not the way of body physiology.
The experiment outlined just below shows no increase in bile cholesterol but rather a slight decrease, probably due to slight intoxication by the hydrazine used to destroy red cells.

Dog 32-161 (see clinical history above). Weight 14 kg., hemoglobin 158 per cent, normal in all respects. The fore period of 10 days showed a somewhat low normal cholesterol daily output of 6.3 mg. During a 4 day period the dog was given subcutaneously 100 mg. daily of acetylphenylhydrazine. This caused a drop in the hemoglobin level to 86 per cent. Calculating the destroyed hemoglobin on the basis of the dog's weight and our general experience with anemia in dogs, it is safe to say that not less than 100 gm. hemoglobin were destroyed. If any cholesterol is to be derived from hemoglobin destruction and appear in the bile, this would seem an adequate test. During the 4-day period of hydrazine administration and the subsequent 10 days, the bile cholesterol averaged 5.5 mg. per day. The after period of 16 days shows a bile cholesterol daily output of 7.6 mg. At the end of this last period the hemoglobin level had come back to 112 per cent. The dog was fed the standard control salmon bread diet throughout and the weight was unchanged.

DISCUSSION

Possibly clinical treatment of abnormalities of the biliary system has not taken into consideration some of the facts established by experimental study of the bile. This may not be the place for a discussion of clinical problems but it may be proper to indicate that certain cholagogues can be used with advantage in human cases presenting irritation or inflammation of the biliary tree. Under these conditions it is recognized that stasis of bile and high cholesterol concentration may favor the precipitation of cholesterol with subsequent building up of gall stones. It is logical to assume that on such occasions an active flushing of the biliary ducts by means of some cholagogue might forestall the unfortunate precipitation of debris and cholesterol. Also bile salts in addition to their active cholagogue effect will appear in the bile and help to hold any excess of cholesterol in solution. It is even conceivable that a small soft precipitate of cholesterol under such conditions might go back into solution, as bile salts effect rapid solution of cholesterol. In the dog's gall bladder it has been shown (1, 7) that human gall stones will be dissolved during the course of many weeks.

The cholesterol-bile salt ratio is about 1 to 100 in the bile fistula bile but considerable variations may be noted. The ratio in the blood must be vastly different although we cannot say how much bile salt is to be found in the circulating blood. As the normal blood plasma contains about 200 mg. cholesterol per 100 cc., if the same ratio obtained we should find about 20 gm. bile salt per 100 cc. plasma, which is ridiculous. It is probable that the blood plasma contains only a few milligrams of bile salt per 100 cc. but present methods do not permit us to measure this with any accuracy. Therefore we have a considerable amount of cholesterol in circulation; for example a 10 kg. dog would have a plasma volume of 500 cc. and a cholesterol concentration of 150 to 300 mg. per cent, or 750 to 1500 mg. in circulation. From this reservoir of ±1 gm. plasma cholesterol we have only a trickle of 10-20 mg. per day appearing in the bile. Meanwhile the feeding of cholesterol and bile salt may change the level of the plasma reservoir of cholesterol by large amounts.
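The reservoir arithmetic above can be restated compactly (a restatement of the figures already given, not new data):

```latex
\[
V_{\text{plasma}} \approx 500\ \text{cc (10 kg dog)}, \qquad
\text{pool} = 500\ \text{cc} \times \frac{150\text{--}300\ \text{mg}}{100\ \text{cc}}
= 750\text{--}1500\ \text{mg},
\]
\[
\frac{\text{daily biliary output}}{\text{pool}}
= \frac{10\text{--}20\ \text{mg}}{750\text{--}1500\ \text{mg}}
\approx 1\text{--}3\ \text{per cent per day.}
\]
```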
All this would point to the bile cholesterol elimination as a minor shunt for certain surplus material. We do not accept this conclusion without protest and believe that the bile cholesterol is related in some way to the important internal cholesterol metabolism which goes on in the liver cell.

Because cholesterol and bile salt have marked similarities in their structural formulas, both containing the same four ring nucleus, it has been claimed that cholesterol is the precursor of bile salt. This thesis has been shown by Smith and Whipple (13) to be untenable. But may a surplus of bile salt be changed to cholesterol? This seems to be unlikely on theoretical grounds and there is no real support for this hypothesis on experimental grounds. The body seems able to dispose of any surplus of bile salts without any demonstrable increase in cholesterol stores or elimination. However the experimental data are not adequate as yet to exclude this possibility.

SUMMARY

Under uniform diet conditions the normal bile fistula dog will eliminate pretty constant amounts of cholesterol, about 0.5 to 1.0 mg. cholesterol per kilo per 24 hours.

Diets rich in cholesterol (egg yolk) will raise the cholesterol output in the bile, but compared to the diet intake (1.5 gm. cholesterol) the output increase in the bile is trivial (5-15 mg.). Calves' brains in the diet are inert.

Bile salt alone will raise the cholesterol output in the bile as much as and often more than a cholesterol rich diet. Bile salt plus egg yolk plus whole bile give maximal output figures for bile cholesterol, 60 mg. per 24 hours.

Liver injury (chloroform) decreases both bile salt and cholesterol elimination in the bile.

Blood destruction (hydrazine) fails to increase the bile cholesterol output and this eliminates the red cell stroma as an important contributing factor.

Certain cholagogues (isatin and decholin) will increase the bile flow but cause no change in cholesterol elimination.

The ratio of cholesterol to bile salt in the bile normally is about 1 to 100 but the bile salts are more labile in their fluctuations. The ratio is about reversed in the circulating blood plasma where the cholesterol is high (150-300 mg. per cent) and the bile salt concentration very low.

Cholesterol runs so closely parallel to bile salt in the bile that one may feel confident of a physical relationship. In addition there is a suspicion that the bile cholesterol is in some obscure fashion linked with the physiological activity of hepatic epithelium.
Level of dengue preventive practices and associated factors in a Malaysian residential area during the COVID-19 pandemic: A cross-sectional study

Background: Dengue fever is a mosquito-borne viral infection that is endemic in more than 100 countries and has the highest incidence among infectious diseases in Malaysia. The increase in dengue fever cases during the COVID-19 pandemic and the movement control order (MCO) highlighted the necessity of assessing dengue preventive practices in the population. Thus, this study aimed to determine the level of dengue preventive practices and its associated factors among residents of a residential area in Johor, Malaysia during the COVID-19 pandemic.

Method: A community-based cross-sectional study was conducted on 303 respondents from a Johor residential area between May and June 2021. A validated self-administered questionnaire was created using Google Forms and distributed to the respondents via WhatsApp. The questionnaire consisted of three sections: (i) sociodemographic characteristics and history of dengue fever, (ii) dengue preventive practices, and (iii) six constructs of the Health Belief Model (HBM). The associations between the dependent and independent variables were examined using multiple logistic regression, with the significance level set at less than 0.05.

Result: About half of the respondents had a good level of dengue preventive practices. Respondents with a history of dengue fever (aOR = 2.1, 95% CI: 1.1-4.2, p = 0.033), low perceived susceptibility (aOR = 1.8, 95% CI: 1.1-3.0, p = 0.018), high self-efficacy (aOR = 1.7, 95% CI: 1.0-2.8, p = 0.045), and high cues to take action (aOR = 2.5, 95% CI: 1.5-4.2, p < 0.001) had higher odds of practicing good dengue preventive measures.

Conclusion: This study demonstrated a moderate level of dengue preventive practices during the COVID-19 pandemic. Therefore, a stronger dengue control programme is recommended, focusing on cues to take action and self-efficacy, and recruiting those with a history of dengue fever to assist health authorities in promoting good dengue preventive practices in the community.

The Health Belief Model (HBM) is widely used as a social cognition model in predicting health behaviour [26]. According to the HBM, an individual's commitment to health-promoting behaviour is influenced by their views about the severity of the health problem and their likelihood of contracting the disease, as well as their perceptions of the benefits of and barriers to the health behaviour [27]. Studies have used the HBM to predict dengue preventive practices [25,26]. Furthermore, since both Aedes aegypti and Aedes albopictus are highly anthropophilic, preventive actions are especially important during the COVID-19 pandemic, as indoor mosquito populations increased during the lockdown [28]. In addition, having a history of dengue fever may play a vital role in dengue preventive actions, but earlier researchers have reported inconclusive associations [25,26]. Therefore, this study aimed to identify the level of dengue preventive practices and its associated factors during the COVID-19 pandemic using the theoretical constructs of the HBM. We hypothesised that good dengue preventive practices are associated with respondents' perceived likelihood of contracting dengue, their beliefs about the severity of dengue, and their perceptions of the benefits of and barriers to dengue preventive practices, as well as with higher cues to take action and greater confidence in performing those practices.
Study design and setting

A cross-sectional study was conducted in Taman Kota Masai, a residential area in the Johor Bahru district under the Pasir Gudang Municipal Council. Johor Bahru is the capital of the state of Johor, located at the southern end of Peninsular Malaysia. The Malay ethnic group makes up the majority (74.72%) of the population in Pasir Gudang, followed by Indians (4.62%) and Chinese (2.33%) [29]. Johor is the state with the highest number of reported dengue cases after Selangor and Kuala Lumpur. Nevertheless, Johor has a limited number of studies compared to the other two states. The Johor Bahru district recorded the most cases of dengue fever in Johor, accounting for almost 80% of all cases in the state. Taman Kota Masai was chosen as the study location because it is one of the residential areas with frequent dengue outbreaks, with approximately 100 incidents in 2020 [12]. The Ministry of Health divided Taman Kota Masai into 35 zones to maximise vector control in the localities. The population of Taman Kota Masai is estimated at 92 thousand people. Terrace houses make up the majority of the housing area, which is surrounded by other residential and industrial regions. Data were collected from May to June 2021. This study adhered to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines for cross-sectional epidemiological studies in terms of design, setting, analysis, and reporting (see S1 Table for the detailed checklist of STROBE criteria).

Sampling and sample size

The required sample size for this study was calculated for two independent gender proportions based on Rakhmani (2018) [24], using the formula by Lemeshow et al. (1990) [30]. The final sample size needed for this study was 646 after adjusting for an estimated design effect of two for cluster sampling and assuming a 60% response rate. The sampling method used was two-stage cluster sampling. In the first stage, zones were selected from the 35 existing areas. However, five zones were omitted as they were located in a factory or a new residential area. Thus, 16 of the remaining 30 zones were randomly selected using a random number generator. Next, roads were chosen from each of the zones. The number of roads chosen for each zone was determined by the zone's total number of roads, with more samples collected in zones with a greater number of roads. Of the 16 zones, only one contributed three roads, while the rest contributed one road each. The roads were also randomly selected using a random number generator. Thus, a total of 18 roads were selected from the study area. From the chosen roads, heads of households who were over 18 years old, had resided in Taman Kota Masai for more than 6 months, and were able to communicate through the WhatsApp application were recruited for this study. The household heads were chosen because they represent the behaviour of the household and influence health practices [31]. Moreover, vector control at the household level is a fundamental strategy for dengue fever prevention [32,33]. The WhatsApp application was used as the communication medium to minimise contact during the MCO, and it is one of the most widely used forms of communication among Malaysian adults [34].
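To make the sample-size arithmetic above easier to follow, here is a minimal sketch in Python. The input proportions are placeholders only (the actual values were taken from Rakhmani, 2018, and are not reproduced here), so the printed number will not match the study's 646; what the sketch mirrors is the structure of the computation: the two-proportion formula, a design effect of two, and a 60% anticipated response rate.

```python
from math import sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two independent proportions
    (formula of Lemeshow et al., 1990)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power term
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Placeholder proportions, for illustration only.
n_total = 2 * n_per_group(0.50, 0.65)
n_total *= 2           # design effect of 2 for two-stage cluster sampling
n_total /= 0.60        # anticipated 60% response rate
print(round(n_total))  # number of respondents to invite
```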
Ethics approval and consent to participate

Ethical approval was obtained from the Ethics Committee for Research Involving Human Subjects, University Putra Malaysia (reference number UPM/TNPCI/RMC/1.4.18.2(JKEUPM)). Implied consent was obtained from respondents via a Google Form before participation in this study. The identities of the respondents are kept anonymous, and the information in this study is kept strictly confidential.

Study instrument

The questionnaire used in this study was adapted from previous studies [35-38]. The content validity of the questionnaire was reviewed by an expert panel composed of public health specialists. The questionnaire was modified based on the experts' comments before being checked for face validity. The questionnaire's face validity was determined by asking five respondents who did not participate in the main study to comment on the questionnaire's sentences, wording, and structure. Their comments on comprehensibility, language suitability, and the time needed to complete the questionnaire were also taken into consideration. The questionnaire was then amended based on their comments. A test-retest was conducted among 30 residents of other localities in Johor Bahru to examine the stability of the questionnaire. The data from this assessment were not included in the final analysis. The data were analysed using the intra-class correlation coefficient (ICC); the ICC ranged between 0.69 and 0.962, indicating good to excellent agreement [39,40]. Data were collected online using Google Forms, distributed to the respondents via the WhatsApp application. The self-administered online questionnaire was in the Malay language, the national language of Malaysia, and consisted of three parts:

Part A - Sociodemographic data and history of dengue fever
Part B - Dengue preventive practices
Part C - Constructs of the Health Belief Model (HBM)

Dependent variable. The dependent variable in this study was the level of dengue preventive practices, which was adapted from a validated questionnaire with a Cronbach's alpha value of 0.790 [36]. Dengue preventive practices are defined as respondents' efforts to avoid contracting dengue fever and their actions to minimise the occurrence of dengue fever [36]. The preventive practices measured in this study include prevention of mosquito breeding, prevention of mosquito bites, and prevention of dengue transmission [26]. Respondents were asked to rate 15 items on a five-point Likert scale, i.e., "never", "rarely", "sometimes", "usually", and "always", with scores of "1" to "5", respectively. The total score ranged from 15 to 75. The scores were dichotomised for analysis using a median split, with 15-59 categorised as poor practice and 60-75 as good practice.

Independent variables. The sociodemographic profile of respondents, including age, gender, highest education level, and monthly household income, was collected. For analysis, age was divided into ≤ 30 years old (youth) and > 30 years old [41]; educational level was categorised into primary, secondary, and tertiary; and monthly household income was classified into ≤ MYR 4,849, which represents the bottom 40% of population income (B40), and > MYR 4,849 [42]. A previous history of dengue fever was assessed by asking whether the respondents or their family members had been admitted to hospital due to dengue, with answer options of "yes", "no", or "unsure". The constructs of the HBM include perceived susceptibility, perceived benefit, perceived barrier, perceived severity, self-efficacy, and cues to take action.
The definitions of these constructs are as follows: perceived susceptibility is one's subjective perception of the risk of getting dengue; perceived benefit is the belief that a dengue prevention practice would reduce susceptibility or severity or lead to another positive outcome; perceived barrier is the perception of negative attributes related to dengue prevention practices; perceived severity is one's belief about how severe the condition and its consequences are; self-efficacy is an individual's confidence in their ability to perform dengue preventive practices successfully; and cues to take action against dengue vectors are factors that may affect the individual's perception and indirectly influence their health-related behaviour [23,26,43].

The items on the HBM constructs of perceived susceptibility, benefit, and severity of dengue were adopted from a study with Cronbach's alpha values of 0.943, 0.910, and 0.591, respectively [37]. The perceived barrier, cues to take action against the dengue vector, and self-efficacy items were taken from two earlier studies in Selangor, Malaysia [35,38]; no Cronbach's alpha values were reported. The assessment of perceived benefit consisted of five items rated on a four-point Likert scale ranging from "strongly disagree" (1) to "strongly agree" (4), with a total score range of 5-20. Perceived benefit was categorised into low (5-17) and high (18-20) using a cut-off point determined by a median split. Meanwhile, perceived susceptibility consisted of six items that examined the respondents' perception of the risk of getting dengue fever using a similar four-point Likert scale. Four items (C3.1, C3.2, C3.3, and C3.4) had reverse coding on the four-point Likert scale, ranging from "strongly disagree" (4) to "strongly agree" (1). The total scores were between 6 and 24, with 18-24 classified as high perceived susceptibility and 6-17 as low. Perceived severity consisted of four items examining the respondents' feelings about the seriousness of dengue fever, scored on a four-point Likert scale with total scores ranging from 4 to 16. Only item C4.3 had reverse coding in this construct. Scores of 15-16 were categorised as high perceived severity and 4-14 as low perceived severity. The perceived barrier construct also used a four-point Likert scale across six questions on barriers to performing dengue prevention practices, with a score range of 6-24. Those scoring 6 to 11 were considered to have a lower perceived barrier, whereas those scoring 12 to 24 had a higher perceived barrier. Nine items made up the cues to take action against the dengue vector construct, and three items made up self-efficacy. The total score on the four-point Likert scale for cues to take action was 9-36; scores of 30-36 were categorised as high cues to take action and scores of 9-29 as low. For self-efficacy, respondents who answered "strongly agree" or "agree" to questions C6.1 and C6.3 and "strongly disagree" or "disagree" to question C6.2 were considered confident in performing dengue preventive practices.

Statistical analysis

The data collected were analysed using the Statistical Package for the Social Sciences (SPSS) version 25.0 (IBM Corp, 2016). Data were cleaned prior to analysis, and each continuous variable was checked for normal distribution.
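As a concrete illustration of the scoring rules above, the minimal sketch below implements the reverse-coding and the cut-off-based categorisation. The item identifiers follow the text (C3.1-C3.4 and C4.3 are reverse-coded), while the function names are my own illustrative choices rather than anything from the study.

```python
# Illustrative sketch of the questionnaire scoring described above.
REVERSE_CODED = {"C3.1", "C3.2", "C3.3", "C3.4", "C4.3"}

def score_item(item_id: str, raw: int, scale_max: int = 4) -> int:
    """Return the item score, flipping reverse-coded items (raw in 1..scale_max)."""
    return scale_max + 1 - raw if item_id in REVERSE_CODED else raw

def classify_practice(total: int) -> str:
    """Median split of the 15-item practice score (range 15-75, cutoff 60)."""
    return "good" if total >= 60 else "poor"

def classify_susceptibility(total: int) -> str:
    """Perceived susceptibility: six items, range 6-24, high if 18-24."""
    return "high" if total >= 18 else "low"

# Example: answering "agree" (raw 3) on reverse-coded item C3.1 scores 2.
assert score_item("C3.1", 3) == 2
assert classify_practice(62) == "good"
assert classify_susceptibility(17) == "low"
```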
Descriptive statistics were computed for the sociodemographic factors, history of dengue fever, level of dengue prevention practices, perceived susceptibility to dengue fever, perceived benefit of preventive practices, perceived barriers, perceived severity of dengue fever, cues to take action against the dengue vector, and self-efficacy, and were presented as frequencies and percentages for categorical data. The median and interquartile range were used for data that were not normally distributed. Simple and multiple logistic regression were used to determine the associations of the sociodemographic factors, history of dengue fever, perceived susceptibility, perceived benefit, perceived barrier, perceived severity of dengue fever, cues to take action against the dengue vector, and self-efficacy with the level of dengue preventive practices. The results were expressed as crude and adjusted odds ratios, with the statistical significance level set at less than 0.05 (p < 0.05).
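As an illustration of the crude/adjusted odds-ratio workflow described above (not the authors' actual code, which used SPSS), the sketch below fits a multiple logistic regression on synthetic stand-in data with statsmodels and exponentiates the coefficients to obtain adjusted ORs with 95% CIs. All column names and coefficients are invented for the example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 303  # same sample size as the study; the data itself is synthetic

# Hypothetical binary predictors (1 = yes/high, 0 = no/low)
df = pd.DataFrame({
    "dengue_history": rng.integers(0, 2, n),
    "low_susceptibility": rng.integers(0, 2, n),
    "high_cues": rng.integers(0, 2, n),
    "self_efficacy": rng.integers(0, 2, n),
})
# Simulate the binary outcome from an arbitrary logistic model
logit = -0.5 + 0.8 * df["dengue_history"] + 0.9 * df["high_cues"]
df["good_practice"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["dengue_history", "low_susceptibility",
                        "high_cues", "self_efficacy"]])
fit = sm.Logit(df["good_practice"], X).fit(disp=0)

# Adjusted odds ratios with 95% confidence intervals
table = pd.DataFrame({
    "aOR": np.exp(fit.params),
    "CI 2.5%": np.exp(fit.conf_int()[0]),
    "CI 97.5%": np.exp(fit.conf_int()[1]),
    "p": fit.pvalues,
})
print(table.round(3))
```

Crude odds ratios follow the same pattern, fitting one predictor at a time instead of all four together.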
Associations of good dengue preventive practices Through simple logistics regression analysis, this study found that people with a history of dengue fever, high perceived benefit, perceived barriers, cues to take action, and self-efficacy were significantly associated with a good level of dengue preventive (Table 3). Meanwhile, sociodemographic factors were not significantly associated with the dengue preventive practices. However, multiple logistic regression analyses revealed that only history of dengue fever, perceived susceptibility, cues to take action, and self-efficacy were significantly linked to a good level of dengue prevention. Respondents with prior experience of dengue fever were twice more likely to have good dengue preventive practices compared to individuals who had no history of dengue fever (aOR = 2.4, 95% CI: 1.2-4.7, p = 0.012). Additionally, respondents with low perceived susceptibility were nearly twice more likely to have strong dengue preventive measures than those with high perceived susceptibility (aOR = 1.8, 95% CI: 1.1-3.0, p = 0.018). Respondents with high cues to take action were 2.6 times more likely than those with low cues to take action to have a good level of dengue preventive practices (aOR = 2.6, 95% CI: 1.6-4.2, p < 0.001). Also, respondents with yes self-efficacy were nearly twice as likely as those with no self-efficacy to have good dengue preventive practices (aOR = 1.8, 95% CI: 1.1-2.9, p = 0.023). Discussion This study was conducted to evaluate the level of dengue preventive practices and its associated factors during the COVID-19 pandemic among residents of Taman Kota Masai, Johor. The result showed that only half of the respondents demonstrated a good level of dengue preventive practices. It is slightly lower than the results of a study conducted in another Malaysian state with dengue hotspot areas before the pandemic COVID-19. The study reported that 56% of the population practised good dengue prevention [45]. It shows that even people stay at home during the lockdown, the level of dengue prevention is still low. Moreover, a study that compared dengue preventive practices among communities lived in hotspot and non-hotspot dengue areas in Selangor discovered that more people in non-hotspot areas engaged in dengue preventive practices [46]. Hence, the low level of dengue prevention practices in Taman Kota Masai may be related to people's health beliefs rather than the amount of time they spent at home during a pandemic. In terms of health belief using the HBM, this study showed that respondents with a low perceived susceptibility were nearly twice more likely to have a good level of dengue preventive practices than those with high perceived susceptibility. A plausible explanation for this may be that people who practise good dengue prevention measures such as cleaning their house area have a reduced perception of susceptibility to contract dengue fever [47]. A study showed that people had higher health awareness during the pandemic COVID -19 [48]. They practice hygiene to a high degree, including dengue prevention. In addition, most people avoid visiting hospitals or health facilities during the pandemic, and the study also showed that the use of personal health care abruptly decreased [49]. Fear of contracting COVID -19 causes patients to avoid visiting health care facilities. It prompts them to do their utmost to stay healthy, including implementing dengue prevention measures to reduce their susceptibility to contracting dengue fever. 
Furthermore, this study showed that respondents with high cues to take action were more than twice as likely to have a good level of dengue preventive practices as those with low cues to act. Cues to take action might be internal or external, ranging from personal experiences and cultural schemas to information received from other sources or networks [50,51]. There is currently a lack of studies examining the association between cues to take action and dengue preventive practices. According to some researchers, this construct may fade or be volatile [43,52]. Most of the study participants agreed that they would take precautionary measures if their residential area were declared a dengue hotspot, and they were constantly reminded by the local authorities to carry out dengue fever preventive measures. During the COVID-19 pandemic, residents spent more time at home under the MCO. This allowed them to be more aware of the banners put up by the health authorities concerning dengue preventive practices, as their locality is a dengue hotspot area; this served as a cue for them to perform dengue preventive practices. Besides, during the MCO some people followed the trend of taking up new interests such as gardening [53,54], which can also act as a cue, indirectly involving them in cleaning their garden areas and potential mosquito breeding sites, such as vase covers. Additionally, a randomised controlled trial conducted in a different Malaysian state suggested that alerting residents when dengue-positive mosquitoes were found during surveillance activities could improve source reduction practices [55]. A qualitative study conducted in two villages in Central Java province, Indonesia, showed that continual media campaigns were relevant to the improvement of dengue preventive practices [56]. Hence, this demonstrates the importance of publicising hotspot areas and implementing new proactive strategies, such as notifying the community about positive dengue mosquito findings, to maintain cues to act. Health education initiatives should also be carried out regularly, especially during pandemics, to ensure sustainable cues to take action for dengue preventive practices.

In addition, respondents with self-efficacy (confidence in performing dengue preventive practices) were almost twice as likely to have a good level of dengue preventive practices as those without self-efficacy. This finding is corroborated by a previous study that demonstrated self-efficacy to be a predictor of dengue preventive measures [25]. Self-efficacy is an important construct in the HBM that encourages individuals to implement preventive practices [57]. A lack of self-efficacy is one of the challenges that must be overcome to apply these initiatives effectively [23]. Accordingly, the authorities should regularly deliver clear messages and demonstrate simple dengue preventive practices to boost people's confidence and enhance self-efficacy [38,58]. For example, the health authority may produce a short educational video with trained role-players demonstrating the recommended practices. The video could be distributed to those with low confidence in undertaking dengue preventive activities, with the request to practise the approaches until full confidence is achieved [58]. During the COVID-19 pandemic, a study showed a surge in social media use [59].
Personal and professional lives merged through platforms like Facebook, Twitter, and Instagram, united in isolation throughout the MCO. Twitter is used to create global knowledge networks by facilitating academic discussions and information sharing through crowdsourcing [60]. The hashtag is also becoming increasingly popular in the online medical community as a way to interact and share best practices. Consequently, snappy tweets and infographics with relevant data about dengue can strengthen readers' self-efficacy in implementing important dengue preventive practices.

Aside from that, having a history of dengue fever was found to be significantly associated with dengue preventive practices in this study. This is in line with a study conducted among international students at a public university in Malaysia, which found that students who had previously contracted dengue fever had a good level of dengue preventive practices; the authors suggested that experience contributed to patients' increased knowledge and awareness [61]. Those with a history of dengue fever may have obtained information on dengue fever prevention during consultations with healthcare providers. Patients may also actively seek information after contracting dengue fever. This accords with a study conducted among Malaysians aged over 18 years, in which respondents with a history of dengue fever had significantly higher knowledge about dengue fever and dengue preventive practices [26]. Thus, it is proposed that people who have experienced dengue fever be recruited to assist health authorities in promoting dengue preventive practices in the communities.

Strengths and limitations of the study

This study captured dengue preventive practices at the household level; it may contribute added knowledge on dengue preventive practices during a pandemic of another infectious disease, such as COVID-19, and can serve as baseline data for planning specific health intervention strategies in the future. The response rate of this study was 47%, which is considered satisfactory given that some studies have reported a 43% response rate for online surveys [62]. During the data collection process, a variety of methods were used to boost the response rate, such as a push survey in which the respondents were sent a direct link to the Google Forms questionnaire, three reminders, enlisting the help of community leaders to remind the communities, and convincing the respondents that their opinion is valuable for eradicating dengue. The researcher also extended the data collection deadline to 14 days. Regardless, a post-hoc power analysis revealed that the power of this study is adequate to detect statistically significant differences.

Conventional methods involving face-to-face interaction were not feasible during the COVID-19 pandemic. Consequently, data collection was conducted solely through Google Forms disseminated via WhatsApp, which may have caused hesitancy and suspicion among the respondents, who might have been unable to differentiate between spam messages and legitimate research work [63]. Moreover, selection bias might have occurred, whereby elderly persons not familiar with Google Forms may have been left out; this might explain why the median age of the respondents was 49 years. The use of only the national language in the Google Form may have contributed to the majority of the respondents being Malays. Furthermore, the use of a self-reported questionnaire in this study may have introduced social desirability bias.
Conclusion

A good level of dengue preventive practices was demonstrated by around half of the respondents in this study. A previous history of dengue fever, low perceived susceptibility, high self-efficacy, and high cues to take action were the factors found to be associated with a good level of dengue preventive practices during the COVID-19 pandemic. This study recommends that people with experience of dengue fever be recruited to help promote dengue preventive practices in the communities. Additionally, cues to act can be encouraged by disclosing hotspot areas and developing new proactive strategies, such as informing communities about positive dengue mosquito findings. Besides, health education initiatives to prevent dengue must be carried out routinely, including during pandemics, to ensure long-term action and enhanced self-efficacy. The health authorities should also provide direct information and demonstrate simple dengue preventive practices regularly.
Stem Cell Therapy: Recent Success and Continuing Progress in Treating Diabetes

Diabetes mellitus (DM), a cluster of metabolic diseases resulting in high blood glucose levels, is prevalent in today's world. The global costs of diabetes and its consequences are rising and are expected to increase substantially by 2030, especially in middle- and lower-income countries. Evidence-based therapies, specifically targeting the reduction of high blood glucose levels and minimizing diabetic complications, are currently the treatment of choice. Stem cell therapy offers a promising vision for treating DM. Although this line of therapy still poses challenges, studies have produced regenerated beta-cells which closely resemble insulin-secreting cells. A number of sources for stem cells have been explored ever since the proof-of-concept for cell therapy was laid down. This review summarizes stem cell therapy in the treatment of DM.

Introduction

According to the WHO report released in November 2017, the number of people with diabetes mellitus (DM) has risen from 108 million in 1980 to 422 million in 2014 [1]. The worldwide prevalence of DM across all age groups was estimated to be 2.8% in 2000 and is expected to rise to 4.4% by 2030, with the total number of people with DM projected to rise from 171 million in 2000 to 636 million in 2030 [2].

There are two types of DM: type 1 and type 2. Type 1 diabetes, also known as insulin-dependent diabetes and juvenile diabetes, involves the immune system and results from a cell-mediated autoimmune destruction of the beta-cells of the pancreas. Stem cells can be obtained from biological sources like embryos, placenta, and bone marrow. Progenitor cells are another exciting avenue of research. Like stem cells, these cells can take on the form of a number of different types of mature human cells; however, unlike stem cells, they cannot divide indefinitely. Progenitor stem cells have been used to grow insulin-producing cells, under laboratory conditions, from intestinal cells and undeveloped pancreatic cells [7]. This article provides an overview of the various approaches used to regenerate the pancreas in patients with diabetes, recent advances including our contributions, and a novel approach that may be explored in the future.

Progenitor cells are very similar to stem cells. They are biological cells and, like stem cells, have the ability to differentiate into a specific type of cell. However, they are already more specific than stem cells and can only be pushed to differentiate into their "target" cell, whereas stem cells have the ability to differentiate into many different types of cells, as shown in Figure 1. A detailed comparison between stem cells and progenitor cells is given in Table 1.

Islet Cell Transplants

Pancreatic islets include the insulin-producing beta cells, which are crucial in regulating blood glucose levels. Islet transplants are a safer option than whole pancreas transplants: the procedure to insert donor islet cells is far less critical than transplanting a complete pancreas. For any type of transplantation procedure, a balance is sought between efficacy and toxicity. With respect to islet transplantation, a main concern was that many of the current agents damage beta cells or induce peripheral insulin resistance [8,9]. Immunosuppressant drugs can also cause problems, as suppressing the immune system raises the risk of infection [10].
Shapiro, et al., reported insulin independence, with tight glycemic control and correction of glycated hemoglobin levels, in seven consecutive subjects treated with glucocorticoid-free immunosuppressive therapy combined with infusion of an adequate mass of freshly prepared islets from two or more pancreases from deceased donors [11]. This treatment came to be known as the Edmonton Protocol [12]. In continuation of this protocol, a phase I/II clinical trial was undertaken to demonstrate the feasibility and reproducibility of the outcomes of the Edmonton Protocol. The trial concluded that, although long-term endogenous insulin production and glycemic stability were achieved in subjects with type 1 diabetes mellitus, insulin independence was more often than not unsustainable and was gradually lost in the long run [13]. Future studies and trials should focus on enhanced islet engraftment, less toxic immunosuppressive therapy, reduced metabolic stress, reduced apoptosis, enhanced regeneration, the use of living donors, and the induction of immunologic tolerance. This approach will ensure improved success rates in transplantation and sustained insulin independence [13].

In patients with DM type 1, glycemic control can also be achieved with intensive insulin therapy and pancreatic transplantation. Intensive insulin therapy does not normalize glycosylated hemoglobin values and may cause severe hypoglycemia. Pancreatic transplantation provides excellent glycemic control, and although the outcome of the procedure has improved dramatically over the past decade, it remains an invasive procedure with a substantial risk of morbidity. The findings indicated that islet transplantation alone is associated with minimal risk and results in good metabolic control, with normalization of glycosylated hemoglobin values and sustained freedom from the need for exogenous insulin [18,19].

In 2017, Westenfelder, et al., [14] reiterated the need for re-establishing endogenous insulin secretion without being limited by both the scarcity of organ donors and the life-long need for often-toxic anti-rejection drugs. He and his team argued that intrahepatic islet transplants were inefficient, due to the high number of donors required per treatment, and were also associated with high early losses of islets [15]. They hypothesized that high numbers of mesenchymal stem cells (MSCs) in neo-islets (NIs) would enable islet cells to survive and re-differentiate into normally functioning endocrine cells. This treatment led to long-term glycemic control in non-obese diabetic mice [14]. The NIs survived, engrafted and re-differentiated into functional insulin-secreting cells in the well-vascularized omentum (via intraperitoneal administration), delivering insulin into the hepatic portal system. Simultaneously, re-expression of other islet-specific hormones occurred. Identical injection of NIs into non-diabetic animals resulted in omental engraftment without causing hypoglycemia, further demonstrating regulated islet hormone secretion [14]. Both allo- and auto-immune protection was also achieved [16,17]. In preparation for a pilot study in pet dogs with DM type 1, streptozotocin-diabetic non-obese diabetic/severe combined immunodeficiency (NOD/SCID) mice were treated in a similar manner with canine NIs (cNIs). In these, euglycemia was readily and durably induced, and intraperitoneal glucose tolerance tests (i.p. GTTs) were normalized by the exclusive release of canine-specific insulin [14].

Ongoing studies of this NI technology are focused on analogous studies using human NIs in diabetic NOD/SCID mice, as well as on the characterization of the NI-intrinsic microcirculation post-engraftment in the omentum, the long-term distribution of MSCs within the NIs in vivo, their potential differentiation into insulin-producing and vascular endothelial cells, the re-differentiation of alpha and other endocrine cells in vivo, in situ IDO (canine) and iNOS (murine) expression by MSCs, and a detailed analysis of the long-term histology and cell composition of functioning NIs [14].

Table 1. Comparison between stem cells and progenitor cells.
Stem cells - features: 1) Multiply by cell division to replenish dying cells and regenerate damaged tissues. 2) Generate all the cell types of the organ from which they originate. 3) Can potentially regenerate the entire organ from a few cells.
Stem cells - benefits: they have the potential to increase healing and to potentially regenerate an entire organ from a few cells, and they are being investigated in the treatment of various conditions.
Stem cells - controversy: the use of human adult stem cells in research and therapy is not considered controversial; the use of human embryonic stem cells is controversial, as they are derived from human 5-day-old embryos generated by IVF (in vitro fertilization) clinics and designated for scientific research.
Progenitor cells - features: 1) Tend to differentiate into a specific type of cell; already more specific than a stem cell, and pushed to differentiate into the "target" cell. 2) Can divide only a limited number of times.
Progenitor cells - benefits: they act as a repair system for the body, replenishing special cells and maintaining the blood, skin and intestinal tissues; they can be activated in case of tissue injury or damaged or dead cells, leading to recovery of the tissue.
Progenitor cells - controversy: progenitor cells are not subject to controversy.
Human Embryonic Stem Cells

Embryonic stem (ES) cells can differentiate in vitro and in vivo to form a wide range of specialized cell types. Taken from the embryo at the blastocyst stage, ES cells are pluripotent. Their versatility is an asset over adult stem cells, but also a challenge. While ES cells can become insulin-secreting cells in culture, the cells are not as stable as adult stem cells: ES cells studied in vitro and in vivo can differentiate into tumor cells. Similarly, the rapid proliferation rate of ES cells, which is greater than that of adult cells, carries a greater risk of forming tumors in vivo [24-26]. The discovery of methods to isolate and grow human embryonic stem cells (ESCs) in 1998 renewed the hopes of researchers, clinicians, diabetes patients and their families that a cure for type 1 DM, and perhaps non-type 1 DM as well, may be within striking distance. In theory, ESCs could be cultivated and coaxed into developing into the insulin-producing islet cells of the pancreas. With a ready supply of cultured stem cells at hand, a line of ESCs could be grown up as needed for anyone requiring a transplant.

Stem Cell Research - Advantages and Disadvantages

The sources of stem cells, with their advantages and disadvantages (Table 2), are summarized below.

Advantages

1) Stem cell research provides medical benefits in the fields of therapeutic cloning and regenerative medicine.
2) It provides great potential for discovering treatments and cures for a variety of diseases, including Parkinson's disease, schizophrenia, Alzheimer's disease, cancer, spinal cord injuries, diabetes and many more.
3) Limbs and organs could be grown in a lab from stem cells and then used in transplants or to help treat illnesses.
4) It will help scientists to learn about human growth and cell development.
5) Scientists and doctors will be able to test millions of potential drugs and medicines without the use of animal or human testers. This necessitates a process of simulating the effect a drug has on a specific population of cells, which would tell whether the drug is useful or has any problems.
6) Stem cell research also benefits the study of developmental stages that cannot be studied directly in a human embryo, which are sometimes linked with major clinical consequences such as birth defects, pregnancy loss and infertility. A more comprehensive understanding of normal development will ultimately allow the prevention or treatment of abnormal human development.
7) An advantage of using adult stem cells to treat disease is that a patient's own cells could be used; risks would be much reduced because patients' bodies would not reject their own cells.
8) Embryonic stem cells can develop into any cell type of the body and may therefore be more versatile than adult stem cells.

Disadvantages

1) The use of embryonic stem cells for research involves the destruction of blastocysts formed from laboratory-fertilized human eggs. For people who believe that life begins at conception, the blastocyst is a human life, and destroying it is immoral and unacceptable.
2) Like any other new technology, it is completely unknown what long-term effects such an interference with nature could produce.
3) Embryonic stem cells may not be the solution for all ailments.
4) According to recent research in which stem cell therapy was used on heart disease patients, it was found that the therapy can make their coronary arteries narrower.
5) A disadvantage of most adult stem cells is that they are pre-specialized: for instance, blood stem cells make only blood, and brain stem cells make only brain cells.
6) Embryonic stem cells are derived from embryos that are not a patient's own, and the patient's body may reject them.
Hematopoietic and Bone Marrow Cells for Type 2 Diabetes Mellitus (DM Type 2)

Autologous bone marrow contains hematopoietic stem cells, a mixture of mononuclear cells, a few mesenchymal cells, and other cells. Peripheral blood stem cells are mainly selected by their CD34 antigen positivity. Different preparations of hematopoietic cells have been claimed to be effective in correcting hyperglycemia, improving endogenous insulin production, and diminishing or eliminating the need for insulin and other diabetes-controlling treatments [19-21]. Wang, et al., [22] used autologous bone marrow to treat 31 patients with stem cell infusion into the major arteries feeding the pancreas. The HbA1c dropped by more than 1.5% within 30 days, and the C-peptide had increased at the 3-month follow-up. All patients were reported to have had a significant reduction of their anti-diabetic medications [23]. Two recent meta-analyses of published trials concluded that both BM-MNC and peripheral blood mononuclear cell infusion may result in improvement of HbA1c, fasting plasma glucose, C-peptide levels, and endogenous insulin production at 12 months in the majority of treated patients.
Stem cells are promising tools for addressing the generation of beta-like cells/insulin-secreting cells (ISCs) as well as immunomodulation (Figure 2) [25].

Mesenchymal Stem Cells

Mesenchymal stem cells (MSCs) are self-renewing multipotent cells that have the capacity to secrete multiple biologic factors that can restore and repair injured tissues. Preclinical and clinical evidence has substantiated the therapeutic benefit of MSCs in various medical conditions. Currently, MSCs are the most commonly used cell-based therapy in clinical trials because of their regenerative effects, ease of isolation and low immunogenicity. Experimental and clinical studies have provided promising results using MSCs to treat diabetes.

In 2015, investigators from Sweden were the first to evaluate the safety and efficacy of autologous MSC treatment in newly diagnosed DM type 1. Stem cells were harvested from the iliac crest bone marrow, and the median systemic single dose was 2.75 × 10⁶ cells/kg. They concluded that administration of MSCs did not result in adverse events in any of the ten patients and provided promising C-peptide concentrations at the 1-year follow-up. This phase I trial did not show any functional differences between the control and MSC groups in hemoglobin A1c (HbA1c) or insulin dose.
Hu and coworkers conducted a single-center, double-blind study examining the safety, feasibility and preliminary outcomes of umbilical cord Wharton's jelly-derived MSCs for new-onset type 1 diabetics [35]. The MSC-treated group underwent two intravenous infusions (mean cell count of 2.6 × 10⁷) separated by 4 weeks. Postprandial glucose and HbA1c measurements were lower in the experimental cohort between 9 and 24 months after MSC infusion. Insulin usage and fasting C-peptide were also significantly improved in the MSC group. The study authors concluded that in their small study, which was not powered to detect functional differences, the transplant of umbilical cord MSCs was feasible and safe.

A pilot study in China involving the delivery of placenta-derived MSCs to patients with long-standing DM type 2 revealed that the transplantation was safe, easy and potentially efficacious [36]. This investigation included ten patients with type 2 diabetes of duration ≥ 3 years, insulin dependent (≥ 0.7 U/kg/day) for at least 1 year, with poorly controlled glucose. The subjects received on average 1.35 × 10⁶/kg placental stem cells on three separate occasions, with 1-month intervals between intravenous infusions. Six months after treatment, the insulin dosage and HbA1c measurements improved for all the patients. Moreover, C-peptide and insulin release were also higher after MSC treatment. In addition, this study included a group of individuals closer to actual clinical scenarios, as they also had other comorbidities, including heart disease, kidney disease and vascular complications. Lately, researchers have developed insulin-secreting MSCs and delivered them, in combination with hematopoietic stem cells, to patients with DM type 1 [37].

Pluripotent Stem Cells

Pluripotent stem cells (PSCs) have the ability to self-renew and to differentiate into the three germ layers (ectoderm, endoderm and mesoderm), and hence can play an important role in regenerative medicine and cell therapy. PSCs are obtained from the inner cell mass of the blastocyst (embryonic stem cells, ES) or from the foetal genital ridge (embryonic germ cells, EG). Human ES cell lines were first reported in 1998 by Prof. Thomson and his group [27], whereas human EG cell lines were reported by Prof. Shamblott in the same year [28]. Technology also exists to derive PSCs from adult somatic cells by reprogramming them to an embryonic state using a cocktail of factors (induced pluripotent stem cells, iPS), or by allowing factors present in the oocyte cytoplasm to reprogramme somatic cells (therapeutic cloning). Prof. Yamanaka shared the Nobel Prize for Medicine in 2012 for iPS technology [29].
Prof. Mitalipov's group in 2013 [30] was the first to derive a human ES cell line by somatic cell nuclear transfer (SCNT). Jiang, et al., [31] observed that 30% of transplanted mice showed a reduction in hyperglycemia over a period of six months on transplantation of insulin-positive cells obtained by differentiating ES cells. Thus, proof of concept for the use of human ES cells for diabetes was established; however, the process remains highly inefficient. Schulz, et al., [32] developed a scalable system for producing functional progenitors, and Bruin, et al., [33] improved the differentiation protocol further, resulting in grafts containing > 80% endocrine cells and in single-hormonal cells expressing either insulin, glucagon or somatostatin, in contrast to the earlier polyhormonal cells. Kirk, et al., [34] demonstrated that human insulin was secreted by the seventh week after transplantation of encapsulated pancreatic progenitors, and that by week 20 enough human insulin was produced to ameliorate alloxan-induced diabetic symptoms.

Conclusion

Both DM type 1 and type 2 are among the diseases most amenable to treatment. Functional restoration of existing beta-cells and transplantation of stem cells or stem cell-derived beta-like cells might provide new opportunities for treatment (Figure 3). However, the use of stem cells to generate a renewable source of beta-cells for diabetes treatment remains challenging, largely due to safety concerns. There has been a large number of small published studies that do not constitute solid scientific proof of the efficacy of the different stem cells being tried. The introduction of pre-prepared or frozen cells, like the MSCs of umbilical or bone marrow origin, by different pharmaceutical companies has proven extremely expensive at this point and is definitely out of reach for the vast majority of individuals. Larger studies are needed to advance the field and to understand the best way to realise its potential. We believe stem cell therapy should only be used within clinical trials at this time, until enough evaluable data become available. The laboratory methods, such as culture conditions and methods of cell counting, have to be better thought out and more uniformly standardized, and the interpretation of the results should be done critically. In summary, regenerative medicine remains a new and exciting field of research that holds much promise for the treatment of patients with endocrinologic diseases of all ages. Evidence-based clinical treatment of diabetic symptoms only adds to the disease burden. With the advent of stem cell therapy, the potential to eradicate diabetes seems to be on the horizon.

Figure 1: Formation of differentiated cells and specialized cells from stem cells and progenitor cells.
Concept for Floating and Submersible Wireless Sensor Network for Water Basin Monitoring

This paper shows the feasibility of a Wireless Sensor Network (WSN) devoted to monitoring water basins, rivers, lakes, and seas, both on the surface and at depth. The swarm of floating probes can be programmed to periodically sink some tens of meters below the surface, collecting data and characterizing water properties, and then come back to the surface. The life span of the probes may be assured by an on-board power supply or by batteries recharged by solar cells. The basic idea of the WSN is reported together with a detailed analysis of the operational constraints, the energy requirements, and the electronic and mechanical design.

Introduction

Wireless Sensor Networks (WSNs) have attracted increasing attention in recent years because of the large number of potential applications. They are used for collecting, storing and sharing data, for environmental monitoring applications [1] [2], surveillance purposes [3], sport performance evaluation [4], agriculture [5], home automation applications and many other purposes [6].

The aim of the present paper is to show the feasibility of a Wireless Sensor Network devoted to monitoring water bodies, both on the surface and at depth. The probes should be able to perform different tasks for various monitoring purposes. To give some examples, they could collect data on the mixing of salt and fresh water at a river estuary (Figure 1), monitor the behavior of marine currents, and collect data on lakes and water basins such as temperature, salinity, pollution and so on. The monitoring operation can be done both at surface level and below the surface. Surface sea-wave statistics may be collected too, just by putting a triaxial accelerometer as an additional sensor in the probe.

In addition, such probes are capable of sinking and coming back to the surface in order to collect data (in particular temperature) at various depths. In the following, we will consider the possibility of reaching a depth as great as 100 m (obviously, when required, the depth should be programmed as a function of the scientific needs), although in most situations depths of 10 - 20 m are satisfactory. Consider, for example, the need to monitor the environmental changes in the mixing of fresh water and sea water. For such applications, the probes should be able to change their volume in order to change their specific weight, so as to be able to sink and come back to the surface. A pressure sensor can monitor the reached depth, triggering the operations to invert the motion by increasing the volume (and consequently decreasing the specific weight).

The shape change requires energy, which should be stored on board each probe: it will be shown that a relatively small power supply can assure more than 100 sink-and-float cycles. Alternatively, energy can be collected by small solar cells.

The floating probes are disposable and can be dispersed by currents on the water surface. The collected data may be interchanged among the probes, forming a Wireless Sensor Network (WSN), and finally transmitted to a GPRS system or to any other locally available wireless data network. Alternatively, the WSN may be periodically visited by an Unmanned Aerial Vehicle (UAV). It is to be noted that the probes will move around on the surface of the water body, but their spatial dispersion will be relatively limited even for time spans of a few days.
Shape of the Probe and Sink/Float System

The probes, having to float at the water surface, do not have strict requirements as far as weight is concerned. A single probe with a 1 dm³ volume may weigh just somewhat less than 1 kg in order to float, and can therefore easily contain many sensors and the needed electronics. Its shape and weight do not depend strictly on the sensors and electronics inside, but mainly on the structures needed to realize the planned sink-and-float cycles. Therefore, let us address this specific problem.

Motion of the Wall

The final shape of the probes will probably be some sort of cylinder, with a weight distribution such as to place the center of gravity in its lower part. However, for the sake of discussion, let us assume a cubic probe, just floating, with a 1 dm³ volume. In order to allow it to sink, we may increase its specific weight by 10% by moving one of its surfaces inward by 1 cm. Of course, there is no problem with this operation when the probe is floating at the surface. However, at 100 m depth, with an extra pressure of 10 kg per cm², we need to apply a force to invert the motion of the surface (Figure 2). Various techniques can be conceived to implement the motion of the wall, but for order-of-magnitude evaluations it is possible to assume that an endless screw is put in motion by a common stepper motor.

Energy Requirements for the System

The force acting on the surface is of the order of 10³ kg at 100 m depth, and the surface has to be moved by 1 cm, requiring approximately 100 J of work. That is not a huge amount of work, and it can be delivered by a power source supplying 1 W for 100 s. Furthermore, let us make the conservative hypothesis that double this energy is required, so in a practical application about 2 W for 100 s is needed (for a better and safer use of the batteries, it is preferable to supply something like 10 W for 20 s); the total energy needed is approximately 0.05 Wh. A common rechargeable battery (e.g., Energizer Rechargeable AA-2650 (HR6)) may supply 2650 mAh at 1.25 V; consequently, its full capacity is evaluated as:

E = 2650 mAh × 1.25 V ≈ 3.3 Wh    (1)

Taking into account that it is better not to extract the entire energy from the battery, preventing possible irreversible damage, we may state that approximately 50 immersion-and-surfacing cycles can be performed by just one battery (with 2 AA batteries it is possible to have energy for more than 100 cycles).

In addition, very small solar cells can be applied on the top surface of the probe. The aim of the solar cell is twofold: to recharge the battery and to act as a sensor to evaluate the solar radiation. In fact, assuming that a solar cell of 1 dm² can collect approximately 2 W for 10 h a day, accumulating a total of 20 Wh, a single solar cell would be able to fully recharge 6 batteries in one sunny day.
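The figures above can be verified with a short back-of-the-envelope script. The following is a minimal sketch in Python, assuming the cited AA cell (2650 mAh at 1.25 V); the 80% usable-capacity fraction is our assumption, chosen to avoid deep discharge and to reproduce the roughly 50 cycles per battery quoted above.

```python
# Back-of-the-envelope check of the sink/float energy budget quoted above.

G = 9.81  # gravitational acceleration, m/s^2

# Force on the 1 dm^2 moving wall at 100 m depth:
# ~10 kgf/cm^2 of extra pressure acting on 100 cm^2 -> ~1000 kgf.
force_n = 10.0 * 100.0 * G                      # ~9810 N

# Work to push the wall 1 cm (0.01 m) against that force.
work_j = force_n * 0.01                         # ~98 J, i.e. ~100 J

# Conservative doubling, as in the text: ~2 W for 100 s per cycle.
energy_per_cycle_wh = 2.0 * work_j / 3600.0     # ~0.054 Wh

# One Energizer AA-2650 cell: 2650 mAh at 1.25 V.
battery_wh = 2.650 * 1.25                       # ~3.31 Wh, equation (1)

# Assumed usable fraction of 80% to avoid deep discharge (our assumption).
cycles_per_cell = 0.8 * battery_wh / energy_per_cycle_wh

print(f"force on wall    : {force_n:.0f} N")
print(f"work per stroke  : {work_j:.1f} J")
print(f"energy per cycle : {energy_per_cycle_wh:.3f} Wh")
print(f"cycles per cell  : {cycles_per_cell:.0f}")   # ~49, i.e. ~50
```

Running it reproduces the ~100 J stroke, the ~0.05 Wh per-cycle budget, and about 50 cycles per cell.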
Sensors on Board

Each probe can be equipped with a standard set of specific sensors such as:

• A triaxial accelerometer.
• Two thermometers: the first operating at the top, at the water-air interface, and the second installed at the bottom of the probe.
• A pressure gauge: to evaluate the depth of the probe during the immersion.
• A photovoltaic system: as stated above, the photovoltaic system can be used for two purposes, to recharge the power supply system and to evaluate the solar radiation and its attenuation while the probe is sinking in a controlled way.
• A salinity sensor: particularly important, for example, when analyzing the mixing of fresh water and sea water.
• A GPS system: necessary to evaluate the horizontal displacements of the probes when at the surface.

The proposed probes can also be used to monitor situations deriving from accidental oil spills or from the spilling of other dangerous substances; in any such situation, specific sensors can be installed on board.

System Architecture

The network of probes is intended to behave and act as a WSN. The architecture of each node can easily be derived by tailoring the multipurpose node described, for example, in [7] and applying the power-consumption techniques reported in [8]. It is to be noted that the energy consumption of the sensors is much smaller than the energy needed to move the probes up and down, taking into account that the data sampling rate is very low and that the sensors can be used in a low-power mode. In [7] [9] the architecture adopted to acquire data from the sensors (including the GPS system) is also described in detail. Needless to say, the probes can transmit data only when at the surface, considering the harsh propagation conditions of the water environment.

An Access Point (AP) node, also described in detail in [7], can be used to collect data and send them to the collecting point (CP), either a fixed ground station or a UAV used as a bridge towards the ground station itself. The radio system may be turned on to transmit data from AP to CP in two different modes: at specific, fixed times, or after a request or inquiry coming from the CP. Depending on the specific application and area of deployment, the radio system may simply be a GPRS system or an ad hoc wireless data radio system. In any case, the useful range may be many km, assuring good coverage of a relatively large water basin.

The AP will have a much more robust power supply and is not intended to sink; it is expected to move coherently with the swarm of probes building the WSN, whether the collecting point is a ground station or a UAV. In addition, for particular applications where the swarm is not expected to have large displacements (e.g., during the monitoring of a closed basin or a small lake), just a single fixed ground station may be needed. In such a case, the node used as AP can send data directly to the ground station, and the life span of the swarm is limited just by the battery life.
The proposed electronic system as a whole, including sensors, battery and control board, can reasonably have a total weight of 0.1 kg, while the box can have a weight of approximately 0.2 kg. Assuming a total volume of 1 dm³ and a total weight of approximately 1 kg, and in order to be able to sink the probe by reducing its volume by 10% as described above, about 0.7 kg remains for the battery pack and the "mechanical" system, mainly the stepper motor. In order to keep the cost of each single probe low, common AA batteries can be used. Each of them weighs approximately 0.03 kg; 10 batteries in each probe weigh 0.3 kg, leaving 0.4 kg of available weight.

According to what is reported in the previous paragraphs, various strategies can be adopted. As a function of the research activity to be performed, the whole amount of space can be used for batteries without implementing any energy-harvesting mechanism or, in other situations, a systematic battery-recharging procedure can be programmed.

Conclusions

The continuous monitoring of specific and detailed hydrological situations cannot be satisfactorily performed using traditional means, whether remote-sensing techniques from satellites and airplanes or direct measurements of the water properties with boats or ships. Following experience in the field of geophysical monitoring through WSNs, the approach described in this paper may be an interesting alternative. In fact, it describes the concept of disposable sounding probes that are able to collect data directly from the water even at relatively significant depths (down to 100 m) and operate cooperatively as a WSN.

The next steps are to build the first prototype of the system and to test it. Since the technology is fully available, no particular problem is expected in the realization of the described network of probes. The main technical problem is to assure the water-tightness of the probe when sinking to the proposed depth of 100 m, where an extra pressure of 10 atm is present. It has to be noted that the main critical point is the coupling of the two parts of the probe, namely the sliding part with respect to the other one. However, some tests on a cylindrical box have already been performed with positive results down to a depth of 40 m, making us confident of finding the best solution to allow the probes to sink to the desired depth.

Figure 1. Example of application of the described WSN of probes. It can be used to monitor the water at the mouth of a river.

Figure 2. Operational principle of a single probe. 1) The probe is floating; 2) thanks to the stepper motor, the probe reduces its volume, modifying its specific weight; 3) the probe is sinking; 4) the probe reaches the desired depth and the pressure gauge signals the stepper motor to increase the volume of the probe; 5) the probe reaches the surface. During the immersion, the sensors installed on the probe measure water parameters.
Algorithmic Analysis of Invisible Video Watermarking using LSB Encoding Over a Client-Server Framework

Video watermarking is extensively used in many media-oriented applications for embedding watermarks, i.e. hidden digital data, in a video sequence to protect the video from illegal copying and to identify manipulations made in the video. In the case of an invisible watermark, the human eye cannot perceive any difference in the video, but a watermark extraction application can read the watermark and obtain the embedded information. Although numerous methodologies exist for embedding watermarks, many of them have shortcomings with respect to performance efficiency, especially over a distributed network. This paper proposes and analyses a 2-bit Least Significant Bit (LSB) parallel algorithmic approach for achieving performance efficiency when watermarking and distributing videos over a client-server framework.

I. INTRODUCTION

We live today in a world where sharing and distribution of digital media such as songs, photos and videos have become very popular. With this rapid increase in sharing and distribution comes the problem of digital media copying and piracy. As mentioned earlier, video watermarking refers to the process of embedding hidden digital data in a video. Ideally, in the case of an invisible watermark, a user viewing the video cannot perceive a difference between the original, unmarked video and the marked video, but a watermark extraction application can read the watermark and obtain the embedded information. The extracted information can then be used to determine the authenticity of a video as well as to help differentiate between an original and a copied video. Some common applications of video watermarking include [1], [6]:

Copyright protection: For the protection of intellectual property, the video data owner can embed a watermark representing copyright information in the video data. This watermark can help prove ownership in a legal court when someone has infringed on the owner's copyrights. For instance, embedding the original video clip by non-invertible video watermarking algorithms during the verification procedure helps to prevent multiple-ownership problems in some cases.

Video authentication: Popular video editing software available today permits users to easily tamper with video content. Authentication techniques are consequently needed in order to ensure the authenticity of the content. One solution is the use of digital watermarks: a timestamp, camera ID and frame serial number are used as a watermark and embedded into every single frame of the video stream.

Video fingerprinting: To trace the source of illegal copies, a fingerprinting technique can be used. In this application, the video data owner can embed different watermarks in the copies of the data that are supplied to different customers. Fingerprinting can be compared to embedding a serial number in the data that is related to the customer's identity. It enables the intellectual property owner to identify customers who have broken their license agreement by supplying the video data to third parties.

Copy control: The information stored in a watermark can be used to directly control digital recording devices for copy-protection purposes. In this case, the watermark represents a copy-prohibit bit, and watermark detectors in the digital-media recorder determine whether the video data offered to the recorder may be stored or not.
This paper presents a simple yet very efficient algorithmic approach for invisibly watermarking videos using the concept of 2-bit Least Significant Bit (LSB) encoding/decoding over a client-server framework. The following sections of this paper describe in detail how the structure of a video file can be deconstructed to isolate particular video frames for watermarking, and how the encoding/decoding modules of the watermarking algorithm work using Base64 coding.

II. IDENTIFYING I-FRAMES FROM A VIDEO SEQUENCE

Video files, such as those coded by the MPEG and H.262/H.263 standards, comprise information headers followed by a sequence of image frames [2], [3]. For the purposes of clarity and demonstration, all video codings mentioned here will refer to the MPEG-1 standard. The video header bit-stream consists of a hierarchical structure with seven layers [4]. The Picture layer consists of a picture header followed by consecutive picture frames. These picture frames are classified as follows:

I-frame (intra-coded picture): a picture frame that is coded independently of all other frames. Each GOP begins (in decoding order) with this type of frame.

P-frame (predictive coded frame): contains motion-compensated difference information relative to previously decoded frames. A P-frame may refer to only one other preceding picture frame.

B-frame (bi-predictive coded frame): this frame also contains motion-compensated difference information relative to previously decoded pictures but, as the name indicates, such a frame can refer to two other picture frames, one preceding and one succeeding the B-frame.

D-frame (DC direct coded picture): serves as a fast-access representation of a picture for loss robustness or fast-forwarding.

Continuing the above-mentioned hierarchy, the Picture layer consists of a picture header followed by slice data. Among the fields in the picture header, the most important ones are the picture-start-code (a 32-bit value of 0x00000100 indicating the beginning of a picture frame) and the picture-coding-type, a 3-bit value indicating the frame type. Figure 1 shows the Picture-layer header format [4]: picture-start-code (32 bits), temporal-sequence-number (10 bits), picture-coding-type (3 bits), followed by slice data.

Based on the value of the picture-coding-type, one can differentiate between an I-frame, a P-frame, a B-frame and a D-frame. The value of the picture-coding-type is 001 for an I-frame, 010 for a P-frame, 011 for a B-frame and 100 for a D-frame. Since the first frame in every GOP is an I-frame, and because I-frames contain important video information, I-frames are ideally suited for being watermarked.

In the algorithmic approach proposed in this paper, consecutive I-frames are identified and selected for watermarking, and a checksum of the watermark is added to the I-frames for further authentication and security. The following sections describe how the actual encoding/decoding of the watermark is done on the I-frames.
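As an illustration of the header fields just described, the sketch below scans a byte stream for MPEG-1 picture-start-codes and reads the 3-bit picture-coding-type that follows the 10-bit temporal-sequence-number. It is a simplified parser, not the paper's implementation: it assumes a byte-aligned MPEG-1 elementary stream, and the input file name is hypothetical.

```python
# Minimal sketch: locate picture headers in an MPEG-1 elementary video
# stream and classify each frame from its 3-bit picture-coding-type.

FRAME_TYPES = {0b001: "I", 0b010: "P", 0b011: "B", 0b100: "D"}

def iter_frame_types(data: bytes):
    """Yield (offset, frame_type) for every picture header in `data`."""
    start = b"\x00\x00\x01\x00"              # picture-start-code 0x00000100
    pos = data.find(start)
    while pos != -1 and pos + 6 <= len(data):
        # After the 32-bit start code come 10 bits of temporal-sequence-
        # number; the next 3 bits are the picture-coding-type, i.e. bits
        # 5..3 of the second byte following the start code.
        coding_type = (data[pos + 5] >> 3) & 0b111
        yield pos, FRAME_TYPES.get(coding_type, "?")
        pos = data.find(start, pos + 4)

if __name__ == "__main__":
    with open("sample.mpg", "rb") as f:      # hypothetical input file
        stream = f.read()
    i_frames = [off for off, t in iter_frame_types(stream) if t == "I"]
    print(f"found {len(i_frames)} I-frames, first offsets: {i_frames[:5]}")
```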
III. ENCODING/DECODING THE WATERMARK IN VIDEO FRAMES

As explained in the previous section, encoding and decoding of the watermark are done on consecutive I-frames. The watermark itself has been chosen to be a document file containing text and logos that describe the authentication and ownership rights of the video. The proposed watermarking algorithm works in three phases:

1. Encoding the watermark: A normal, unmarked video serves as the input to the watermark encoder. During encoding, whenever an I-frame is detected, the RGB color pixel values of that I-frame are stored in a matrix format, and the watermark bits are embedded into the least significant bits of each RGB component. This process is continued until every character in the watermark text file has been read and embedded in the I-frame, and the entire procedure is repeated for the other I-frames. In order to improve robustness, a checksum value of the watermark text file is also added to each I-frame. The task of encoding is performed at the server end of the framework.

2. Distributing the watermarked video: After completing the encoding process, the server sends the watermarked video and the Base64-encoded character matrix to a compliant client over a secure network. The client then begins decoding the watermarked video.

3. Decoding the watermark: After the client receives the watermarked video, the decoding process begins. Again, when an I-frame is detected, each color pixel is decomposed into its RGB components. The Base64-encoded matrix is read as a byte stream, and each read group of 6 bits is grouped into 3 pairs: V1, V2 and V3.

Figure 2 shows a schematic diagram depicting the watermark encoding/decoding process.

IV. ANALYZING ALGORITHM PERFORMANCE EFFICIENCY

The algorithmic approach for watermarking proposed in this paper distributes the tasks of encoding and decoding to the server end and the client end respectively, and makes up for the time lost in network delays during the sending and receiving of videos by making the encoding/decoding processes highly efficient. To do so, the algorithm has been designed to be operated upon in parallel, since individual pixel watermarking operations are independent of each other and the data structures involved in the algorithm are all stored and manipulated in the form of matrices. Owing to such parallelization, the encoding/decoding processes can be carried out on many-core General Purpose Graphics Processing Units (GPGPUs) [5], making the performance speed-up incredibly large. In a sample test run, for an MPEG-1 video file of size 1 MB, a watermark document file of size 200 KB, and an NVIDIA Quadro 4000 execution platform (compute capability 2.0, 256 cores), a speed-up of the order of ~75 was achieved compared to the normal sequential execution of the same.

V. CONCLUSION

This paper proposes and describes how invisible watermarking can be done for videos over a client-server framework using 2-bit LSB coding with a checksum. To enhance performance efficiency, the proposed algorithm and the associated data structures have been designed so that they can be operated upon in parallel with the help of many-core processors such as GPGPUs, thus greatly increasing the performance of the watermarking algorithm. This has been further confirmed by the sample test runs described in the previous section. As a further enhancement, the proposed algorithm can be extended to watermark videos of other coding standards such as AVI, H.262/H.263/H.264, etc. Also, algorithms in the frequency domain, such as Fourier transforms and wavelets, could instead be used for encoding/decoding the watermark.
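To make the 2-bit LSB phase of Section III concrete, the following Python sketch embeds and extracts a payload in the manner described there: each group of 6 watermark bits is split into three pairs (V1, V2, V3) and written into the two least significant bits of a pixel's R, G and B components. It operates on a single frame held as a NumPy array; I-frame extraction, Base64 handling, the checksum and the client-server transport are deliberately out of scope, and the sample payload is arbitrary.

```python
import numpy as np

def embed(frame: np.ndarray, payload: bytes) -> np.ndarray:
    """Embed `payload` into an HxWx3 uint8 frame, 6 bits per pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    bits = np.pad(bits, (0, (-len(bits)) % 6))       # pad to a multiple of 6
    pairs = bits.reshape(-1, 3, 2)                   # (pixels, V1..V3, 2 bits)
    vals = (pairs[:, :, 0] << 1) | pairs[:, :, 1]    # one 2-bit value/channel
    out = frame.copy().reshape(-1, 3)
    n = vals.shape[0]
    assert n <= out.shape[0], "payload too large for this frame"
    out[:n] = (out[:n] & 0b11111100) | vals          # overwrite the 2 LSBs
    return out.reshape(frame.shape)

def extract(frame: np.ndarray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of payload from a marked frame."""
    n_pix = (n_bytes * 8 + 5) // 6
    vals = frame.reshape(-1, 3)[:n_pix] & 0b11
    bits = np.stack([(vals >> 1) & 1, vals & 1], axis=-1).reshape(-1)
    return np.packbits(bits[: n_bytes * 8]).tobytes()

# Round-trip demo on a random frame with a hypothetical payload.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
marked = embed(frame, b"(c) 2013 owner-id 42")
assert extract(marked, 20) == b"(c) 2013 owner-id 42"
```

Since only the two low bits of each channel change, each component moves by at most 3 out of 255, which is why the mark stays invisible; the same independence between pixels is what makes the per-pixel operations trivially parallelizable on a GPGPU.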
Development and psychometric evaluation of a triage questionnaire (QTrix): Exploratory factor analysis and item response theory analysis

Introduction: Triage errors can occur in all emergency departments, regardless of the type of triage system being used. One way to minimize triaging errors is by enhancing the triage officers' knowledge of and attitude towards triage. Knowledge and attitude can be assessed by questionnaire. This study aims to perform content, face, and construct validation of a newly developed triage questionnaire, QTrix, designed for healthcare personnel in a tertiary teaching hospital in Kelantan, Malaysia, that uses the three-tier Malaysian Triage Category system.

Methods: This study consisted of two phases: the first phase was the questionnaire development phase, which included content validity with an expert panel and face validity using 30 respondents; the second phase was the psychometric assessment phase, which included item response theory and exploratory factor analysis using 139 respondents.

Results: The knowledge section, with 12 remaining items, was considered unidimensional by item response theory after removing items with extreme difficulty coefficients (outside the range of −3 to +3) and items with very low discrimination values (<0.35). After exploratory factor analysis, two items in the attitude section were removed due to low factor loadings (<0.3) and high item complexity. The reliability of the remaining 13 items in the attitude section was very good, as shown by Cronbach's alpha values of more than 0.8.

Conclusion: The QTrix questionnaire is a well-validated and reliable tool to assess knowledge and attitude on triage. Its use among healthcare personnel can help minimize triaging errors in emergency departments that utilize the Malaysian Triage Category system.

Introduction

Triage is a French word meaning to sort out. In the past, this term was used for agricultural products. 1 In medicine, to triage means to prioritize patients according to their severity. 2 Triaging is necessary in situations where many patients must be treated at the same time, for example, in the Emergency Department (ED). In 2013, there were nearly 10,000,000 outpatient ED visits in Malaysian hospitals, and about three-quarters of these visits were in government hospitals. 3 A triage officer in the ED is expected to perform every triage accurately despite having very limited time and information. This is not easy even with experience, and a wrong triage can put the patient at risk. A wrong triage can take the form of either under-triaging or over-triaging a patient. Under-triaging, assigning a less urgent code to a patient, can cause the patient to deteriorate while waiting for treatment, while over-triaging, assigning a more urgent code, can lead to unnecessary wasting of resources and manpower. 4 The most important factors in improving one's triage are still education, experience, and empathy. 5 The Malaysian Triage Category (MTC) is a three-tier triaging system in which patients can be triaged into three categories: red, yellow, and green. 6 Upon primary triage, all unstable patients who need urgent care will be triaged to red to be managed immediately, while other stable patients will undergo secondary triage.
6 After a quick evaluation in secondary triage, the triage officer will further decide to triage the patient to yellow or green, according to the respective clinical urgency. 6 In all hospitals in Malaysia, the assistant medical officers (AMOs) and staff nurses (SNs) are the main triage officers, and in selected hospitals such as the tertiary hospitals and teaching hospitals, doctors are also posted as triage officers. In the literature, knowledge and attitude have been assessed using questionnaires, and the results are reliable and reproducible. 7,8 Exploratory factor analysis (EFA) is a form of factor analysis for polytomous data that explores the presence of factors in relation to the number of items being analyzed and has been used widely in psychology and in health education. [9][10][11][12][13] Item response theory (IRT) is another tool that has been used for test validations by assessing the difficulty and the discriminative level of a test. 14 So far, there has been no single questionnaire to assess the knowledge and the attitude among triage officers in Malaysia. The aims of this study are to perform content, face, and construct validations on a recently developed questionnaire, the QTrix questionnaire, to assess the knowledge and the attitude on ED triage among healthcare personnel in the ED of Hospital Universiti Sains Malaysia (EDHUSM), where the three-tier MTC system is used. Psychometric evaluation of the questionnaire is carried out using EFA and IRT to assess its validity and reliability. Study design This study consisted of two phases: (1) the development phase of the QTrix questionnaire, including content and face validation; and (2) the psychometric evaluation phase, which focused on internal structure validation using IRT and EFA. The internal reliability of this questionnaire is further evaluated using Cronbach's alpha coefficient and the corrected item-total correlation (CITC). The study was conducted between 16 February 2017 and 15 February 2018. Written informed consent was not necessary because no patient data have been included in this article. Phase 1: questionnaire development The QTrix questionnaire was written in English to minimize the misinterpretation of medical terms. It has three sections: the sociodemographic, knowledge, and attitude sections. Items in the knowledge section were derived from the book titled Emergency Severity Index (ESI): A Triage Tool for Emergency Department Care. 9 Content validation was conducted by discussing the relevance of each item in each section of the questionnaire with the expert panels. 10 The expert panels were selected based on their expertise in ED triage and in questionnaire validation. Five consultants from the Emergency Medicine Department and one lecturer from the Unit of Biostatistics and Research Methodology were recruited during this phase. The content-validated questionnaire then underwent pre-testing among 30 subjects to verify the applicability of each item and to evaluate respondents' understanding of the questionnaire. 10 This process is also known as the face validation. 10 The participants were recruited via convenience sampling among medical officers, assistant medical officers, and staff nurses from EDHUSM who had more than 5 years of working experience in ED. The variability of their responses and their understanding of the questions, readability, and presence of any ambiguity were recorded and evaluated. 
The results were used to produce the preliminary version of the questionnaire, which was subjected to the remainder of the evaluation in the study.

Phase 2: psychometric evaluation

The internal structure validation of this questionnaire was performed among the healthcare personnel in EDHUSM via construct validation, or psychometric evaluation. 10 The sample population for this phase included house officers, assistant medical officers, staff nurses, and health assistants. Those already involved in the pre-testing of the questionnaire were excluded. The preliminary version of the QTrix questionnaire was administered via an online form, and the results were recorded in the Statistical Package for the Social Sciences (SPSS) and analyzed using R studio. Psychometric evaluation of this questionnaire involved the use of IRT for the dichotomous data of the knowledge section and EFA for the polytomous data of the attitude section. Most recommendations in the literature agree on a sample size of at least 100 for both EFA and IRT, while others advise a respondents-to-items ratio ranging from 3:1 to 20:1. [11][12][13][14][15] The flowchart for the whole process of IRT and EFA is summarized in Figure 1.

Phase 2a: psychometric evaluation using IRT

IRT makes two postulations in each analysis: that all items in a test are unidimensional, and that a respondent's chance of answering one item correctly is independent of the same respondent's chance of answering the next item correctly. 10 Using two-parameter logistic IRT (2-PL IRT), each item from the knowledge section was given its difficulty and discrimination parameters. Items with extreme difficulty coefficients, either too positive (>+3) or too negative (<−3), and items with very low discrimination coefficients (<0.35) were removed to ensure unidimensionality. 15,16 A test was deemed unidimensional once items with extreme difficulty coefficients and non-discriminative items were removed. 10,15,16 Subsequently, the performance of a respondent on each test item can be predicted from the respondent's ability, and the relationship between the respondent's ability and the item's discrimination and difficulty can be presented in item characteristic curves (ICC) and item information curves (IIC). 10 The results from IRT are further presented as the ICC, the IIC, the test information function graph, and the test response function graph. In an ICC, a more discriminative item is represented by a steeper slope, while in an IIC, a more discriminative item is represented by a higher peak. 16 In both ICC and IIC, curves on the left side represent easier items with more negative difficulty coefficients, while curves on the right side represent more difficult items with more positive difficulty coefficients. 16
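The screening rule just described can be expressed compactly. The sketch below, in Python rather than the R workflow used in the study, applies the difficulty and discrimination cut-offs to illustrative 2-PL estimates; the (a, b) values are invented for the example and are not the study's estimates, which would come from actually fitting the 2-PL model to the response data.

```python
import numpy as np

def icc(theta, a, b):
    """2-PL item characteristic curve: P(correct answer | ability theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative (discrimination a, difficulty b) estimates, one per item.
items = {"K1": (1.20, -0.80), "K2": (0.10, 0.50), "K3": (0.90, -4.20)}

# Keep an item only if b lies within [-3, +3] and a is at least 0.35.
kept = {name: (a, b) for name, (a, b) in items.items()
        if -3.0 <= b <= 3.0 and a >= 0.35}
print("kept items:", sorted(kept))           # K1 survives; K2 and K3 drop

# The ICC of a kept item across a grid of ability values.
theta = np.linspace(-4.0, 4.0, 9)
print("ICC(K1):", np.round(icc(theta, *items["K1"]), 2))
```

Note that, as reported later, the study retained a few very easy items (K5, K18, K22) despite b < −3 because their discrimination was high; a hard cut-off like the one above is only the starting point.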
Phase 2b: psychometric evaluation using EFA

Analysis of the attitude section using EFA required assessment of the normality of the data to determine the mode of analysis: maximum likelihood (ML) if the data were normally distributed, or principal axis factoring (PAF) if the data were not normally distributed. 17,18 In this section, rotation was carried out using oblimin, considering that the factors are correlated with each other. 17,18 The data were assessed for suitability for analysis using the Kaiser-Meyer-Olkin (KMO) Measure of Sampling Adequacy (MSA), which should be at least 0.7, and using Bartlett's test of sphericity, where the p value needs to be significant. [17][18][19] The number of extracted factors for the attitude section was determined using Kaiser's eigenvalue-greater-than-one rule, where factors with an eigenvalue of more than 1 are kept. 17 The number of extracted factors was confirmed with Cattell's scree test, parallel analysis, very simple structure, and Velicer's minimum average partial (MAP). 17 A good factor must have at least three items, and a good item must have a factor loading of more than 0.3 with no cross-loadings, an item communality of more than 0.4, and an item complexity as close to 1 as possible. 12,13,18 Cross-loading is the term for an item that bears significant factor loadings on more than one factor. 12,13,18 A good factor correlation must be less than 0.85, indicating that there is no factor overlap. 17

The result obtained from EFA with the items that fulfilled the above criteria was then assessed for internal consistency. Among the assessments of internal reliability are Cronbach's alpha coefficient and the CITC. A satisfactory Cronbach's alpha coefficient ranges from 0.7 to not more than 0.9, while an ideal CITC is defined as more than 0.5. 10,12 Cronbach's alpha if an item is deleted is only considered significant if there is a marked improvement. 10
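The two reliability statistics named above are simple to compute from a respondents-by-items score matrix. A minimal Python sketch follows, with random placeholder scores standing in for the real Likert responses (so the printed alpha will be near zero rather than the 0.7-0.9 reported later):

```python
import numpy as np

def cronbach_alpha(x: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(x: np.ndarray) -> np.ndarray:
    """CITC: correlation of each item with the sum of the *other* items."""
    totals = x.sum(axis=1)
    return np.array([
        np.corrcoef(x[:, j], totals - x[:, j])[0, 1]
        for j in range(x.shape[1])
    ])

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(139, 13))   # 139 respondents, 13 items
print(f"alpha = {cronbach_alpha(scores):.2f}")
print("CITC  =", np.round(corrected_item_total(scores), 2))
```

The CITC is "corrected" in the sense that each item is correlated with the total of the other items, not with a total that includes the item itself.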
Results

Phase 1: questionnaire development

The QTrix questionnaire has three sections: the sociodemographic, knowledge, and attitude sections, which are summarized in Table 1. In the content validation of the knowledge section, knowledge question 10 (K10) in Table 2 was identified as unfit for its intended red domain; the item was therefore modified by altering the patient's heart rate to "190 beats/min", so that the number of items remained equal for each domain, maintaining equal coverage of items per domain. In the attitude section, presented in Table 3, the expert panel unanimously agreed that all items were consistent with the intended domains in terms of relevance, coverage, and representativeness.

For face validation, the content-validated QTrix questionnaire was pre-tested among senior healthcare personnel and medical officers. From their responses, a few changes were made in terms of wording, terminology, and layout to ensure that the questions were clear and easy to understand. The final number of items in the final draft of the questionnaire at this stage remained the same.

Table 2 (excerpt), item K9: Concerned parents arrived with their 4-day-old baby girl, who was sleeping peacefully in the mother's arms. "I saw some blood when I changed her diaper," reported the father. The mother said that the baby was nursing well and weighed 3.2 kg at birth.

Phase 2: psychometric evaluation

There were 139 respondents for the construct validation. A summary of the respondents' sociodemographic data is shown in Figure 2. The sample size for this phase fulfilled the minimum requirement of 100 respondents. [11][12][13][14] The responses in the knowledge section were coded as 1 = correct and 0 = incorrect or missing. The responses in the attitude section were coded as 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. Any missing data were handled using the MICE function in R studio.

Phase 2a: psychometric evaluation using IRT

Items in the knowledge section of the questionnaire were given their difficulty and discrimination parameters using 2-PL IRT. A total of 12 items were identified as having extreme difficulty values and very low discrimination values, namely K3, K4, K8, K11, K13, K14, K15, K16, K19, K21, K23, and K24. These items were removed, and the remaining 12 items were unidimensional. The result is presented in Table 2. The results from IRT are presented in Figure 3(a) to (d): the ICC, the IIC, the test information function, and the test response function, respectively. The knowledge section of this questionnaire has a good discriminative peak, with a value of more than 1.7, as shown by the test information function curve in Figure 3(c). This section also has a good, smoothly rising slope, signifying an adequately discriminative test, as portrayed by the test response function in Figure 3(d).

Phase 2b: psychometric evaluation using EFA

Assessment of normality in the attitude section revealed that the data were not normally distributed. The data were deemed fit for analysis because the MSA was 0.82, indicating good sampling adequacy. Bartlett's test of sphericity was significant with p < 0.05, indicating that there were correlations between the items in this section. Kaiser's eigenvalue-greater-than-one rule suggested a three-factor solution, while Cattell's scree test and Velicer's MAP suggested only two factors. Analysis with two factors gave the most acceptable result, and attitude question items three (A3a) and five (A5a) were removed due to their low factor loadings and high item complexity. There was no factor overlap, as indicated by factor correlations of less than 0.85. The two factors identified were labeled Attitude Factor 1 (PA1) and Attitude Factor 2 (PA2). PA1 had 10 items originating from all three domains of the attitude section, while PA2 had only three items, originating from the behavior domain. Cronbach's alpha coefficient was very good for all items, ranging from more than 0.7 to not more than 0.9. There is no significant improvement in Cronbach's alpha if any item in this questionnaire is deleted. Each item's CITC of more than 0.45 was accepted and agreed upon among the expert panel. The results from the EFA are summarized in Table 3.

Table 3 (excerpt): "Having separate areas for primary triage and secondary triage helps to triage patients better"; A5c: "If a patient is considered stable by the primary triage, he/she will be sent to the secondary triage for further assessment." Footnotes: CITC = corrected item-total correlation; (a) an item can be removed if this value is significantly higher than Cronbach's alpha; (b) reverse-scored items were recoded during analysis so that a better attitude received a higher mark (1 = strongly agree, 2 = agree, 3 = neutral, 4 = disagree, 5 = strongly disagree).

Discussion

This study aimed to develop and validate the recently developed triage questionnaire, QTrix. The questionnaire was designed in the English language, based on the Malaysian three-tier triaging system, to assess knowledge and attitude on ED triage among healthcare personnel from EDHUSM, a tertiary teaching hospital in Kelantan, Malaysia. Each item in each domain was discussed among the expert panel regarding its relevance and suitability as part of the questionnaire's content validity. An identified limitation of this phase was the lack of psychologist input, considering that the questionnaire assessed not only the respondents' knowledge but also their attitude on ED triage. The expert panel and the intended group of respondents may have different understandings of the items in the questionnaire; therefore, pre-testing was required following the content validation.
Discussion

This study aimed to develop and validate the recently developed triage questionnaire, QTrix. The questionnaire was designed in the English language, using the Malaysian three-tier triaging system, to assess knowledge of and attitudes toward ED triage among healthcare personnel from EDHUSM, a tertiary teaching hospital in Kelantan, Malaysia. Each item in each domain was discussed among the expert panel regarding its relevance and suitability as part of the questionnaire's content validity. An identified limitation of this phase was the lack of psychologist input, considering that this questionnaire assessed not only the respondents' knowledge but also their attitudes toward ED triage. The expert panel and the intended group of respondents may have different understandings of the items in this questionnaire; therefore, pre-testing was required following the content validation.

Pre-testing, also called face validation, was conducted to ensure that each item is understood the way the researcher intended. Pre-testing can enhance responses by improving respondents' motivation and cooperation, decreasing any form of respondent disapproval, inviting more respondents, allowing other readers to be convinced by the results, and boosting public acceptance. 10

The internal structure validation, or construct validity, of the QTrix questionnaire involved performing a psychometric evaluation of the data using IRT and EFA. IRT has been widely used since the 1970s in tests such as the Scholastic Aptitude Test (SAT) in the United States and is suitable for assessing dichotomous data. 20 EFA is a form of factor analysis that is suitable for polytomous data; it was mainly used in psychology and education in the past and is now slowly gaining traction in the health sector. 14 Both IRT and EFA can be carried out using R Studio, a free, open program with an extensive, user-friendly network called the Comprehensive R Archive Network (CRAN). 21

Using IRT, each item was analyzed for its difficulty and discrimination values. A difficult question has a positive difficulty value and vice versa, and a question with a high discrimination value can distinguish a high-ability respondent from a low-ability one. 15 There were three items with difficulty coefficients of less than −3, namely K5, K18, and K22. These items were retained because of their good discrimination coefficients (>0.7), and this action did not alter the unidimensionality of the test. Difficulty values were translated into ability in the ICC, IIC, test information function, and test response function depicted in Figure 3. In the IIC, an item that peaks at a negative ability value can discriminate respondents with lower ability, while an item that peaks at a positive ability value can discriminate respondents with higher ability. 15,16 Figure 3(b) displays the IIC of the QTrix questionnaire, where the highest peak information was represented by knowledge question 20 (K20), at an ability value of −1.6. The test information function curve in Figure 3(c) shows that the knowledge section of the QTrix questionnaire can best discriminate respondents with ability values within the range of −4 to +2. This means that the QTrix questionnaire best discriminates respondents with slightly lower ability and is therefore suitable for the assessment of knowledge and attitude among healthcare personnel involved in ED triage, who are mostly AMOs and SNs.

EFA, by contrast, examines the correlations between the items in the attitude section of this questionnaire: whether the items are (1) strongly related to each other, (2) not related to each other, or (3) can be grouped together with other similar items. 10 The items in the attitude section of the QTrix questionnaire were initially drafted with three domains: the affective, behavioral, and cognitive components of attitude. 22 After analysis, only two factors were identified: Attitude Factor 1 and Attitude Factor 2. Attitude Factor 1 had 10 items that originated from all three domains of the attitude section, while Attitude Factor 2 contained three items that originated from the behavior domain only. Attitude Factor 1 is the explicit attitude, and Attitude Factor 2 is the implicit attitude.
Attitude has been studied as far back as World War II by Stouffer et al., who concluded that a person's attitude does not necessarily predict that person's behavior; this was supported by LaPiere's observational study. [23][24][25] This buttresses the finding of two factors in the attitude section: the explicit and implicit attitudes, or in simpler words, conscious and subconscious attitudes. Explicit attitude requires more thinking due to its conscious nature, as compared with implicit attitude, which occurs subconsciously. 26 While explicit attitude is easily measurable by written tests, implicit attitude is measured by how fast the test is carried out. 26 The items in Attitude Factor 1 require the respondents to think more than do the items in Attitude Factor 2; hence, it is reasonable to rename these two factors as explicit attitude and implicit attitude, respectively.

The internal reliability of the attitude section of this questionnaire was demonstrated by the good Cronbach's alpha coefficient of each factor, ranging from 0.7 to 0.9. Although a few items did not meet a good CITC value, these items were kept because of their good factor loadings and Cronbach's alpha coefficient values. The removal of these items would not significantly improve the overall value of Cronbach's alpha in the attitude section.

The QTrix questionnaire has good psychometric properties and is a valid and reliable tool to evaluate knowledge and attitude on triage among healthcare personnel in EDs that practice three-tier triaging systems like the MTC. It is thus not suitable for hospitals outside Malaysia that do not use the same three-tier system. Another limitation of this study was its relatively small sample size. The validity and reliability of this questionnaire could be improved by running the study with a bigger sample size and by adding confirmatory factor analysis (CFA) to the psychometric evaluation.

Declaration of conflicting interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Availability of data and materials

The data and materials are available in soft copy, attached as SPSS data files named as follows: 1.SOCIODEMOGRAPHICdata.sav; 2.KNOWLEDGE.sav; 3a.ATTITUDE.sav; 3b.ATTITUDEwith IMPUTEDdata.sav.

Informed consent

All participants involved in this study gave informed consent, obtained by the authors.
Constrained Automated Mechanism Design for Infinite Games of Incomplete Information

Abstract

We present a functional framework for automated mechanism design based on a two-stage game model of strategic interaction between the designer and the mechanism participants, and apply it to several classes of two-player infinite games of incomplete information. At the core of our framework is a black-box optimization algorithm which guides the selection process of candidate mechanisms. Our approach yields optimal or nearly optimal mechanisms in several application domains using various objective functions. By comparing our results with known optimal mechanisms, and in some cases improving on the best known mechanisms, we provide evidence that ours is a promising approach to parametric design of indirect mechanisms.

Motivation

While the field of Mechanism Design has been quite successful within a wide range of academic disciplines, much of its progress came as a series of arduous theoretical efforts. In its practical applications, however, successes have often been preceded by a series of setbacks, with the drama of auctioning radio spectrum licenses that unfolded in several countries providing a powerful example [McMillan, 1994]. A difficulty in practical mechanism design that has been especially emphasized is the unique nature of most practical design problems. Often, this uniqueness is manifest in the idiosyncratic nature of objectives and constraints. For example, when the US government tried to set up a mechanism to sell radio spectrum licenses, it identified among its objectives the promotion of rapid deployment of new technologies. Additionally, it imposed a number of ad hoc constraints, such as ensuring that some licenses go to minority-owned and women-owned companies [McMillan, 1994]. Thus, a prime motivation for Conitzer and Sandholm's automated mechanism design work [Conitzer and Sandholm, 2002, 2003a] was to produce a framework for solving mechanism design problems computationally given arbitrary objectives and constraints.

We likewise pursue this goal, and seek in addition to avoid reliance on direct truthful mechanisms. This reliance has at its core the Revelation Principle [Myerson, 1981], which states that the outcome of any given mechanism can still be achieved if we restrict the design space to mechanisms that induce truthful revelation of agent preferences. While theoretically sound, there have been criticisms of the principle on computational grounds, for example, those leveled by Conitzer and Sandholm [2003b]. It is also well recognized that if the design space is restricted in arbitrary ways, the revelation principle need not hold.
While the computational criticisms can often be addressed to some degree within the spirit of direct mechanisms (e.g., by multistage mechanisms, such as ascending auctions, which implement partial revelation of agent preferences in a series of steps), idiosyncratic constraints on the design problem generally present a more difficult hurdle to overcome. In this work, we introduce an approach to the design of general mechanisms (direct or indirect) given arbitrary designer objectives and arbitrary constraints on the design space, which we allow to be continuous. We assume that mechanisms induce games of incomplete information in which agents have infinite sets of strategies and types. As in most of the mechanism design literature, we assume that the designer knows the set of all possible agent types and their distribution, but not the actual type realizations.

Our main support for the usefulness of our framework comes from applying it to several problems in auction design which constrain the allocation and/or transfer functions to a particular functional form. In practice, of course, we cannot possibly tackle an arbitrarily complex design space. Our simplification comes from assuming that the designer seeks to find the best setting for particular design parameters. In other words, we allow the designer to search for a mechanism in some subset of an n-dimensional Euclidean space, rather than in an arbitrary function space, as would be required in a completely general setting. Furthermore, we believe that many practical design problems involve a search for the optimal or nearly optimal setting of parameters within an existing infrastructure. For example, it is much more likely that policy-makers will seek an appropriate tax rate to achieve their objective than overhaul the entire tax system.

In the following sections, we present our framework for automated mechanism design and test it out in several application domains. Our results suggest that our approach has much promise: most of the designs that we discover automatically are nearly as good as or better than the best known hand-built designs in the literature.

Notation

We restrict our attention to one-shot games of incomplete information, denoted by

Γ = [I, {A_i}, {T_i}, F(·), {u_i}],

where I refers to the set of players and m = |I| is the number of players. A_i is the set of actions available to player i ∈ I, with A = A_1 × ··· × A_m representing the set of joint actions of all players. T_i is the set of types (private information) of player i, with T = T_1 × ··· × T_m representing the joint type space, and F(·) is the joint type distribution. We define a strategy of a player i to be a function s_i : T_i → A_i, and use s(t) to denote the vector (s_1(t_1), ..., s_m(t_m)). It is often convenient to refer to a strategy of player i separately from that of the remaining players. To accommodate this, we use a_{−i} to denote the joint action of all players other than i. Similarly, t_{−i} designates the joint type of all players other than i. We define the payoff function of each player i by u_i : A × T → R, where u_i(a_i, a_{−i}, t_i, t_{−i}) is the payoff to player i with type t_i for playing action a_i when the remaining players with joint types t_{−i} play a_{−i}.

General Framework

We can model the strategic interactions between the designer of the mechanism and its participants as a two-stage game [Vorobeychik et al., 2006]. The designer moves first by selecting a value θ from a set of allowable mechanism settings, Θ. All the participant agents observe the mechanism parameter θ and move simultaneously thereafter.
Since the participants know the mechanism parameter, we define the game between them in the second stage as

Γ_θ = [I, {A_i}, {T_i}, F(·), {u_i(·, θ)}].

We refer to Γ_θ as the game induced by θ. As is common in the mechanism design literature, we evaluate mechanisms with respect to a sample Bayes-Nash equilibrium, s(t, θ). 1 We say that given an outcome of play r, the designer's goal is to maximize a welfare function W(r, t, θ) with respect to the distribution of types. Thus, given that a Bayes-Nash equilibrium, s(t, θ), is the relevant outcome of play, the designer's problem is to maximize

E_t[W(s(t, θ), t, θ)].

Observe that if we knew s(t, θ) as a function of θ, the designer would simply be faced with an optimization problem. This insight is actually a consequence of the application of backwards induction, which would have us find s(t, θ) first for every θ and then compute an optimal mechanism with respect to these equilibria. If the design space were small, backwards induction applied to our model would thus yield an algorithm for optimal mechanism design. Indeed, if additionally the games Γ_θ featured small sets of players, strategies, and types, we would say little more about the subject. Our goal, however, is to develop a mechanism design tool for settings in which it is infeasible to obtain a solution of Γ_θ for every θ ∈ Θ, either because the space of possible mechanisms is large, or because solving (or approximating solutions to) Γ_θ is computationally daunting. Additionally, we try to avoid making assumptions about the objective function, the constraints on the design problem, or the agent type distributions. We do restrict the games to two players with piecewise linear utility functions, but allow them to have infinite strategy and type sets.

In short, we propose the following high-level procedure for finding optimal mechanisms:

1. Select a candidate mechanism, θ.
2. Obtain a solution (in our case, a sample Bayes-Nash equilibrium) of the induced game Γ_θ.
3. Evaluate the objective and constraints given the solutions to Γ_θ.
4. Repeat this procedure for a specified number of steps.
5. Return an approximately optimal design based on the resulting optimization path.

We visually represent this procedure by the diagram in Figure 1 (an automated mechanism design procedure built around a black-box optimizer).

Designer's Optimization Problem

We begin by treating the designer's problem as black-box optimization, where the black box produces a noisy evaluation of an input design parameter, θ, with respect to the designer's objective, W(s(θ), θ), given the game-theoretic predictions of play. Once we frame the problem as a black-box optimization problem, we can draw on a wealth of literature devoted to methods for approximating optimal solutions [Spall, 2003]. While we could in principle select any number of these, we have chosen simulated annealing, as it has proved quite effective for a great variety of simulation optimization problems in noisy settings with many local optima. As an application of black-box optimization, the mechanism design problem in our formulation is just one of many problems that can be addressed with one of a selection of methods. What makes it special is the subproblem of evaluating the objective function for a given mechanism choice, and the particular nature of mechanism design constraints, which are evaluated based on Nash equilibrium outcomes and agent types.

Objective Evaluation

As implied by the backwards induction process, we must obtain the solutions (Bayes-Nash equilibria in our case) to the games induced by the design choice, θ, in order to evaluate the objective function.
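A minimal sketch of the outer black-box loop is given below. It assumes a caller-supplied objective function that solves (or approximately solves) the induced game and returns a noisy welfare estimate, and it clips parameters to [0, 1] as in the applications that follow; the function and variable names are illustrative, not part of our implementation.

```python
import math, random

def anneal(objective, theta0, step=0.1, T0=1.0, cooling=0.95, iters=200):
    """Simulated annealing over a mechanism parameter vector theta.
    `objective(theta)` returns a noisy estimate of the designer's welfare
    W(s(theta), theta) at an approximate Bayes-Nash equilibrium of Gamma_theta."""
    theta, best = list(theta0), list(theta0)
    f_theta = f_best = objective(theta)
    T = T0
    for _ in range(iters):
        # propose a neighbor, clipped to the allowable design space [0, 1]^n
        cand = [min(1.0, max(0.0, x + random.uniform(-step, step))) for x in theta]
        f_cand = objective(cand)
        # accept uphill moves always, downhill moves with Boltzmann probability
        if f_cand >= f_theta or random.random() < math.exp((f_cand - f_theta) / T):
            theta, f_theta = cand, f_cand
        if f_theta > f_best:
            best, f_best = list(theta), f_theta
        T *= cooling
    return best, f_best
```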
Obtaining such solutions is, in general, simply not possible, since Bayes-Nash equilibria may not even exist in an arbitrary game, nor is there a general-purpose tool to find them. To the best of our knowledge, the only solver for a broad class of infinite games of incomplete information was introduced by Reeves and Wellman [2004] (henceforth, RW). Indeed, RW is a best-response finder, which has successfully been used iteratively to obtain sample Bayes-Nash equilibria for a restricted class of infinite two-player games of incomplete information.

Since the goal of automated mechanism design is to approximate solutions to design problems with arbitrary objectives and constraints and to handle games with arbitrary type distributions, we treat the probability distribution over player types as a black box from which we can sample joint player types. Thus, we use numerical integration (the sample mean in our implementation) to evaluate the expectation of the objective with respect to player types, thereby introducing noise into objective evaluation.

Dealing with Constraints

Mechanism design can feature any of the following three classes of constraints: ex ante (constraints evaluated with respect to the joint distribution of types), ex interim (evaluated separately for each player and type, with respect to the joint type distribution of the other players), and ex post (evaluated for every joint type profile). When the type space is infinite, we of course cannot numerically evaluate an expression for every type. We therefore replace these constraints with probabilistic constraints that must hold for "most" types (i.e., a set of types with large probability measure). Intuitively, it is unlikely to matter if a constraint fails for types that occur with probability zero. We conjecture, further, that in most practical design problems, violation of a constraint on a "small" set of types will also be of little consequence, either because the resulting design is easy to fix, or because the other types will likely not have very beneficial deviations even if they account in their decisions for the effect of these unlikely types on the game dynamics. We support this conjecture via a series of applications of our framework: in none of these did our constraint relaxation lead the designer much astray.

Even when we weaken constraints based on agent type sets to their probabilistic equivalents, we still need a way to verify that such constraints hold by sampling from the type distribution. Since we can take only a finite number of samples, we will in fact verify a probabilistic constraint only at some level of confidence. The question we want to ask, then, is how many samples do we need in order to say with probability at least 1 − α that the probability of seeing a type profile for which the constraint is violated is no more than p? That is the subject of the following theorem.

Theorem 1. Let B denote a set on which a probabilistic constraint is violated, and suppose that we have a uniform prior over the interval [0, 1] on the probability measure of B. Then, we need at least log(α)/log(1 − p) − 1 samples to verify with probability at least 1 − α that the measure of B is at most p.

The proofs of this and other results can be found in the appendix of the full version of this paper.
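As a quick check of Theorem 1's bound, plugging in α = 0.05 and p = 0.06 reproduces the sample count used in the applications below (50 samples for 0.95 confidence that at most 6% of types violate a constraint):

```python
import math

alpha, p = 0.05, 0.06            # 95% confidence; violating set of measure at most 6%
n = math.log(alpha) / math.log(1 - p) - 1
print(math.ceil(n))              # 48, so the 50 samples used below suffice
```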
We next describe three specific constraints employed in our applications.

Equilibrium Convergence Constraint

Given that our game solutions are produced by a heuristic (iterative best-response) algorithm, they are not inherently guaranteed to represent equilibria of the candidate mechanism. We can instead enforce this property through an explicit constraint. The problem that we cannot in practice evaluate this constraint for every joint type profile is resolved by making it probabilistic, as described above. Thus, we define a (1 − p)-strong equilibrium convergence constraint:

Definition 1. Let s(t) be the last strategy profile produced in a sequence of solver iterations, and let s′(t) immediately precede s(t) in this sequence. Then the (1 − p)-strong equilibrium convergence constraint is satisfied if, for a set of type profiles t with probability measure no less than 1 − p, |s(t) − s′(t)| < δ for some a priori fixed tolerance level δ.

Ex Interim Individual Rationality

Ex-Interim-IR specifies that for every agent and type, that agent's expected utility conditional on its type is greater than its opportunity cost of participating in the mechanism. Again, in the automated mechanism design framework, we must change this to a probabilistic constraint as described above.

Definition 2. The (1 − p)-strong Ex-Interim-IR constraint is satisfied when, for every agent i ∈ I and for a set of types t_i ∈ T_i with probability measure no less than 1 − p, the expected utility of agent i conditional on t_i is at least C_i(t_i) − δ, where C_i(t_i) is the opportunity cost of agent i with type t_i of participating in the mechanism, and δ is some a priori fixed tolerance level.

Commonly in the mechanism design literature, the opportunity cost of participation, C_i(t_i), is assumed to be zero, but this assumption may not hold, for example, in an auction where not participating would be a give-away to competitors and entail negative utility.

Minimum Revenue Constraint

The final constraint that we consider ensures that the designer will obtain some minimal amount of revenue (or bounds its loss) in attaining a non-revenue-related objective.

Definition 3. The minimum revenue constraint is satisfied if E_t[k(s(t), t)] ≥ C, where k(s(t), t) is the total payment made to the designer by agents with joint strategy s(t) and joint type profile t, and C is the lower bound on revenue.

Setup

Consider the problem of two people trying to decide between two options. Unless both players prefer the same option, no standard voting mechanism (with either straight votes or a ranking of the alternatives) can help with this problem. Instead we propose a simple auction: each player submits a bid, and the player with the higher bid wins, paying some function of the bids to the loser in compensation. We define a space of mechanisms for this problem that are all budget balanced, individually rational, and (assuming monotone strategies) socially efficient. We then search the mechanism space for games that satisfy additional properties. The following payoff function defines a space of games parametrized by the function f:

u(t, a, t′, a′) = t − f(a, a′) if a > a′; f(a′, a) if a < a′; [t − f(a, a′) + f(a′, a)]/2 if a = a′,  (1)

where u() gives the utility for an agent who has a value t for winning and chooses to bid a against an agent who has value t′ and bids a′. The ts are the agents' types and the as their actions. Finally, f() is some function of the two bids. 3 In the tie-breaking case (which occurs with probability zero for many classes of strategies) the payoff is the average of the two other cases, i.e., the winner is chosen by a coin flip.

We now consider a restriction of the class of mechanisms defined above.

Definition 4. SGA(h, k) is the mechanism defined by Equation 1 with f(a, a′) = ha + ka′, h, k ∈ [0, 1].
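The following sketch implements the payoff rule of Equation (1), as reconstructed above, specialized to SGA(h, k); it is an illustration, not the code used in our experiments.

```python
def sga_payoff(t, a, a_other, h, k):
    """Payoff of an agent with value t bidding a in SGA(h, k) (Equation 1):
    the higher bidder wins and pays f = h*(own bid) + k*(other's bid) to the
    loser; a tie pays the average of the two cases (a coin flip)."""
    f = lambda winner_bid, loser_bid: h * winner_bid + k * loser_bid
    if a > a_other:
        return t - f(a, a_other)      # win: keep the good, compensate the loser
    if a < a_other:
        return f(a_other, a)          # lose: receive the winner's payment
    return 0.5 * ((t - f(a, a_other)) + f(a_other, a))

# SGA(1/2, 0): the winner pays half its own bid to the loser
print(sga_payoff(t=0.9, a=0.6, a_other=0.3, h=0.5, k=0.0))   # 0.9 - 0.3 = 0.6
```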
For example, in SGA(1/2, 0) the winner pays half its own bid to the loser. More generally, h and k are the relative proportions of the winner's and loser's bids that are transferred from the winner to the loser. We now give Bayes-Nash equilibria for such games when types are uniform.

Theorem 2. For h, k ≥ 0 and types U[A, B] with B ≥ A + 1, the following is a symmetric Bayes-Nash equilibrium of SGA(h, k):

s(t) = t/[3(h + k)] + (hA + kB)/[6(h + k)²].

For the following discussion, we need to define the notion of truthfulness, or Bayes-Nash incentive compatibility.

Definition 5 (BNIC). A mechanism is Bayes-Nash incentive compatible (truthful) if bidding s(t) = t constitutes a Bayes-Nash equilibrium of the game induced by the mechanism.

The Revelation Principle [Mas-Colell et al., 1995] guarantees that for any mechanism, there exists a BNIC mechanism that is equivalent in terms of how it maps preferences to outcomes. We can now characterize the truthful mechanisms in this space. According to Theorem 2, SGA(1/3, 0) is truthful for U[0, B] types. We now show that this is the only truthful design in this design space. From here on we restrict ourselves to the case of U[0, 1] types. Since SGA(1/3, 0) is the only truthful mechanism in our design space, we can directly compare the objective value obtained from this mechanism and the best indirect mechanism in the sections that follow.

Automated Design Problems

Minimize Difference in Expected Utility

First, we consider as our objective fairness, or the negative difference between the expected utilities of winner and loser. Alternatively, our goal is to minimize

|E[u(t, s(t), t′, s(t′) | a > a′)] − E[u(t, s(t), t′, s(t′) | a < a′)]|.

We first use the equilibrium bid derived above to analytically characterize optimal mechanisms; the minimum objective value attainable by an untruthful mechanism is 1/9. By comparison, the objective value for the truthful mechanism, SGA(1/3, 0), is 2/9, twice as high. Thus, the revelation principle does not hold for this objective function in our design space. We can use Theorem 4 to find that the objective value for SGA(1/2, 0), the mechanism described by Reeves [2005], is also 2/9.

Now, to test our framework, we imagine that we do not know about the above analytic derivations (including the derivation of the Bayes-Nash equilibrium) and run the automated mechanism design procedure in black-box mode. Table 1 presents results when we start the search at random values of h and k (taking the best outcome from five random restarts), and at the starting values of h = 0.5 and k = 0.

Table 1: Design that approximately maximizes fairness (minimizes the difference in expected utility between winner and loser), for a fixed starting point and for the best of five random restarts.

Parameters | Initial Design | Final Design
h          | 0.5            | 0
k          | 0              | 1
objective  | 2/9            | 1/9
h          | random         | 0
k          | random         | 1
objective  | N/A            | 1/9

Since the objective function turns out to be fairly simple, it is not surprising that we obtain the optimal mechanism from both the specific and the random starting points (indeed, the optimal design was produced from every random starting point we generated).

Minimize Expected (Ex Ante) Difference in Utility

Here we modify the objective function slightly as compared with the previous section, and instead aim to minimize the expected ex ante difference in utility:

E|u(t, s(t), t′, s(t′), k, h | a > a′) − u(t, s(t), t′, s(t′), k, h | a < a′)|.

While the only difference from the previous section is the placement of the absolute value sign inside the expectation, this difference complicates the analytic derivation of the optimal design considerably.
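Since the ex ante objective resists closed-form analysis, it can be estimated numerically. The sketch below does this by Monte Carlo under the equilibrium bid form given in Theorem 2 for U[0, 1] types; it is illustrative only, and for SGA(1/2, 0) it reproduces the value 2/9 ≈ 0.22 quoted below for the truthful design.

```python
import numpy as np

def eq_bid(t, h, k):
    """Symmetric equilibrium of SGA(h, k) for U[0, 1] types (Theorem 2)."""
    return t / (3 * (h + k)) + k / (6 * (h + k) ** 2)

def ex_ante_diff(h, k, n=200_000, seed=0):
    """Monte Carlo estimate of E|u_winner - u_loser| under equilibrium bids."""
    rng = np.random.default_rng(seed)
    t, t2 = rng.uniform(size=n), rng.uniform(size=n)
    a, a2 = eq_bid(t, h, k), eq_bid(t2, h, k)
    tw = np.where(a > a2, t, t2)          # winner's value (bids are increasing in t)
    aw, al = np.maximum(a, a2), np.minimum(a, a2)
    u_win = tw - (h * aw + k * al)        # winner keeps the good, pays f(bids)
    u_lose = h * aw + k * al              # loser receives the payment
    return np.abs(u_win - u_lose).mean()

print(round(ex_ante_diff(0.5, 0.0), 3))   # ~0.222 (= 2/9)
```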
For this reason, we do not present the actual optimum design values. The results of applying our AMD framework are presented in Table 2. While the objective function in this example appears somewhat complex, it turns out (as we discovered through additional exploration) that there are many mechanisms that yield nearly optimal objective values. 4 Thus, both the random restarts and the fixed starting point produced essentially the same near-optima. By comparison, the truthful design yields an objective value of about 0.22, which is considerably worse.

Maximize Expected Utility of the Winner

Yet another objective in the shared-good-auction domain is to maximize the expected utility of the winner. Formally, the designer is maximizing

E[u(t, s(t), t′, s(t′) | a > a′)].

We first analytically derive the characterization of optimal mechanisms.

Theorem 5. The problem is equivalent to finding (h, k) that maximize 4/9 − k/[18(h + k)]. Thus, k = 0 and h > 0 maximize the objective, and the optimum is 4/9.

Here again our results in Table 3 are optimal or very nearly optimal, unsurprisingly for this relatively simple application. Of the examples we considered so far, most turned out to be analytic, and only one could be approached solely numerically. Nevertheless, even in the analytic cases, the objective function forms were not trivial, particularly from a blind optimization perspective. Furthermore, we must take into account that even the simple cases are somewhat complicated by the presence of noise, and thus we need not arrive at global optima even in the simplest of settings so long as the number of samples is not very large.

Having found success in the simple shared-good-auction setting, we now turn our attention to a series of considerably more difficult problems. We present results from several applications of our automated mechanism design framework to specific two-player problems. One of these problems, finding auctions that yield maximum revenue to the designer, has been studied in a seminal paper by Myerson [1981] in a much more general setting than ours. Another, which seeks to find auctions that maximize social welfare, has also been studied more generally. Additionally, in several instances we were able to derive optima analytically. For all of these we have a known benchmark to strive for; others have no known optimal design. In all of our applications, player types are independently distributed with a uniform distribution on the unit interval. Finally, we used 50 samples from the type distribution to verify Ex-Interim-IR. This gives us 0.95 probability that 94% of types lose no more than the opportunity cost plus our specified tolerance, which we add to ensure that the presence of noise does not overconstrain the problem. It turns out that every application we consider produces a mechanism that is individually rational for all types with respect to the tolerance level that was set.

Myerson Auctions

The seminal paper by Myerson [1981] presented a theoretical derivation of revenue-maximizing auctions in a relatively general setting. Here, our aim is to find a mechanism with a nearly optimal value of some given objective function, of which revenue is an example. 5 However, we restrict ourselves to a considerably less general setting than did Myerson, constraining our design space to that described by the parameters q, k_1, k_2, K_1, k_3, k_4, K_2 in (4), where we further constrain all the design parameters to lie in the interval [0, 1].
In standard terminology, our design space allows the designer to choose an allocation parameter, q, which determines the probability that the winner (i.e., the agent with the winning bid) gets the good, and transfers, which we constrain to be linear in the agents' bids. While our automated mechanism design framework assures us that (1 − p)-strong individual rationality will hold with the desired confidence, we can actually verify it by hand in this application. Furthermore, we can adjust the mechanism to account for lapses in individual rationality guarantees for subsets of agent types by giving each agent the amount of the expected loss of the least fortunate type. 6 Similarly, if we do find a mechanism that is Ex-Interim-IR, we may still have an opportunity to increase expected revenue as long as the minimum expected gain of any type is strictly greater than zero.

Maximize Revenue

Here we are interested in finding approximately revenue-maximizing designs in our constrained design space. First, we derive the following based on Myerson's feasibility constraints:

Theorem 6. The optimal incentive compatible mechanism in our setting yields revenue of 1/3, which can be achieved by selecting q = 1, k_1 ∈ [0, 0.5], and k_2 ∈ [0, 1], respecting the constraint that k_1 + 0.5k_2 = 0.5.

In addition to performing five restarts from random starting points, we repeated the simulated annealing procedure starting from the best design produced via the random restarts. This procedure yielded an Ex-Interim-IR design with expected revenue of approximately 0.3. We used the RW solver to find a symmetric equilibrium of this design, under which the bids are s(t) = 0.72t − 0.73. 7 We have already shown that the best known design, which is also the optimal incentive compatible mechanism in this setting, yields a revenue of 1/3 to the designer. Thus, our AMD framework produced a design close to the best known. It is an open question what the actual global optimum is.

Maximize Welfare

It is well known that the Vickrey auction is welfare-optimal. Thus, we know that the welfare optimum is attainable in our design space. Before proceeding with the search, however, we must make one observation. While we are interested in welfare, it would be inadvisable in general to completely ignore the designer's revenue, since the designer is unlikely to be persuaded to run a mechanism at a disproportionate loss. To illustrate, take the same Vickrey auction, but afford each agent one billion dollars for participating. This mechanism is still welfare-optimal, but seems a senseless waste if optimality could be achieved without such spending (and, indeed, at some profit to the auctioneer). To remedy this problem, we use a minimum revenue constraint, ensuring that no mechanism that is too costly will be selected as optimal.

First, we present a general result that characterizes welfare-optimal mechanisms in our setting.

Theorem 7. Welfare is maximized if either the equilibrium bid function is strictly increasing and q = 1, or the equilibrium bid function is strictly decreasing and q = 0. Furthermore, the maximum expected welfare in our design space is 2/3.

Thus, for example, both first- and second-price sealed-bid auctions are welfare-optimizing (as is well known). The result of our search for an optimal design is an Ex-Interim-IR mechanism which allocates the object to the highest bidder. This mechanism yields expected revenue of approximately 0.2 to the designer.
We verified using the RW solver that the bid function s(t) = 0.645t − 0.44 is an equilibrium given this design. Since it is strictly increasing in t, we can conclude based on Theorem 7 that this design is welfare-optimal.

Vicious Auctions

In this section we study a design problem motivated by the Vicious Vickrey auction [Brandt and Weiß, 2001]. The essence of this auction is that, while it is designed exactly like a regular Vickrey auction, the players get disutility from the utility of the other player; this disutility is a function of a parameter l, with the regular Vickrey auction being the special case l = 0. We generalize the Vicious Vickrey auction design using the same parameters as in the previous section, such that the Vicious Vickrey auction is the special case with q = k_2 = 1 and k_1 = k_3 = k_4 = K_1 = K_2 = 0, and the utility function of agents presented in the previous section is recovered when l = 0. We assume in this construction that payments, which are the same (as functions of players' bids and design parameters) as in the Myerson auction setting, have a particular effect on players' utility parametrized by l; hence the utility function in (5).

In all the results below, we fix l = 2/7. Reeves [2005] reports an equilibrium for Vicious Vickrey with this value of l to be s(t) = (7/9)t + 2/9. Thus, we can see that we are no longer assured incentive compatibility even in the second-price auction case. In general, it is unclear whether there exist incentive compatible mechanisms in this design space, particularly because we constrain all our parameters to lie in the interval [0, 1]. In the applications below, we redefined the individual rationality constraint in terms of an agent's opportunity cost of participation in the auction.

Maximize Revenue

Our first objective is to (nearly) maximize revenue in this domain. Our AMD framework achieves an Ex-Interim-IR mechanism with expected revenue of approximately 0.44. By comparison, a Vicious Vickrey auction achieves revenue of 0.48, but is not Ex-Interim-IR.

Maximize Welfare

We now tackle the objective of maximizing welfare using our AMD framework. The result is a mechanism with expected welfare of approximately 0.54, which is not Ex-Interim-IR. However, the designer can pay each agent 0.065, thereby making the design individually rational, while still maintaining positive revenue.

Maximize Weighted Sum of Revenue and Welfare

In this section, we present the results of AMD with the goal of maximizing the weighted sum of revenue and welfare. For simplicity (and having no reason to do otherwise), we set the weights to be equal. Our framework found a mechanism which is welfare-optimal and yields revenue of 0.52 after an adjustment which makes it Ex-Interim-IR. Interestingly, we were much more successful on both the revenue and welfare objectives after eliminating the hard minimum revenue constraint and instead making it part of the objective. Indeed, we found here the best mechanism so far for both objectives we considered, suggesting that there is also synergy between the two objectives.

Conclusion

We presented a framework for automated design of general mechanisms (direct or indirect) using the Bayes-Nash equilibrium solver for infinite games developed by Reeves and Wellman [2004]. Results from applying this framework to several design domains demonstrate the value of our approach for practical mechanism design. The mechanisms that we found were typically either close to the best known mechanisms, or better.
While in principle it is not at all surprising that we can find mechanisms by searching the design space (as long as we have an equilibrium-finding tool), it was not at all clear that any such system would have practical merit. We presented evidence that indirect mechanism design in a constrained space can indeed be effectively automated on somewhat realistic design problems that yield very large games of incomplete information. Undoubtedly, real design problems are vastly more complicated than any that we considered (or any that can be considered theoretically). In such cases, we believe that our approach could offer considerable benefit if used in conjunction with other techniques, either to provide a starting point for design, or to tune a mechanism produced via theoretical analysis and computational experiments.
Experimental Research on Mechanical Properties of Frozen Sands under Isotropic Compression Conditions

The K-G model has better practicability and superiority than the E-μ model. Conventional triaxial tests on frozen sand samples were carried out at −10 °C under confining pressures of 0.5 MPa, 1.0 MPa, 2.0 MPa, 4.0 MPa, and 6.0 MPa. During the consolidation process, the samples are in an isobaric state; during the shearing process, the confining pressure σ3 is held constant while the axial pressure σ1 is gradually increased. The test results show that, in the isobaric state, the volumetric strain increases nonlinearly with increasing volumetric stress, following a power-function trend. An improved K-G model is proposed to describe the strain softening of frozen sand during shear, and the verification results show that the model has good applicability.

Introduction

Frozen soils, a special kind of composite material, are composed of soil skeleton, unfrozen water, gas, and ice. The mechanical properties of frozen soils are not only affected by the physical properties of each component but are also significantly changed by temperature and pressure. In recent years, with the continuous construction of industry and agriculture in cold regions, as well as the sustained development of underground mining projects, the importance of frozen-soil research has become more and more obvious. The suitability of frozen soil as a building foundation has been reasonably verified; however, as building heights increase, excavation depths also become deeper, and new challenges are encountered in geotechnical engineering. Therefore, theoretical study of the constitutive relationship is a breakthrough point for solving these problems.

At present, there are few studies on the constitutive relationship of frozen soil; they are mainly based on the assumption of a continuous medium, and the constitutive model is established or modified by analyzing stress-strain curves from test data [1]. Nonlinear soil models can be roughly divided into the E-μ model and the K-G model [2]. The results predicted by the E-μ model are large [3], while the K-G model not only approximately reflects stress-path effects [4] but also has parameters that can be determined directly from tests [5]. Therefore, based on conventional triaxial tests, this paper studies the volumetric stress-strain relationship of frozen sand and proposes a K-G model that can reflect its strain softening.

Test conditions

The test sample is sand taken along the Qinghai-Tibet Railway, with a dry density of 1.75 g/cm³. The particle-size gradation curve is shown in Figure 1. A series of triaxial tests on frozen sand were carried out with the MTS-Landmark 370.10 frozen-soil dynamic and static triaxial test system. The test temperature was −10 °C, and the confining pressures were 0.5 MPa, 1.0 MPa, 2.0 MPa, 4.0 MPa, and 6.0 MPa. During the consolidation process, the sample is in an isobaric state; during shearing, the confining pressure σ3 is held constant and the axial pressure σ1 is gradually increased until failure.

Test result

The volumetric stress-strain curve for the consolidation stage is shown in Figure 2. From Figure 2, it can be seen that the volume strain of the specimen increases nonlinearly with increasing volume stress and finally tends to a stable value. The relationship is a power-function one of the form

ε_v = a + b·p^c,

where ε_v is the volume strain in the consolidation stage, p is the volume stress, and a, b, and c are test parameters.
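The fitted relationship can be reproduced with a short script. The sketch below assumes the power-function form ε_v = a + b·p^c suggested by the abstract (the exact published expression may differ), and the consolidation readings are hypothetical placeholders for the data in Figure 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def vol_strain(p, a, b, c):
    """Assumed power-function fit for volumetric strain vs. volumetric stress."""
    return a + b * p**c

# hypothetical consolidation-stage readings (MPa, %): for illustration only
p_obs = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
ev_obs = np.array([0.21, 0.35, 0.55, 0.82, 1.00])
(a, b, c), _ = curve_fit(vol_strain, p_obs, ev_obs, p0=[0.0, 0.3, 0.7])
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}")
```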
K-G model

In the three-dimensional stress state, the stress is often decomposed into spherical (volumetric) stress and deviatoric stress, and the volume modulus K_t and the shear modulus G_t reflect, respectively, the elastic response to these two stress components.

Volume modulus K_t

The consolidation process of the conventional triaxial test is equivalent to an isotropic compression test. The test results of the consolidation phase are plotted in the p/p_c − ε_v/ε_vc coordinate system, and the resulting stress-strain relationship is shown in Figure 3 (the p/p_c − ε_v/ε_vc relationship curve). A curve is fitted to these data, with m and two further coefficients as test parameters. The test data are divided into two parts: the consolidation data and the shear data. By fitting the consolidation data, the test parameters, including m, can be obtained; details are shown in Table 1. The tangent volume modulus then follows from its definition, K_t = dp/dε_v.

Shear modulus G_t

Generally, to obtain the shear modulus G_t, a triaxial shear test is needed under the condition that the volume stress p is held constant, i.e., dp = 0 during the shearing process. However, the equipment and testing-technique requirements for such tests are high. Liu Zudian elaborated a method for calculating the shear modulus G_t from conventional triaxial tests [6]. To describe the strain softening of frozen sand, it is suggested that the stress-strain relationship take the form of Formula (6), in which the maximum deviatoric stress appears together with the corresponding shear strain ε_sm and the initial shear modulus G_i. Combined with Formula (6), the tangent shear modulus can be expressed as Formula (7); when ε_s = 0, it reduces to the initial shear modulus G_i.

According to the above solution method, the value of c obtained from the test data is small. Therefore, in order to verify the applicability of the model, c = 0.0035 can be assigned, and the relevant parameters of the shear modulus can be obtained by substitution into Formula (7). Details are shown in Table 2. Figure 4 compares the calculated values with the test results.

Conclusions

(1) In the isotropic compression test, the volume strain of frozen sand increases nonlinearly with the increase of volume stress, roughly following a power-function law.

(2) In conventional triaxial shear tests, the q-ε_s curve of frozen sand shows obvious strain-softening characteristics, and the improved K-G model can better describe the stress-strain relationship.
Optimization Strategy of IoT Sensor Configuration Based on Genetic Algorithm-Neural Network

This article sets out the overall design framework of an IoT sensor data processing platform and analyzes the advantages of building it as an integrated platform. The platform is divided into two parts, a web management platform and a data communication system, which interact with the database through a single shared business layer. The web management platform provides configurable communication-protocol customization services; management services for equipment information, personal information, and announcement information; and monitoring and analysis services for the collected data. The collected data are analyzed by the sensor data communication service system and then made available to the web management platform for query and retrieval. This paper discusses the theoretical basis of combining genetic algorithms with neural networks and argues for the necessity of improving the genetic algorithm. The improvements involve the chromosome coding method, the choice of fitness function, and the genetic operations. We propose an improved genetic algorithm (IGA) and use it to optimize the neural network structure. The finite element method is adopted, a finite element model is established, and the impact piezoelectric response is numerically simulated. The genetic neural network method is used to simulate the impact damage location detection problem. The piezoelectric sensor layout is optimized, and the optimal sensor configuration corresponding to the initial layout is obtained, which provides guidance for the optimal configuration of actual piezoelectric sensors.

Introduction

The Internet of Things is an emerging field. By combining various wired and wireless networks with the Internet, information about objects can be transmitted accurately and in real time [1]. The information collected by sensors at the perception layer of the Internet of Things must be transmitted over the network. Because the amount of data is huge, an enormous volume of information is formed, and to ensure the accuracy and timeliness of the data during transmission, the transmission must accommodate the differences among the various underlying networks [2]. The Internet of Things is the integration of all things in the world and, of course, also the integration of all networks. At the underlying perception layer of the Internet of Things, the most important and most widely used technology is the wireless sensor network. Different wireless sensor networks have different requirements for sensor nodes. The development of wireless sensors provides a material foundation for the progress of the Internet of Things. Wireless sensor networks are now widely used in smart industry, smart communities, smart homes, smart transportation, and smart medical care. Wireless sensor networks are strongly application-specific: different applications use different sensor nodes, network protocols, and security architectures, each with strong particularities [3]. The Internet of Things is a ubiquitous network built on the Internet. Through the organic integration of existing networks, a unified whole is formed. The Internet of Things has developed gradually with the progress of various networks, especially improvements in wireless sensor technology. The wireless sensor network is the most important part of the Internet of Things [4].
The wireless sensor network is an important part of the perception layer of the Internet of Things. With the development of science and technology, the functions of wireless sensor nodes have become more diverse. Different sensing nodes can perceive all kinds of information that people need and send it to the final application in a timely and stable manner. This is a deep extension of the original network and an important material basis for the development of the Internet of Things. The Internet of Things is an extension of the existing network, and in the process of merging existing networks, a large number of new problems arise, such as the four points mentioned above. These problems directly affect the security and stability of the network [5]. Only protocols that meet the requirements of the Internet of Things and ensure the safety and reliability of wireless sensor networks can solve these problems [6].

This article analyzes the requirements of the IoT sensor data processing platform and, on this basis, puts forward the overall design ideas and framework of the platform, dividing it into a web management platform that provides web services and a sensor data communication system that provides data and communication services. This article also establishes the necessity and feasibility of combining neural networks with genetic algorithms. We use an improved genetic algorithm (IGA) to optimize the neural network, so that the IGA optimizes both the structure of the network and its weights. Specifically, the technical contributions of this article can be summarized as follows.

First, the approximation of a nonlinear function is realized on the MATLAB platform, which demonstrates that the method enhances the performance of the network and improves its generalization ability. In this paper, the structure of the wing box specimen is analyzed: using the finite element method, a finite element model of the wing box specimen is established, and the excitation voltage response is numerically simulated.

Second, for the problem of detecting the impact damage position, we optimize the piezoelectric sensor layout of the wing box section specimen and obtain the optimal configuration of the sensors under the initial layout mode. Compared with the 42 sensors selected in the initial layout, only 24 sensors are now needed, which reduces the cost. Although the damage detection error increases somewhat, the overall balance of cost and benefit is favorable.

Third, the simulation results show that, for initial deployment modes with many sensors, the genetic neural network method can effectively reduce the number of sensors and thereby reduce costs. The simulation results can provide guidance for the optimal configuration of the actual piezoelectric sensors of structural specimens.

Related Work

Related scholars developed the schema theorem for genetic algorithms [7]. A prominent problem with the schema theorem is that schema fitness is difficult to calculate and analyze. Zhou et al. [8] developed an analysis tool using Walsh functions and schema transforms. In this way, the calculation of schema fitness can be addressed, but there are also shortcomings: for example, the genetic process of the genetic algorithm cannot be directly explained, and the accuracy of the schema fitness is not high. Regarding the code length, experts have done some research on this [9].
Akbas et al. [10] proved that if binary coding is used, the code length is related to the optimal number of individuals. Regarding the population size, a suitable size is difficult to calculate; finding one must start from the actual problem, and different problem-solving processes require different population sizes.

There are two particularly important parameters among the genetic operators: the crossover probability P_c and the mutation probability P_m. So far, the selection and setting of P_c and P_m have not been guided by a complete theory. The crossover probability is related to the convergence behavior of the genetic algorithm, while the mutation probability is related to the diversity of individuals in the population. In many cases, P_c and P_m are set statically according to the problem to be solved, but static settings introduce blindness into the search. If they can instead be set dynamically during the evolution process, this problem is avoided, and convergence toward the optimum is also accelerated [11]. In this article, the crossover probability and mutation probability are adjusted dynamically with the evolution process: when an individual's fitness is high, the crossover probability becomes smaller; when an individual's fitness is low, the mutation probability is increased. Regarding the number of generations, this is still an open research question [12]; in practice it is determined from personal experience and repeated experiments.

Neural networks are based on the distributed storage, parallel processing, and adaptive learning of the biological nervous system, which give the neural network a preliminary form of intelligence. Expert systems approach intelligence differently: an expert system stores relevant professional knowledge and, when a problem is encountered, applies the corresponding logical reasoning to solve it [13]. However, when the situation encountered is not stored in the expert system, the entire system is paralyzed. A neural network, by contrast, has fault tolerance and self-learning ability: when encountering incomplete information or abnormal situations, the system gives a reasonable judgment and decision on complex problems based on the knowledge and experience learned in the past. Artificial intelligence is the organic combination of expert systems and neural networks, drawing on the advantages of both [14]. The applications of artificial intelligence are extensive, for example, password deciphering, predictive valuation, market analysis, system diagnosis, logical reasoning, and fuzzy judgment.

The HEED protocol, proposed in response to the deficiencies of the LEACH protocol, is an improved clustering algorithm [15][16][17]. The algorithm selects cluster heads through a primary and a secondary parameter: the primary parameter is the remaining energy of the node, and the secondary parameter is the proximity or density of the node. Cluster head selection in this algorithm is an iterative process [8,18,19]: when a node finds a candidate cluster head better than itself, it joins that cluster. The HEED algorithm converges quickly and fully considers the energy and location of the nodes, which balances the energy consumption of the nodes and reduces the occurrence of conflicts. Other scholars have put forward the concept of virtual clusters [20][21][22].
Time synchronization between virtual clusters must be ensured, and each node sleeps and works periodically according to the schedule. This method solves the problem of channel conflicts, but because of the communication problems between the network and the nodes, achieving time synchronization is genuinely difficult [23,24]. The algorithm is therefore hard to implement: time synchronization requires the cluster head node to broadcast continuously, which occupies a large amount of channel capacity [25].

Extensive experience shows that the selection process is very important: improper selection loses the genes of excellent individuals, which degrades the convergence of the genetic algorithm [26,27]. There are many selection methods, roughly 20 kinds, for example, proportional selection, rank-based selection, and elitist selection. The key role in the genetic algorithm is played by the crossover operator. The selection and crossover operators are the two operations that best embody genetic algorithms; at the same time, nature's principle of natural selection and survival of the fittest is realized through these two operations. The theory of crossover is limited and immature, but there are about 20 kinds of crossover techniques. Selection and crossover cannot solve all the problems in genetic algorithms: the mutation operator changes an individual's genetic information in order to avoid "premature" convergence. Different crossover operators are used to improve the convergence efficiency and global search ability of the genetic algorithm, so as to obtain better genetic effects, while different mutation operators are applied to avoid the "premature" phenomenon. In the process of genetic algorithm optimization, it is difficult to achieve both convergence efficiency and global search capability [28,29].

IoT Sensor Data Processing Platform Architecture

3.1. Overall Analysis of the Platform

The IoT sensor data processing platform designed in this paper needs to build a website as a client terminal management platform and a data communication system to complete the various data communication services. An internal message loop is required between the two systems so that the status of the front-end equipment of the Internet of Things can be quickly reflected on the page through the platform, and, at the same time, the client can send page requests to the front-end equipment through the platform. In terms of specific function realization, the basic function of the Internet of Things is to provide ubiquitous connections and services. Therefore, the basic functions provided by the platform should include support for the connection of users and devices, support for the online setting and management of device information, support for processing the sensor data collected by the devices, and support for user-visualized operation of communication data.

The network transmission format of the sensor data collected by a device is generally one of two types: a text transmission format or a binary transmission format. The text transmission format is more readable, but because it contains redundant items such as tags, its transmission efficiency is relatively low compared with the binary format, and it also takes up a relatively large volume in storage. The binary format is composed of simple bytes: the information volume is small and the transmission efficiency is high, but the readability and the convenience of encoding and decoding are relatively poor.
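The size difference between the two formats can be seen in a small sketch (the field names and layout are hypothetical, not the platform's actual protocol): the same reading costs tens of bytes as self-describing JSON text, but only 8 bytes as a fixed binary layout, at the price of needing the agreed layout to decode.

```python
import json, struct

reading = {"device_id": 42, "sensor": 7, "value": 23.125}   # hypothetical fields

text = json.dumps(reading).encode("utf-8")            # self-describing, but bulky
binary = struct.pack("!HHf", reading["device_id"],    # fixed layout: 2 + 2 + 4 bytes
                     reading["sensor"], reading["value"])
print(len(text), len(binary))                         # ~47 bytes vs. 8 bytes

dev, sen, val = struct.unpack("!HHf", binary)         # decoding needs the agreed layout
```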
3. IoT Sensor Data Processing Platform Architecture

3.1. Overall Analysis of the Platform. The IoT sensor data processing platform designed in this paper needs a website as the client-side management platform and a data communication system that carries out the various data communication services. An internal message loop is required between the two systems so that the status of the IoT front-end equipment is quickly reflected on the page through the platform, while the client can send page requests to the front-end equipment through the platform. In terms of concrete functions, the basic role of the Internet of Things is to provide ubiquitous connections and services. The basic functions of the platform should therefore include support for connecting users and devices, for the online setting and management of device information, for processing the sensor data collected by the devices, and for user-visualized operation on communication data.

The network transmission format of the sensor data collected by a device generally falls into two types: a text format and a binary format. The text format is more readable, but because it contains redundant items such as tags, its transmission efficiency is lower than that of the binary format, and it also occupies more space when stored. The binary format is composed of simple bytes: the information volume is small and the transmission efficiency is high, but the readability and the convenience of encoding and decoding are relatively poor. Each format therefore has its own advantages and disadvantages in the transmission process, but existing IoT platforms generally choose the text format and a single transmission protocol for communication, which limits the extensibility of the platform. To meet diversified transmission requirements, the IoT sensor data processing platform designed in this article supports both the binary and the text transmission format, so that users can choose the most suitable transmission according to the requirements of the data transmission and the conditions of the hardware equipment.
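The trade-off can be made concrete with a hypothetical reading encoded both ways; the field layout (device id, timestamp, temperature) and the '!IIf' packing are illustrative assumptions, not the platform's actual protocol.

    import json
    import struct

    reading = {'device': 1017, 'ts': 1625717000, 'temp': 23.5}

    # Text format: self-describing, but the tags are redundant bytes.
    text_payload = json.dumps(reading).encode('utf-8')

    # Binary format: a fixed layout both ends must agree on in advance
    # (network byte order: unsigned int id, unsigned int timestamp, float).
    binary_payload = struct.pack('!IIf', reading['device'],
                                 reading['ts'], reading['temp'])

    print(len(text_payload), len(binary_payload))    # roughly 49 vs 12 bytes
    print(struct.unpack('!IIf', binary_payload))     # decoding needs the layout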
3.2. The Overall Function and Framework Design of the Platform. We design the functions and framework of the IoT sensor data processing platform as shown in Figure 1. The overall structure of an IoT application includes three parts: IoT device terminals, servers, and client terminals. The platform designed in this article corresponds to the server part and supports the communication between the IoT device terminals and the client terminals. The platform is therefore divided into three parts: web services, communication services, and data services, providing analysis, processing, and monitoring functions for the collected sensor data. Existing IoT application platforms on the market generally provide user information management services, including user registration, login, and information modification; this article designs the platform functions on that basis. The platform's web service gives users a visual and friendly operation interface: after logging in to the system, they can classify and manage personal information, device information, and sensor data information, and can also publish simple messages to provide communication services within the platform. On this basis, the platform provides users with customizable communication-protocol pages. Users can freely customize within the framework of the communication protocol according to the needs of devices or applications, and the customized communication protocols are output in the form of XML documents.

3.3. The Overall Technical Architecture of the Platform. The IoT sensor data platform designed in this paper uses the simple and efficient Maven for project management and development. SSH2 together with the EasyUI plug-in is responsible for building the web management platform, and the MINA framework is responsible for the data communication service modules. The article uses the highly extensible XML language to construct the communication protocol for data transmission and uses XStream to parse and operate on data according to that protocol. As the core frameworks, SSH and MINA underpin the construction of the entire platform. The SSH framework is composed of the Spring, Struts, and Hibernate frameworks and is a classic framework model for building web applications. Struts2 is the concrete implementation framework of the MVC model, and its layered structure means that only the specific business-logic layer needs attention when building the platform; at the same time, a simple entry in the Struts configuration file is enough to handle the platform's various abnormal conditions. The Hibernate framework is non-intrusive when applied and provides an object-oriented HQL language, which reduces the work of connecting to the database. The Spring framework provides management services for the entire web platform and performs persistence operations through Spring DAO to complete create, read, update, and delete operations on the data.

The MINA server framework is a socket communication framework for building data communication services and has good scalability. MINA introduces an asynchronous non-blocking mechanism and a multithreaded mechanism in its I/O operations, so the system can communicate with a large number of clients at the same time, which improves the performance of the system. The filter layer in the MINA framework separates the underlying communication from the business logic, so a data communication service built with MINA does not have to deal with the complexity of the network layer, which makes the development process simple and direct. By adding Spring support to MINA, the system injects the relationships between components at runtime according to the configuration file, achieving loose coupling between classes. At the technical level, this article therefore combines MINA and SSH to build the IoT sensor data processing platform, integrating the business-logic layers of the web management platform and the data communication service system through Spring to realize the efficient operation of the entire platform. Figure 2 shows the technical architecture diagram of the Internet of Things service platform: when the platform is started through the Tomcat server, the MINA-based data communication service and the SSH2-based web service run at the same time and together realize the communication service between the device terminal and the client terminal.
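MINA itself is a Java framework, so as a language-neutral sketch of the same idea (an asynchronous, non-blocking service that handles many device connections concurrently), here is a minimal Python asyncio server. The 12-byte packet reuses the hypothetical '!IIf' layout above, and the port and acknowledgement are likewise assumptions.

    import asyncio
    import struct

    async def handle_device(reader, writer):
        """One lightweight coroutine per connected device; no handler blocks
        the others, which mirrors the non-blocking I/O described above."""
        try:
            while True:
                packet = await reader.readexactly(12)    # assumed fixed layout
                device, ts, temp = struct.unpack('!IIf', packet)
                print(f'device {device} @ {ts}: {temp:.1f}')
                writer.write(b'ACK')                     # toy acknowledgement
                await writer.drain()
        except asyncio.IncompleteReadError:
            writer.close()                               # device disconnected

    async def main():
        server = await asyncio.start_server(handle_device, '0.0.0.0', 9000)
        async with server:
            await server.serve_forever()

    if __name__ == '__main__':
        asyncio.run(main())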
Designing the structure of an ANN means determining, according to some performance evaluation criterion, a combination of parameters suited to solving a certain problem or class of problems. When the problem to be solved is complex, designing an ANN by hand becomes difficult: the behavior of even small networks is hard to understand, large multilayer nonlinear networks are more opaque still, and there are almost no strict design rules. Under the conditions of a reasonable structure and appropriate weights, a three-layer feedforward network can approximate any continuous function, but the theorem gives no method for determining that reasonable structure. Standard engineering design methods are also powerless for neural networks: the complex distributed interaction between network processing units makes the decomposition techniques of modular design infeasible, and there is no direct analysis-and-design technique for this complexity. What is more, even if we find a network sufficient for a specific task, we cannot be sure that we have not missed a better-performing one. People have spent a lot of time and energy on this problem, while neural network applications keep developing toward larger and more complex forms. Manual network design should therefore be discarded: ANN needs an efficient and automatic design method, and GA provides a good way to obtain one.

Many systems exist in the form of a network, yet the description and modeling of many problems still rely on analytical methods, and network analysis in operations research only addresses engineering problems with an obvious network form. Treating the network as a general description of a system, and establishing the corresponding model and solution strategy, is a subject that people are working hard to study. Seeking a universal network representation of systems would be unwise, but it is feasible to use the mature multilayer feedforward (BP) neural network as a problem representation for genetic search. Figure 3 shows the combination of genetic algorithm and neural network.

The Optimization Method of Genetic Algorithm to Neural Network. We use the genetic algorithm to optimize the neural network connection weights. In essence this is a complicated continuous parameter optimization problem whose result is the optimal set of connection weights. Traditional neural network weight-training algorithms adopt fixed weight-update rules and reach a good weight distribution only after continuous learning and training; this takes a long time and may even fall into a local minimum, failing to obtain a proper weight distribution. Using the genetic algorithm to optimize the weights of the neural network avoids this problem.

The determination of the fitness function is a key factor in the genetic algorithm: during its operation, the choice of fitness function directly determines the evolution of the population and whether the algorithm can find the optimal solution or a suboptimal solution. Throughout the evolution, it must be ensured that the population evolves in the direction of increasing fitness. For maximizing an objective function f(x), one can take the fitness function F(f(x)) = f(x); for minimizing f(x), one can take F(f(x)) = -f(x). This direct transformation is simple and easy to implement, but it has two problems: first, the fitness obtained in this way may be negative in some cases, which does not meet the requirements of proportional selection; second, the distribution of the resulting values can be wide, so the average fitness is not easy to express. To avoid negative values, one can instead shift by an estimated bound. If the objective function is a minimization problem, then F(x) = C_max - f(x) when f(x) < C_max, and F(x) = 0 otherwise, where C_max is the maximum estimated value of f(x). If the objective function is a maximization problem, then F(x) = f(x) - C_min when f(x) > C_min, and F(x) = 0 otherwise, where C_min is the minimum estimated value of f(x).
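A minimal sketch of the two transformations just described; the bounds are supplied by the caller, since the text leaves their estimation to the modeller, and the zero clamp reconstructs the lost display equations under that assumption.

    def fitness_direct(f_x, minimize=False):
        """Direct transformation: F = f(x) for maximization, F = -f(x) for
        minimization. Simple, but F may be negative, which is incompatible
        with selection probabilities."""
        return -f_x if minimize else f_x

    def fitness_bounded(f_x, minimize=False, c_max=None, c_min=None):
        """Shifted transformation: offset by an estimated bound so that the
        fitness is always non-negative."""
        if minimize:
            return c_max - f_x if f_x < c_max else 0.0
        return f_x - c_min if f_x > c_min else 0.0

    # Minimizing f(x) = x**2 with an assumed bound c_max = 100.
    print(fitness_direct(9.0, minimize=True))                  # -9.0
    print(fitness_bounded(9.0, minimize=True, c_max=100.0))    # 91.0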
This bounded transformation, known as the "boundary construction method," improves on the direct form, but it is sometimes difficult to estimate the threshold value in advance, or the estimate is inaccurate. A third option divides through a conservatively bounded value, for example F(x) = 1/(1 + C + f(x)) with C + f(x) >= 0 for minimization, and F(x) = 1/(1 + C - f(x)) with C - f(x) >= 0 for maximization, where C is a conservative estimate of the bounds of the objective function. In the operation of the genetic algorithm, the choice of fitness function directly determines the evolution of the population and whether better results can be found; it is therefore a crucial step, closely tied to the quality of the genetic algorithm.

We also use the genetic algorithm to optimize the network structure. The structural optimization problem is transformed into a biological evolution process, the optimal solution of the structural optimization is obtained through the evolutionary operators, and traditional algorithms are then used to train the weights of the optimized structure.

IGA Applied to the Optimization of Neural Network Structure. We exploit the global search of the genetic algorithm to find the most suitable network connection weights and network structure. For practical problems, the numbers of nodes n and m in the input and output layers of the feedforward multilayer neural network are determined by the problem itself, so the main task of network design is to determine the hidden layers; the problem thus reduces to determining the number of hidden nodes t and the connection weights w. Generally, the maximum value of t is twice the number of nodes in the input layer.

Chromosome Coding. Each chromosome corresponds to the topology of one neural network. The layer control gene controls the number of hidden layers of the neural network, the neuron control gene determines the neurons that are activated in each hidden layer, and the parameter gene represents the connection weight and threshold of each neuron. Control genes, which govern the structure of the entire network, comprise the layer control genes and the neuron control genes; they generally use binary codes, with "1" indicating that the lower-layer genes are active and "0" that they are inactive. The parameter gene generally adopts real-number coding, which shortens the chromosome. The population size is of great significance for the convergence of genetic algorithms: if it is too small, satisfactory results cannot be obtained; if it is too large, the computation becomes heavy. Generally, the population size is 50-100.

Crossover and Mutation. Layer control genes and neuron control genes generally use single-point or multipoint crossover: one or more crossover points are set randomly in the individual's code string, and the partial chromosomes of the paired individuals are exchanged at those points. The parameter gene uses arithmetic crossover, the production of two new individuals by the linear combination of two individuals. For an arithmetic crossover between two individuals X_A and X_B with mixing coefficient α in [0, 1], the two new individuals generated after the crossover are X_A' = αX_A + (1 - α)X_B and X_B' = αX_B + (1 - α)X_A. Crossover is the main way to generate new individuals in the genetic algorithm.
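A sketch of this operator for real-coded parameter genes; drawing the mixing coefficient uniformly when none is given is a common convention assumed here, not a prescription from the text.

    import random

    def arithmetic_crossover(x_a, x_b, alpha=None, rng=random.random):
        """Arithmetic crossover of two real-coded parents:
        X_A' = a*X_A + (1-a)*X_B and X_B' = a*X_B + (1-a)*X_A."""
        if alpha is None:
            alpha = rng()            # assumed: mixing coefficient drawn in [0, 1)
        child_a = [alpha * a + (1 - alpha) * b for a, b in zip(x_a, x_b)]
        child_b = [alpha * b + (1 - alpha) * a for a, b in zip(x_a, x_b)]
        return child_a, child_b

    # Two weight vectors crossed with a fixed alpha for reproducibility.
    print(arithmetic_crossover([0.2, -1.0, 0.5], [0.8, 1.0, -0.5], alpha=0.7))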
The crossover probability is generally large; the recommended range is 0.4 to 0.99. The mutation operation includes the mutation of the control genes and of the parameter genes they control. Mutating a control gene changes a "1" in the control gene string to "0", or a "0" to "1", with a certain probability; this operation changes the structure of the hidden layers, and favorable mutations are preserved through the selection operation.

Optimal Design of Neural Network Structure Based on IGA. Introducing the genetic algorithm into the neural network is a new direction of neural network research. The IGA algorithm proposed in this paper can optimize the structure and the weights of the neural network at the same time; it is a random search algorithm that converges to the global optimum in the sense of probability. A three-tier hierarchical structure is adopted: the second tier is the layer control gene, which controls the number of hidden layers of the ANN, and the first tier is the neuron control gene, which determines the neurons that are activated in each hidden layer. Control genes are coded in binary, with "1" indicating that the underlying genes are active and "0" that they are inactive. The parameter gene represents the connection weight and threshold of each neuron and uses real-number coding. We calculate the fitness value of each individual and sort the individuals in the population from high to low fitness, where the error measure is E = (mean square error on the training data) + (network complexity), and the complexity function is the number of active connection weights in the network divided by the number of all connection weights (active and inactive).

We adopt adaptive crossover: P_c = P_c1 - (P_c1 - P_c2)(f' - f_avg)/(f_max - f_avg) when f' >= f_avg, and P_c = P_c1 otherwise, where f' is the larger fitness value of the two crossover strings, f_avg and f_max are the average and maximum fitness of the current population, P_c1 = 0.9, and P_c2 = 0.6. We adopt adaptive mutation: P_m = P_m1 - (P_m1 - P_m2)(f - f_avg)/(f_max - f_avg) when f >= f_avg, and P_m = P_m1 otherwise, where f is the fitness value of the mutation string, P_m1 = 0.1, and P_m2 = 0.001. The algorithm flow chart is shown in Figure 4.
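A sketch of these adaptive probabilities. The display equations were lost in extraction; the linear interpolation against the population's average and maximum fitness is the standard adaptive scheme assumed here, chosen because it matches the constants quoted above and the earlier statement that high-fitness individuals should receive smaller probabilities.

    P_C1, P_C2 = 0.9, 0.6      # crossover probability bounds (from the text)
    P_M1, P_M2 = 0.1, 0.001    # mutation probability bounds (from the text)

    def adaptive_pc(f_prime, f_avg, f_max):
        """Crossover probability for a pair whose larger fitness is f_prime:
        above-average pairs cross over less, protecting good individuals."""
        if f_prime < f_avg or f_max <= f_avg:
            return P_C1
        return P_C1 - (P_C1 - P_C2) * (f_prime - f_avg) / (f_max - f_avg)

    def adaptive_pm(f, f_avg, f_max):
        """Mutation probability for an individual of fitness f: the best
        individuals are mutated least, poor ones are perturbed more."""
        if f < f_avg or f_max <= f_avg:
            return P_M1
        return P_M1 - (P_M1 - P_M2) * (f - f_avg) / (f_max - f_avg)

    # The fittest individual gets p_c = 0.6 and p_m = 0.001, while an
    # average individual keeps p_c = 0.9 and p_m = 0.1.
    print(adaptive_pc(1.0, 0.5, 1.0), adaptive_pm(1.0, 0.5, 1.0))
    print(adaptive_pc(0.5, 0.5, 1.0), adaptive_pm(0.5, 0.5, 1.0))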
There are 42 piezoelectric sensors on the reverse side of the upper wall of the test piece (the side where the ribs are glued). Since the upper wall is symmetrical, the letter "L" is used to indicate the left half of the upper wall and "R" the right half, and each sensor is numbered so that its number is consistent with its layout position number. Each piezoelectric chip sensor measures 60 mm × 40 mm × 0.25 mm, and its material is PZT-5 piezoelectric ceramics.

Simulation Experiment and Result Analysis. Because the test piece has a complex structure, modeling it with solid elements would be too cumbersome. This paper therefore uses shell elements: the upper wall plate adopts the SHELL99 shell element based on classical laminate theory; the piezoelectric sheets bonded to the upper wall plate use PLANE13 plane elements and are polarized in the y-direction; and the wall panels and ribs use SHELL93 shell elements, with the differences in thickness controlled by real constants. To further simplify the finite element model and reduce the computational workload, the lower wall slab is ignored during modeling, rigid boundary conditions are applied at the connection of the wall slab, the rib, and the lower wall slab, and the structural symmetry is exploited through the symmetrical boundary condition DSYM to establish a half model of the structure. The distribution of the finite element sensors on the right half of the test piece is shown in Figure 5.

Using the impact load shown in Figure 6, an impact simulation test is performed on the right half of the test piece model, the transient response signals of each piezoelectric sensor are obtained, and feature extraction is carried out. Since the piezoelectric sheet uses the planar element PLANE13 and is polarized in the y-direction, the difference between the response signals of the middle node of the upper boundary (the upper node) and the middle node of the lower boundary (the lower node) of the piezoelectric sheet is used for feature extraction. The response signal of piezoelectric sensor 8 is shown in Figure 7: the IGA-ANN optimized value is close to the expected value of the response signal, while the unoptimized value differs from the expected value considerably.

Optimal Configuration of Test Piece Sensors Based on Genetic Neural Network. Feature extraction is performed on the response signals of each sensor, yielding the signal feature data for different impact positions and different sensors. This provides training and test samples for impact-damage location detection with IGA-ANN, and thus enables an optimal sensor configuration based on damage detection. The time consumed by the optimal configuration of the test piece sensors is shown in Figure 8: the IGA-ANN optimized configuration time is essentially the same as the expected value, while the unoptimized configuration time is larger.

Regarding the layout mode of the wall plate sensors on the test piece, arranging the maximum of 42 sensors is expected to reduce the detection error, that is, to improve detection performance. However, too many sensors increase the workload of signal processing, the data acquisition and processing equipment used with the sensors is expensive, and the sensors themselves have a cost, so the overall cost increases. Considering cost and benefit together, this layout mode is not optimal. It is therefore worth accepting a slightly reduced detection benefit in exchange for the largest possible cost reduction, that is, reducing the number of sensors as much as possible while increasing the detection error as little as possible, so as to obtain the best number and placement of the test piece sensors. Using the genetic neural network method, and taking the impact-damage location detection problem as the criterion, we optimize the sensor configuration of the right half of the test piece model and then obtain the optimal configuration of the entire test piece. The genetic neural network method is used to optimize the sensor placement for each fixed number of sensors: in the optimization process, the relevant data for the different placement modes are used for IGA-ANN training and testing, and the optimal placement positions for the different numbers of sensors are finally obtained; a toy sketch of this combinatorial search is given below. These figures show that, owing to the symmetry of the structure and boundary conditions of the test piece, the optimal placement positions for the different numbers of sensors are also symmetrical about the structure.
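The combinatorial part of the search just described can be sketched as follows. The paper's actual pipeline scores a placement by training and testing IGA-ANN on the impact-response features; here a stand-in error function replaces that expensive step, and the simple (1+lambda)-style mutation search over fixed-size binary masks is an illustrative substitute for the full genetic machinery, not the paper's method.

    import random

    def random_mask(n_positions, n_keep, rng=random.sample):
        keep = set(rng(range(n_positions), n_keep))
        return tuple(1 if i in keep else 0 for i in range(n_positions))

    def mutate(mask, rng=random.randrange):
        """Swap one kept sensor with one unused position (keeps n_keep fixed)."""
        kept = [i for i, b in enumerate(mask) if b]
        free = [i for i, b in enumerate(mask) if not b]
        out = list(mask)
        out[kept[rng(len(kept))]] = 0
        out[free[rng(len(free))]] = 1
        return tuple(out)

    def optimise_placement(error_fn, n_positions=42, n_keep=24,
                           pop_size=30, generations=200):
        """Toy (1+lambda)-style search: smaller detection error is better."""
        best = random_mask(n_positions, n_keep)
        best_err = error_fn(best)
        for _ in range(generations):
            for cand in (mutate(best) for _ in range(pop_size)):
                err = error_fn(cand)
                if err < best_err:
                    best, best_err = cand, err
        return best, best_err

    # Invented stand-in error: pretend a few 'hot' positions matter most.
    HOT = {3, 11, 19, 27, 35}
    toy_error = lambda m: 5.0 - sum(m[i] for i in HOT) + 0.01 * sum(m)
    print(optimise_placement(toy_error))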
The detection error of the impact damage position for IGA-ANN with different numbers of sensors is shown in Figure 9. The results show that, when more sensors are placed initially, the IGA-ANN method can effectively reduce the detection error. This simulation result can provide guidance for the optimal configuration of the actual piezoelectric sensors on the structural test piece.

Conclusion. This paper presents the overall design framework of the IoT sensor data processing platform and explains the advantages of building the platform on an integrated combination of SSH and MINA. The platform is divided into two parts, a web management platform and a data communication service system, which interact with the database by integrating their business layers into one. The web management platform provides services such as customizable communication protocols and the management of equipment information, personal information, and announcement information, together with the monitoring and analysis of collected data. The sensor data communication service system realizes the communication service between the client terminal and the device terminal and the processing of the sensor data collected by the devices.

The systems we face are becoming more and more complex, and so are the problems we handle. It is very difficult to express the regularities of complex systems with the string representation of traditional GA; genetic programming uses a tree structure, which easily expresses the hierarchy in a problem. We adopt an elitist strategy and retain a few of the best (highest fitness) individuals in each generation, exempting them from the selection and mutation operations. We find the most and least fit individuals in the current population; if the fitness of the best individual in the current generation is higher than that of the best individual found so far, the current best becomes the new overall best, and the worst individual in the current population is replaced by the best individual so far. We use IGA to optimize the neural network, so that IGA optimizes both the structure and the weights of the network. In this paper, the structure of the wing box test piece is analyzed, a finite element model of the test piece is established with the finite element method, and a numerical simulation of the impact piezoelectric response is carried out. Using the genetic neural network method, with the impact-damage location detection problem as the criterion, the piezoelectric sensors of the wing-box-section test piece are optimized, and the optimal configuration of the sensors within the initial layout mode is obtained. Compared with selecting all 42 sensors of the initial layout, only 24 sensors are now required, that is, the number of sensors is reduced by 18 and the cost is lowered; although the damage detection error increases slightly, the combined cost-benefit index is the best. This simulation result can provide a useful reference for the optimal configuration of the actual piezoelectric sensors on the structural test piece.

The current genetic algorithm imitates the process of biological evolution only formally; it cannot yet characterize the evolution process itself, let alone the real learning process of neuronal thinking.
These limitations are reflected in the fact that existing genetic algorithm implementations are affected by many factors and have not yet formed a systematic theory; the genetic algorithm therefore needs deeper study at the level of its model. Current research on genetic algorithms is only a beginning, and it is necessary to examine their status from a higher and broader perspective and to explore a new way for the future. When the genetic neural network method is used to optimize the placement of sensors, the feasibility analysis relies on engineering judgment, with some explanation given. Engineering judgment is rough, however, and sometimes it is necessary to delve into the theoretical basis behind the problem and seek verification from the relevant theory; this is the difficulty of the problem and needs in-depth study.

Data Availability. The data used to support the findings of this study are available from the corresponding author upon request.
New database for a sample of optically bright lensed quasars in the northern hemisphere

In the framework of the Gravitational LENses and DArk MAtter (GLENDAMA) project, we present a database of nine gravitationally lensed quasars (GLQs) that have two or four images brighter than r = 20 mag and are located in the northern hemisphere. This new database consists of a rich variety of follow-up observations included in the GLENDAMA global archive, which is publicly available online and contains 6557 processed astronomical frames of the nine lens systems over the period 1999-2016. In addition to the GLQs, our archive also incorporates binary quasars, accretion-dominated radio-loud quasars, and other objects, where about 50% of the non-GLQs were observed as part of a campaign to identify GLQ candidates. Most observations of GLQs correspond to an ongoing long-term macro-programme with 2-10 m telescopes at the Roque de los Muchachos Observatory, and these data provide information on the distribution of dark matter at all scales. We outline some previous results from the database, and we additionally obtain new results for several GLQs that update the potential of the tool for astrophysical studies.

(Tables 4-6, 8-11, and 13-16 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/vol/page.)

1. Introduction

A quasar is a distant active galactic nucleus (AGN) of high luminosity powered by accretion onto a super-massive black hole (e.g. Rees 1984). The UV thermal emission is generated by hot gas orbiting the central black hole: the continuum comes from tiny sources and shows variability over several timescales, while broad emission lines are produced in regions around the continuum sources (e.g. Peterson 1997; Krolik 1999). Only rarely is the same quasar seen at different positions on the sky. These positions are close together, and they are located around a massive galaxy acting as a lens. The gravitational field of the foreground galaxy bends the light from the background quasar and often produces two or four images of the distant AGN. Although a gravitationally lensed quasar (GLQ) is a rare phenomenon, observations of GLQs provide very valuable information about the structure of accretion flows, the distribution of mass in lensing galaxies, and the physical properties of the Universe as a whole (e.g. Schneider et al. 1992, 2006).

A significant part of the UV emission of quasars at redshift z > 1 is observed at optical wavelengths, and thus optical photometric monitoring of GLQs revealed a wide diversity of intrinsic flux variations. These variations were used, among other things, to determine accurate time delays between quasar images, which in turn led to constraints on the Hubble constant and the dark components of the Universe (e.g. Oguri 2007; Sereno & Paraficz 2014; Wei et al. 2014; Rathna Kumar et al. 2015; Yuan & Wang 2015; Pan et al. 2016; Bonvin et al. 2017), as well as on lensing mass distributions (e.g. Goicoechea & Shalyapin 2010). Stars in lensing galaxies are also responsible for microlensing effects in optical light curves and spectra of GLQs, and the observed extrinsic variations and spectral distortions constrained the size of continuum and broad-line sources, the structure of emitting regions, the mass of super-massive black holes, and the composition of intervening galaxies (e.g. Shalyapin et al. 2002; Kochanek 2004; Richards et al. 2004; Morgan et al. 2008; Sluse et al.
2012; Guerras et al. 2013; Motta et al. 2017). Deep imaging and spectroscopy of GLQs are also key tools to discuss the distribution of mass, dust, and gas in lensing objects (e.g. Schneider et al. 2006). In addition, optical polarimetry may help to better understand the physical scenarios (e.g. Wills et al. 1980; Chae et al. 2001; Hutsemékers et al. 2010).

Since 1998, the Gravitational LENses and DArk MAtter (GLENDAMA) project has been planning, conducting, and analysing (mainly) optical observations of GLQs and related objects. In the first decade of the current century, the advent of a robotic 2m telescope (Steele et al. 2004) at the Roque de los Muchachos Observatory (RMO) represented a revolution on the observational side of GLQs. A main advantage is the possibility of rapid reaction in observation scheduling with a variety of available instruments. Along with the installation of the robotic telescope, the start of the scientific operational phase of a 10m telescope (Alvarez et al. 2006) paved the way to ambitious gravitational lensing programmes at the RMO. We thus focused on the construction of a comprehensive database for a sample of ten GLQs with bright images (r < 20 mag) at 1 < z < 3. The selected lens systems have different morphologies and angular separations between their images. In this paper, we introduce the current version of the database, including ready-to-use (processed) frames of nine targets. This astronomical material has been collected over 17 years, using facilities at the RMO, the Teide Observatory (TO), and space observatories (Swift and Chandra monitoring campaigns of the first lensed quasar; Gil-Merino et al. 2012). Our tenth and last target was discovered in 2017 (PS J0147+4630; Berghea et al. 2017; Lee 2017; Rubin et al. 2017), and we are starting to observe this GLQ, in which three out of its four images are arranged in an arc-like configuration. We wish to perform an accurate follow-up of each target over 10-30 years, since observations on 10- to 30-year timescales are crucial to detect significant microlensing effects in practically all objects in the sample (Mosquera & Kochanek 2011).

In addition to thousands of astronomical frames in a well-structured datastore that is publicly available online, the website of the GLENDAMA project offers high-level data products (light curves, calibrated spectra, polarisation measures, etc). We remark that the GLENDAMA observing programme does not only focus on imaging lens systems and light-curve construction. The robotic telescope allows us to follow up the spectroscopic and polarimetric activity of some targets, and additionally, we obtain deep near-infrared (NIR) imaging with several 2-4m telescopes. Here, we present new results for six of the nine targets. Results for the other three lens systems have been published very recently. Despite the existence of high-resolution spectra of some images of GLQs in the Sloan Digital Sky Survey (SDSS) database (the SDSS spectroscopic database includes observations of the Baryon Oscillation Spectroscopic Survey, BOSS; Smee et al. 2013), we also conduct a programme with the very large telescope at the RMO to acquire spectra of unprecedented signal quality (e.g. Shalyapin & Goicoechea 2017). The paper is organised as follows: in Sect. 2, we present an overview of the global archive and then describe the GLQ observations in detail. In Sect.
3, we review relevant intermediate results and discuss their astrophysical impact. New light curves, polarisations, and spectra at optical wavelengths (and deep NIR imaging of QSO B0957+561) are also presented and placed into perspective in Sect. 3. The summary and future prospects appear in Sect. 4.

2. Overview of the archive

The global archive consists of a datastore of 40 GB in size, whose content is organised and visualised using MySQL/PHP/JavaScript/HTML5 software. (MySQL is a database management system developed, distributed, and supported by Oracle Corporation, available at http://www.mysql.com/. PHP is a general-purpose scripting language especially suited to web development, available at http://php.net/. JavaScript is an object-oriented programming language commonly used to create interactive effects within web browsers, developed by the Mozilla Foundation at https://developer.mozilla.org/en-US/docs/Web/JavaScript. HTML5 is the fifth version of the standard HTML markup language for structuring and presenting web content, developed by the World Wide Web Consortium at https://www.w3.org/.) A web user interface (WUI, http://grupos.unican.es/glendama/database/) allows users to surf the archive, see all its content, and freely download any dataset. This interface is a three-step tool: the first step is to select an object and then click the submit button to see the datasets available for the selected target. In this second screen, it is possible to select a dataset and press the retrieve button to view its details (telescope, instrument, file names, observation dates, exposure times, etc). In the third step of the WUI, the user can download the frames of interest.

The GLENDAMA datastore incorporates more than 7000 ready-to-use astronomical frames of 26 targets falling into two classes: GLQs, and non-GLQs (binary quasars, accretion-dominated radio-loud quasars, and others). In spite of this, our observational effort was mainly concentrated on the construction of a GLQ database (see Fig. 1). The full sample of GLQs and the bulk of data are described in detail in Sect. 2.2. The GLENDAMA database covers the period 1999-2016 (it was updated on 1 October 2016), and we have used many telescopes and varied instrumentation throughout the past 17 years. In addition to an X-ray monitoring campaign of a lensed quasar in 2010 (see Sect. 2.2), the archive incorporates frames (imaging, polarimetry, and spectroscopy) that were taken with facilities operating in the near-ultraviolet (NUV)-visible-NIR spectral region. Such facilities and some additional details (filters, grisms, gratings, etc) are given in Table 1. Users can also access information about air mass and seeing values (when available in file headers). Seeing values are not equally accurate through all the observations: for some instruments (e.g. RATCam, IO:O, RINGO2, and RINGO3), the full-width at half-maximum (FWHM) of the seeing disc is directly estimated from frames, and thus is a reliable reference. However, FWHM values in FRODOSpec and SPRAT files are estimated before spectroscopic exposures, so these foreseen values may appreciably differ from true values. For spectroscopic observations, we offer frames of the science target and a calibration star.

2.2. Sample of GLQs

We focused on nine GLQs in the northern hemisphere (see Table 2). Every GLQ in our sample has two or four images with r < 20 mag.
The source redshifts vary between 1.24 and 2.57 (⟨z⟩ ∼ 1.9), and the sample includes five relatively compact systems and four wide-separation double quasars, the latter having two images separated by Δθ ≥ 2″. Over the first ten years of our follow-up observations (1999-2008), the target selections were based on the GLQs known in 1999. From 2009 on, we have also studied SDSS GLQs. Thus, we try to achieve a deeper knowledge of some classical targets, such as QSO B0909+532, FBQS J0951+2635, QSO B0957+561, QSO B1413+117, and QSO B2237+0305, and simultaneously characterise other recently discovered systems (see Tables 2 and 3).

(Notes to Table 2, which is not reproduced here: (a) source redshift; (b) number of quasar images; (c) angular separation between images for double quasars, or typical angular size for quadruple quasars; (d) r-band magnitudes of quasar images, to be interpreted with caution, since we deal with variable objects; (e) measured time delay for double quasars, or the longest of the measured delays for quadruple quasars (1σ confidence interval); (f) classification (redshift). One delay is given in the g band, with evidence of chromaticity in the optical delay, since it is 420.6 ± 1.9 d in the r band. One lens redshift comes from gravitational lensing data and a concordance cosmology, in reasonable agreement with the photometric redshift of the secondary lensing galaxy and the most distant overdensity, as well as with the redshift of one of the absorption systems. References: (1) Kochanek et al. (1997); (2) Hainline et al. (2013; see also Goicoechea et al. 2008); (3) Oscoz et al. (1997); (4) Lehár et al. (2000); (5) Lubin et al. (2000); (6) Kundić et al. (1997); … Shalyapin et al. (2008); (13) Stockton (1980); (14) Young et al. (1980); (15) Huchra et al. (1985); … (31) Yee (1988); (32) Vakulik et al. (2006).)

We have also been involved in a search for new double quasars in the SDSS-III database with the purpose of "going the whole way": discovery and subsequent characterisation (Sergeyev et al. 2016). After selecting three superb candidates through Maidanak Astronomical Observatory (MAO) deep imaging under good seeing conditions (i.e. quasar-companion pairs showing evidence for the existence of a near lensing galaxy, as well as parallel flux variations on a long timescale), Sergeyev et al. (2016) confirmed the GLQ nature of SDSS J1442+4055 (see also More et al. 2016). This object is being intensively observed to unveil its physical properties. Follow-up observations of the second superb candidate (SDSS J1617+3827) also led to the discovery of a faint double quasar (Shalyapin et al. 2018, in preparation); however, in this paper, the newly discovered GLQ is treated as an unconfirmed GLQ and incorporated into the non-GLQ class (see Sect. 2.1). The third superb candidate (SDSS J1642+3200) turned out to be a system consisting of a quasar and a different AGN.

After normal LT and GTC science operations at the RMO, most GLQ observations were carried out with these two telescopes. In a parallel effort, processing tools for some LT instruments (photometric pipelines for CCD imagers and specific software for FRODOSpec) and LT-GTC science products (several light curves and calibrated spectra) were made available to the community. However, the full set of NUV-visible-NIR frames of GLQs comes from a variety of observing programmes with the GTC, IAC80, INT, LT, NOT, TNG, and UVOT (see Table 3).
In 2010, we also performed space-based observations of QSO B0957+561 with the Chandra X-ray Observatory (http://chandra.harvard.edu/). In this X-ray (0.1-10 keV) monitoring campaign, we used the Advanced CCD Imaging Spectrometer-S3 chip. The GLQ database consists of a total of 6557 processed frames, which are not homogeneously distributed among the nine objects (see Fig. 2) for different reasons. For example, we used the first lensed quasar as a pilot target to check the performance of the majority of the instruments involved in the project, and therefore 50% of the GLQ frames correspond to QSO B0957+561. In general, dates of discovery, scientific aims, technical constraints, opportunities arising at some time periods, and decisions of time allocation committees are key factors to explain the pie chart in Fig. 2. Unfortunately, the EOCA monitoring campaign of QSO B0909+532 (Ullán et al. 2006), the QuOC-Around-The-Clock observations of QSO B0957+561 (Colley et al. 2003), and the GLITP optical monitoring of QSO B2237+0305 could not be assembled in our disc-based storage for technical reasons.

All frames in Table 3 are ready to use because they were processed with standard techniques (sometimes as part of specific pipelines) or more sophisticated reduction procedures; the standard pipelines are documented on the LT website (see notes to Table 1), while the FRODOSpec pipeline (Barnsley et al. 2012) and the L2LENS reduction tool are described in Shalyapin & Goicoechea (2014b). (Notes to Table 3, which is not reproduced here: (b) this programme is mainly focused on a long-term optical monitoring of GLQs with the LT; (c) no i-band data in 2014-2016; (d) in addition to NUV-visible-NIR data, 12 X-ray (0.1-10 keV) frames were obtained as part of a monitoring campaign with Chandra.) We remark that the LT is a unique facility for photometric, polarimetric, and spectroscopic monitoring campaigns of GLQs. However, taking into account the spatial resolution (pixel size of ∼0″.4-0″.5) of RINGO2, RINGO3, and SPRAT, we are currently tracking the evolution of broad-band fluxes for almost all systems, whereas we are only obtaining spectroscopic and/or polarimetric data of the wide-separation double quasars, such as QSO B0957+561.

The long-slit spectroscopy (SPRAT, OSIRIS, ALFOSC, and IDS) was processed using standard methods of bias subtraction, trimming, flat-fielding, cosmic-ray rejection, sky subtraction, and wavelength calibration. The reduction steps of the StanCam frames included bias subtraction and flat-fielding using sky flats, while the combined frames for deep-imaging observations with ALFOSC were obtained in a standard way. We also applied standard reduction procedures to the IAC80 original data, although only the final VR datasets contain WCS information in the FITS headers; these most relevant data were also corrected for cosmic-ray hits on the CCD. The NICS frames were processed with the SNAP software, and different types of instrumental reductions are applied to Swift and Chandra observations before the data are made available to users. These space-based observatories perform specific processing tasks that are outlined in dedicated websites.

3.1. Previous results

Through frames in the database, as well as through those that could not be incorporated into the datastore for technical reasons (see Sect. 2.2), we obtained light curves and spectra that led to many astrophysical outcomes. Most of these are grouped into Sects. 3.1.1, 3.1.2, 3.1.3, and 3.1.4.
3.1.1. Quasar accretion flows

The RATCam/r light curves of QSO B0909+532 in 2005-2006 indicated that symmetric triangular flares in an accretion disc (AD) are the best scenario (of those tested by us) to account for the variability of the MUV (∼2600 Å) continuum emission from the quasar. In addition, combining RATCam/gr and United States Naval Observatory (USNO)/r data of QSO B0909+532 over the period 2005-2012, Hainline et al. (2013) found prominent microlensing events and constrained the size of the MUV continuum source in the AD, deriving a typical half-light radius of r_1/2 ∼ 20-50 Schwarzschild radii. Regarding QSO B0957+561, old IAC80/R data, the RATCam/r brightness records spanning 2005-2007, and the USNO/r dataset in 2008-2011 were used by Hainline et al. (2012) to detect a microlensing event and measure the size of the continuum source emitting at ∼2600 Å (see, however, Shalyapin et al. 2012). Their 1σ interval for the size (r_1/2) of this MUV source was 10^16-10^17 cm (inclination of 60°). There is also strong evidence supporting the presence of a centrally irradiated AD in the heart of QSO B0957+561: a Chandra-UVOT-LT monitoring campaign from late 2009 to mid-2010 suggested that a central EUV source drives the variability of the first GLQ, so EUV flares originating in the immediate vicinity of the black hole are thermally reprocessed in the AD at 20-30 Schwarzschild radii from the dark object (Goicoechea et al. 2012). Interpreting the reverberation-based size of the 2600 Å source as a flux-weighted emitting radius (e.g. Fausnaugh et al. 2016), we obtained r_1/2 = (1.1 ± 0.2) × 10^16 cm (1σ interval), and thus the source size from the microlensing analysis of Hainline et al. (2012) is marginally consistent with this measurement. Our accurate value of r_1/2 is in good agreement with the overlapping region between the Hainline et al. interval and the microlensing-based constraint obtained by Refsdal et al. (2000).

RATCam-IO:O light curves and OSIRIS spectra of SDSS J1339+1310 indicated that this system is likely the main microlensing factory discovered so far (Shalyapin & Goicoechea 2014a). Thus, data of SDSS J1339+1310 are very promising tools to reveal fine details of the structure of its accretion flow. In particular, we have shown how microlensing magnification ratios of the continuum can be used to check the structure of the AD, and we have reported some physical properties of broad-line emitting regions: the Fe iii region is more compact than the Fe ii region, while the C iv region has an anisotropic structure and a size probably not much larger than the AD. There is also clear evidence that high-ionisation regions have smaller sizes than low-ionisation regions. This was found using high-ionisation emission lines in OSIRIS spectra of SDSS J1339+1310 (Si iv/O iv and C iv) and SDSS J1515+1511 (C iv and He ii; Shalyapin & Goicoechea 2017), in good agreement with the results in Guerras et al. (2013) from a sample of 16 GLQs. We also showed that the GLITP light curve of a microlensing high-magnification event in the A image of QSO B2237+0305, alone or in conjunction with data from the OGLE collaboration (Woźniak et al. 2000), can be used to probe the structure of the inner accretion flow in the distant quasar (Shalyapin et al. 2002; Goicoechea et al. 2003; Gil-Merino et al. 2006). This accurate microlensing curve (from October 1999 to early February 2000) has been discussed by several other groups (e.g. Kochanek 2004; Bogdanov & Cherepashchuk 2004; Vakulik et al.
2004; Moreau et al. 2005; Udalski et al. 2006; Koptelova et al. 2007; Alexandrov & Zhdanov 2011; Abolmasov & Shakura 2012; Mediavilla et al. 2015).

3.1.2. Lensing mass distributions

Deep I-band imaging (ALFOSC) and spectroscopic (OSIRIS) observations of SDSS J1339+1310 allowed us to reliably reconstruct the mass distribution acting as a strong gravitational lens (Shalyapin & Goicoechea 2014a). Using a singular isothermal ellipsoid (SIE) to model the mass of the main lensing galaxy (e.g. Koopmans et al. 2006), we obtained an offset between the light and mass position angles. This misalignment suggests that SDSS J1339+1310 is affected by external shear γ arising from the environment of the main lens (an early-type galaxy) at z = 0.61 (e.g. Gavazzi et al. 2012). We then considered an SIE + γ mass model, where the SIE was aligned with the light distribution of the main lens. Although the uncertainty in the SIE mass scale was below 10%, new observational constraints on the macrolens flux ratio and the time delay must produce a much more accurate SIE + γ solution.

A cross-correlation analysis of the RATCam light curves of the four images of QSO B1413+117 yielded three independent delays, which were also used to improve the lens solution for this system and to estimate the previously unknown lens redshift (Goicoechea & Shalyapin 2010). The mass model consisted of an SIE (main lensing galaxy), a singular isothermal sphere (secondary lensing galaxy), and external shear (MacLeod et al. 2009), and we derived a lens redshift of z = 1.88 (+0.09/-0.11, 1σ interval; see also Akhunov et al. 2017). Additionally, from OSIRIS spectroscopy of field objects in the external shear direction, we identified an emission-line galaxy at z ∼ 0.57 that is responsible for < 2% of γ ∼ 0.1 (Shalyapin & Goicoechea 2013).

Very recently, IO:O light curves and OSIRIS-SPRAT spectra of SDSS J1515+1511 have been used to obtain strong constraints on its time delay and its macrolens flux ratio (Shalyapin & Goicoechea 2017). Inada et al. (2014) tentatively associated the main lensing galaxy with an Fe/Mg absorption system at z = 0.74 (intervening gas), and we have therefore assumed the existence of intervening dust at this redshift to measure the macrolens flux ratio. Our observational constraints practically did not modify the previous SIE + γ solution (Rusu et al. 2016). Moreover, the redshift of the lensing mass was found to be consistent with z = 0.74, which confirmed the putative value of z for the main lens (an edge-on disc-like galaxy). From the OSIRIS data, we also extracted the spectrum of an object that is ∼15″ away from the quasar images. This early-type galaxy at z = 0.54 may account for < 10% of the large external shear (γ ∼ 0.3).

3.1.3. Dust and metals in main lensing galaxies

We probed the intervening medium along the lines of sight towards the two images A and B of QSO B0957+561 with great effort. The light rays associated with these images pass through two separate regions within a giant elliptical (lens) galaxy at z = 0.36 (see Table 2). Although there is no evidence of Mg ii absorption at z = 0.36 (Young et al. 1981b), we studied the possible presence of dust in the cD galaxy during a long quiescent phase of microlensing activity (e.g. Shalyapin et al. 2012). Using continuum delay-corrected flux ratios B/A from Hubble Space Telescope (HST) spectra and GLITP/VR photometric data in 1999-2001, we found a chromatic behaviour resembling extinction laws for galaxies in the Local Group (Goicoechea et al. 2005).
While the macrolens flux ratio is 0.75 (e.g. Garrett et al. 1994), the continuum ratios were greater than 1, indicating that the A image is more affected by dust. We obtained a differential visual extinction ΔA_AB(V) = A_B(V) - A_A(V) ∼ -0.3 mag, which can be interpreted in different ways. For example, the simplest scenario is the presence of a dust cloud in front of the image A, at ∼26 kpc from the centre of the cD galaxy. This cloud must be compact enough to produce a negligible extinction over the broad-line emitting regions, since emission-line flux ratios agree reasonably well with B/A ∼ 0.75 (e.g. Schild & Smith 1991; Goicoechea et al. 2005).

Time-domain observations of QSO B0957+561 were even more intriguing than those made in the spectral domain. RATCam/gr light curves in 2008-2010 showed well-sampled, sharp intrinsic fluctuations with an unprecedentedly high signal-to-noise ratio. These allowed us to measure very accurate g-band and r-band time delays, which were inconsistent with each other: the r-band delay exceeded the 417-d delay in the g band by about 3 d. In two periods of violent activity, we also detected an increase in the continuum flux ratios B/A, as well as a correlation between B/A values and the flux level of B. This posed the question whether the dust cloud affecting the A image might be responsible for all these time-domain anomalies. Shalyapin et al. (2012) naively suggested that chromatic dispersion (e.g. Born & Wolf 1999) might account for a three-day lag between g-band and r-band signals crossing a dusty region. However, it is hard to reconcile an interband lag of some days with a structure belonging to the giant elliptical galaxy and containing standard dust. In addition, the increase in the flux ratios (diminution of A relative to B) during violent episodes was associated with highly polarised light passing through a dust-rich region with aligned elongated dust grains; this light may suffer from a higher extinction than that of weakly polarised light in periods of normal activity. In Sect. 3.2.3 (Overview), we revise our previous crude interpretation of the chromatic time delay and the continuum flux ratios between the two images of QSO B0957+561.

The main lens in SDSS J1339+1310 is an early-type galaxy at z = 0.61, and SDSS-OSIRIS spectra of both quasar images display Fe ii, Mg ii, and Mg i absorption lines at the lens redshift. These metals are not uniformly distributed inside the galaxy, since the Mg ii absorption is stronger in the A image. From OSIRIS spectra of A and B, we also inferred a typical value ΔA_AB(V) = -0.27 mag for the differential visual extinction in the system. Hence, we find that A is the most reddened image, supporting the notion that the more metal-rich the gas, the higher the dust content. Inada et al. (2014) also carried out observations of the A and B images of SDSS J1515+1511 with the DOLORES spectrograph on the TNG. Their data revealed the existence of strong Mg ii absorption at the lens redshift in the spectrum of B. This finding was corroborated by our OSIRIS data of the B image, displaying Fe i, Fe ii, and Mg ii absorption in the edge-on disc-like galaxy at z = 0.74. Such absorption features were not detected in the OSIRIS spectrum of the A image. We consistently obtained that B is affected more by dust extinction than A, with ΔA_AB(V) = 0.130 ± 0.013 mag (1σ interval; Shalyapin & Goicoechea 2017). We also note that Elíasdóttir et al.
(2006) studied the differential visual extinction in ten lensing galaxies at z ≤ 1, reporting many values ranging from 0.1 to 0.3 mag.

3.1.4. Cosmology

A time delay of a GLQ enables us to measure the current expansion rate of the Universe (the so-called Hubble constant H_0), provided the lensing mass distribution and its redshift are reasonably well constrained through additional data (e.g. Jackson 2015). Thus, we obtained accurate time delays (with a mean error of 3-4 d) between the images of five GLQs: QSO B0909+532, QSO B0957+561, SDSS J1339+1310, QSO B1413+117, and SDSS J1515+1511 (see Cols. 7-8 in Table 2), which can potentially be used to determine H_0. Our first time delay estimation of QSO B0909+532 (Ullán et al. 2006) was used by Oguri (2007) and Paraficz & Hjorth (2010) to find H_0 values around 66-70 km s^-1 Mpc^-1 for a flat universe. They performed a simultaneous analysis of 16-18 GLQs, adopting a flat universe model with standard amounts of matter (Ω_M) and dark energy (Ω_Λ) that satisfy Ω_M + Ω_Λ = 1. Sereno & Paraficz (2014) confirmed these H_0 values using weaker constraints on the matter and dark energy parameters, while Rathna Kumar et al. (2015) also inferred H_0 = 68.1 ± 5.9 km s^-1 Mpc^-1 (Ω_M = 0.3, Ω_Λ = 0.7) from 10 GLQs with sufficiently accurate relative astrometry, lens redshifts, and time delays, as well as simple lensing masses. This last study was partially based on the LT delays of QSO B0909+532 and QSO B0957+561. In addition to the determination of H_0, our time delays have also been used to discuss different cosmological and gravity models (e.g. Tian et al. 2013; Wei et al. 2014; Yuan & Wang 2015; Pan et al. 2016).

Recently, we have determined the time delay in the two double quasars SDSS J1339+1310 and SDSS J1515+1511 (see Table 2), and it is easy to probe the impact of these delays on the estimation of H_0 via gravitational lensing. For example, assuming a self-consistent lens redshift z = 0.742 in SDSS J1515+1511, and a flat universe with Ω_M = 0.27 and Ω_Λ = 0.73, the best-fit value for H_0 was 72 km s^-1 Mpc^-1 (Shalyapin & Goicoechea 2017). Taking into account an external convergence κ_ext ∼ 0.015, due to a galaxy that is ∼15″ away from the quasar images (see also Sereno & Paraficz 2014), the Hubble constant is decreased by ∼1.5% until it reaches about 71 km s^-1 Mpc^-1.
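The external-convergence correction just quoted is easy to reproduce: to first order, a mass-sheet-like convergence along the line of sight rescales the inferred Hubble constant by (1 - κ_ext). A short check of the numbers in the text, sketching only the scaling rather than a full lens model:

    H0_fit = 72.0       # km/s/Mpc: best-fit value ignoring the lens environment
    kappa_ext = 0.015   # external convergence from the galaxy ~15" away

    # First-order rescaling: H0_true = H0_fit * (1 - kappa_ext).
    print(H0_fit * (1.0 - kappa_ext))   # 70.92, i.e. about 71 km/s/Mpc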
New photometric, polarimetric, and spectroscopic results

In this section, we introduce new results for six objects in the GLQ database. For three objects that have been updated in recent papers, i.e. SDSS J1339+1310, SDSS J1442+4055, and SDSS J1515+1511, we do not include science data derived from frames in the database.

QSO B0909+532

The RATCam photometry in the r band over the period between January 2005 and June 2011 (198 epochs) has been published in Goicoechea et al. (2008) and Hainline et al. (2013). Here, we present additional r-band photometric data of the two quasar images A and B, which were obtained from RATCam frames in February−April 2012 (18 epochs), as well as using the IO:O camera in the period spanning from October 2012 to June 2016 (128 epochs; see Table 3). This new camera has a 10 × 10 arcmin field of view and a pixel scale of ∼ 0.30 arcsec (binning 2×2), and we set the exposure time to 200 or 150 s. After some initial processing tasks, including cosmic-ray removal and bad-pixel masking, a crowded-field photometry pipeline produced magnitudes of A and B for every IO:O frame. Our pipeline relies on IRAF packages and the IMFITFITS software (McLeod et al. 1998). As the lensing galaxy is not apparent in optical frames of QSO B0909+532, a simple photometric model can describe the crowded region associated with this GLQ. This model only consists of two close stellar-like sources, where each source is described by an empirical point-spread function (PSF). To perform the PSF fitting of the double quasar, we mostly considered the 2D profile of the "b" field star as the PSF, after removing the local background to clean its distribution of instrumental flux. However, when this bright star was saturated in certain frames, the PSF was derived from the profile of the "c" field star (e.g. Kochanek et al. 1997). To obtain the IO:O light curves, we removed magnitudes when the signal-to-noise ratio (S/N) of the "c" field star fell below 30. This star has a brightness close to that of the A image, and the S/N was measured through an aperture of radius equal to twice the FWHM of the seeing disc. By visual inspection of the pre-selected brightness records, we then found that the magnitudes of A and B at a few epochs strongly deviate from adjacent data. These outliers were also discarded. The whole selection procedure yielded a rejection rate of about 6% (8 out of 136 epochs). In a last step, taking the root-mean-square deviations between magnitudes on consecutive nights as errors, the uncertainties were 0.011 and 0.017 mag for A and B. The RATCam-IO:O light curves of A and B covering the period 2005 to 2016 are available in tabular format at the CDS: Table 4 includes r-SDSS magnitudes and their errors at 344 epochs. Column 1 lists the observing date (MJD−50 000), Cols. 2 and 3 give photometric data for the quasar image A, and Cols. 4 and 5 give photometric data for the quasar image B. Thus, we combined all our r-band measurements in a machine-readable ASCII file, using MJD−50 000 dates instead of JD−2 450 000 ones. Now, in all the GLENDAMA light curves, the origin of the time axis is MJD−50 000. The r-band data collected by us and the USNO group during a 12-year period are also displayed in the top panel of Fig. 3. The new 146 epochs of magnitudes (after day 5959) reveal new microlensing variability in the difference light curve (DLC; see the middle and bottom panels of Fig. 3). To construct the DLC in Fig. 3, the data of the A image were shifted by −50 d (time delay) and then binned around the dates of the B image, using bins with a semisize of 10 d; only bins including two or more data were taken into account to compute differences between A and B. Although this variability has an amplitude of ∼ 0.1 mag, it is not as strong as in the previous period between days 4000 and 5400 (see the bottom panel in Fig. 3 of Hainline et al. 2013). The new extrinsic signal might better constrain the size of the continuum source emitting at ∼ 2600 Å (see Sec. 3.1.1).
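In essence, the IO:O selection and error logic above reduces to an S/N cut on a reference star, an outlier rejection, and a consecutive-night rms error estimate. A schematic Python version follows (the deviation threshold is a hypothetical placeholder; the S/N limit is the one quoted above):

import numpy as np

def select_epochs(mjd, mag, snr_ref, snr_min=30.0, dev=0.2):
    # Keep epochs where the reference star has S/N > snr_min ...
    keep = snr_ref > snr_min
    mjd, mag = mjd[keep], mag[keep]
    # ... and drop magnitudes deviating strongly from both adjacent data.
    good = np.ones(mag.size, dtype=bool)
    for i in range(1, mag.size - 1):
        if abs(mag[i] - mag[i-1]) > dev and abs(mag[i] - mag[i+1]) > dev:
            good[i] = False
    return mjd[good], mag[good]

def typical_error(mjd, mag, max_gap=1.5):
    # rms deviation between magnitudes on consecutive nights.
    d = np.diff(mag)[np.diff(mjd) < max_gap]
    return np.sqrt(np.mean(d ** 2))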
FBQS J0951+2635

Jakobsson et al. (2005) monitored the double quasar FBQS J0951+2635 soon after its discovery in 1998 (Schechter et al. 1998), measuring a time delay of about 16 d and reporting evidence for microlensing in the period 1999−2001 (see also Paraficz et al. 2006). We also presented R-band light curves of the two images of the GLQ (Shalyapin et al. 2009). These last records (37 epochs), based on Uzbekistan MAO observations between 2001 and 2006, indicated the existence of a long-timescale microlensing fluctuation. The MAO monitoring programme was conducted by an international collaboration of astronomers from Russia, Ukraine, Uzbekistan, and other countries. Here, we analyse new LT photometric observations made during an 8-year period (2009−2016), which allow us to trace the evolution of the extrinsic variation over this century. Our database contains 72 frames in the r band, divided into two groups (see Table 3): 29 RATCam frames in 2009−2012 (for each monitoring night, we usually obtained three consecutive 300 s exposures) and 43 IO:O frames in 2013−2016 (typically, two consecutive 250 s exposures per monitoring night). To fill the LT gap in 2010, 3×300 s ALFOSC exposures of the lens system were taken with the Bessel R filter on 8 February 2010. We also analyse these frames, which are not included in the database. The lensing galaxy is too faint to be detected with a red filter, and thus the system was described as two stellar-like objects, that is, two PSFs. The S1 field star was used to estimate the PSF, whereas we considered the S3 field star to check the reliability of the quasar brightness fluctuations (see the finding chart in Fig. 1 of Shalyapin et al. 2009). We used IMFITFITS to obtain PSF-fitting photometry for the two quasar images and the field stars. Most of the LT frames (64 of 72) led to reasonable photometric results, and these usable frames were then combined on a nightly basis to produce r-band magnitudes at 28 epochs. For each object, the typical photometric error for an individual exposure was determined from the intra-night scatter of the magnitude values measured on the individual frames. These intra-night scatters were 0.007 mag (A), 0.025 mag (B), and 0.033 mag (S3); B is fainter than A by ∼ 1.3 mag (and only 1.1 arcsec away from the brightest image A), and S3 is fainter than B by ∼ 1.2 mag. The errors for combined frames were reduced by a factor of N^(1/2), where N = 2−3 is the number of individual exposures. After constructing the LT r-band brightness records, we merged this new dataset and the NOT R-band data at day 5236 (MJD−50 000; derived from the ALFOSC exposures on 8 February 2010) using an r − R_NOT offset of 0.153 mag. We also found an r − R_MAO offset of 0.489 mag, and merged the LT-NOT and the MAO data in 2001−2006. We remark that both magnitude offsets were calculated from the records of the non-variable star S3. The top panel of Fig. 4 shows the LT-NOT-MAO r-band light curves of the double quasar and the comparison star S3. The brightness changes of A and B are significantly greater than the observational noise level in the record of S3, which is appreciably fainter than both quasar images. In addition, the almost parallel behaviour of A and B indicates the presence of intrinsic variations. In Table 5 at the CDS, using the same format as Table 4, we include the r-SDSS magnitudes of A and B (and their errors) at 66 epochs over the period 2001−2016. Column 1 contains the observing dates (MJD−50 000), Cols. 2 and 3 give the magnitudes and magnitude errors of A, and Cols. 4 and 5 give the magnitudes and magnitude errors of B.
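The cross-camera merging described above reduces to measuring a magnitude offset from a non-variable comparison star observed in both systems and shifting one record accordingly. A minimal sketch (hypothetical array names; the actual offsets were derived from the star S3):

import numpy as np

def offset_from_star(star_mags_sys1, star_mags_sys2):
    # e.g. r-SDSS minus Bessel R magnitudes of a non-variable star
    return np.mean(star_mags_sys1) - np.mean(star_mags_sys2)

def merge(mjd1, mag1, mjd2, mag2, offset):
    # Shift the second record onto the system of the first and sort in time.
    mjd = np.concatenate([mjd1, mjd2])
    mag = np.concatenate([mag1, mag2 + offset])
    idx = np.argsort(mjd)
    return mjd[idx], mag[idx]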
Regarding the extrinsic signal in the r band, the DLC and the single-epoch differences are shown in the bottom panel of Fig. 4. It is apparent that the DLC values basically agree with the single-epoch differences close to them. However, it is not clear whether the microlensing variation that was observed in the period 1999−2006 (Jakobsson et al. 2005; Paraficz et al. 2006; Shalyapin et al. 2009) is complete or not. Although the DLC in 2009−2016 is roughly consistent with a quiescent phase of microlensing activity, the single-epoch differences suggest an active phase, in which the current r-band flux ratio could be similar to the single-epoch Mg ii flux ratio as measured in 2001 by Jakobsson et al. (2005).

QSO B0957+561

Optical photometry

We observed QSO B0957+561 in red passbands from 1996 to 2016, that is, for 21 years. We used the IAC80 Telescope during the first observing period (1996−2005), while we monitored the double quasar with the LT from 2005 to 2016. The IAC80-CCD frames in the R band for the period 1996−2001 were previously processed using the PHO2COM photometric task (Serra-Ricart et al. 1999; Oscoz et al. 2002). Here, we focus on the most recent IAC80-CCD R-band frames taken between January 1999 and November 2005, which are included in our database (see Table 3). The CCD camera covered an area of about 7 × 7 arcmin on the sky, with a pixel scale of ∼ 0.43 arcsec. This field of view allowed us to simultaneously image the AB components of the GLQ and the YXRFGEDH field stars (see the top panel of Fig. 5). The typical exposure time was 300 s, although longer combined exposures of 900−1200 s were also used. We selected 515 R-band frames with a reasonable size of the seeing disc, and then performed PSF photometry on the field stars and the lens system with the IMFITFITS software. As usual, the clean 2D profile of the H star was considered as the empirical PSF, and the lens system was modelled as two stellar-like sources (i.e. two PSFs) plus a de Vaucouleurs profile convolved with the PSF. For these R-band frames, we found evidence of inhomogeneity over the field of view and carried out a frame-to-frame inhomogeneity correction based on the idea by Gilliland & Brown (1988). When we performed photometry on frames of any lens system, we paid special attention to colour and inhomogeneity terms, as well as other atmospheric and instrumental effects (e.g. Shalyapin et al. 2008). We thus obtained new photometric data of QSO B0957+561 from 515 IAC80-CCD frames in the R band over a seven-year period in 1999−2005. After an S/N > 70 selection (the S/N values were calculated within circles of 7-pixel radius centred on the quasar image A), the number of frames was reduced to 441. To estimate typical photometric errors, we computed magnitude differences between consecutive nights. These night-to-night brightness variations led to an uncertainty of 0.014 mag for both A and B, meaning that photometry to 1.4% was achieved for the lensed quasar. If there were several measurements on the same night, they were grouped to obtain more accurate light curves. This grouping produced 367 magnitudes for each quasar image. We also rejected outliers, so that the new IAC80 dataset contains 347 epochs.
We note that it is important to merge the old IAC80 data (PHO2COM light curves in Fig. 1 of Oscoz et al. 2002) and the new IAC80 brightness records, as well as the new IAC80 data and the LT r-band light curves over 2005−2010. Before constructing a global IAC80 database in the R band, we applied an outlier detection, data cleaning, and intra-night grouping method to the old photometry. Thus, we found small R_new − R_old offsets of 0.004 mag in A and 0.031 mag in B, and merged the old (1996−2001) and the new (1999−2005) data. The shifted old magnitudes in 1999−2001 were then replaced by our new photometric data in that period. In addition, to transform the R-band magnitudes (R_new) into the r band of the SDSS photometric system, we found similar r − R_new offsets of −0.236 mag in A and −0.233 mag in B. After building the light curves in the r band until 2010 (942 epochs), the next step was to incorporate all the available data for the period 2011−2016. We fully processed and analysed the RATCam r-band frames between 2011 and 2014 (RATCam was decommissioned at the end of February 2014), which have provided magnitudes on 34 additional nights. We also obtained photometric outputs from 95 IO:O frames in the r band. Because this optical camera has a high sensitivity, the brightest stars in its field of view are often saturated. Hence, we used the H star to build the PSF in two-thirds of the frames, while the Y star was used in the remaining one-third of the frames (see the PSF1 and PSF2 stars in the top panel of Fig. 5). Following the standard selection procedure for IO:O data, we removed quasar magnitudes for three frames in which the S/N of image A was below 30. Fortunately, we did not detect any outlier, and thus added 91 new magnitude epochs (two frames were taken on the same night). The IO:O monitoring is characterised by a mean sampling rate of about one frame every 7−10 d, and this precludes an estimation of the variability on consecutive nights. For a given quasar image, we therefore made trios of consecutive data within time intervals shorter than 14.5 d. We performed a linear interpolation between the initial and final magnitudes in each trio to generate an interpolated magnitude at the same epoch as that of the central data point, and we then derived a typical photometric error by comparing measured and interpolated central magnitudes. This method produces reasonable errors of 0.010 mag in A and 0.013 mag in B, which are consistent with the uncertainty in the magnitude of the control star R (0.011 mag; A, B, and R have similar brightness).
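The trio-based error estimate just described can be summarised in a few lines: for each epoch whose neighbours span less than 14.5 d, the two outer magnitudes are linearly interpolated to the central epoch, and the scatter of the measured-minus-interpolated values yields a typical error. A sketch under these assumptions (any scaling between this scatter and the per-point error is glossed over):

import numpy as np

def trio_error(mjd, mag, max_span=14.5):
    resid = []
    for i in range(1, len(mjd) - 1):
        if mjd[i+1] - mjd[i-1] < max_span:
            frac = (mjd[i] - mjd[i-1]) / (mjd[i+1] - mjd[i-1])
            interp = mag[i-1] + frac * (mag[i+1] - mag[i-1])
            resid.append(mag[i] - interp)
    return np.std(resid)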
The IAC80-LT r-band light curves over the 21-year period 1996−2016 are available at the CDS: the format of Table 6 is similar to those of Tables 4 and 5, including r-SDSS magnitudes and errors of A and B at 1067 epochs (MJD−50 000). These records are shown in the middle and bottom panels of Fig. 5, which indicate that the variability of the GLQ has increased over the last data decade. While the quasi-constancy of the delay-corrected flux ratio B/A in red passbands between 1987 and 2007 (dates of the trailing image B) is well known, with B/A ∼ 1.03 (e.g. Oscoz et al. 2002; Shalyapin et al. 2008), there is some controversy about the behaviour of the flux ratio in more recent years, when higher variability occurred (Hainline et al. 2012; Shalyapin et al. 2012). Using LT-USNO r-band data, Hainline et al. (2012) suggested the existence of a slow increase in B/A over days 4100−5700 (epochs of B). However, a reanalysis of LT-USNO data revealed an oscillating behaviour between days 4100 and 5400 (see the rectangle with dashed sides in the top panel of Fig. 6) that calls into question the presence of a microlensing gradient during these epochs. To try to remedy this problem, we derived the r-band flux ratio in 20 time segments of B covering the full photometric monitoring campaign from the IAC80-LT light curves (see Appendix A.1). In the top panel of Fig. 6, we present the long-term evolution of B/A (see Table A.1), where the t_B values are the average epochs of the overlapping periods between the time-delay-shifted flux record of A and the time segments of B. The error bars represent formal 2σ confidence intervals, and the grey highlighted rectangle is the 2σ measurement of the r-band B/A from HST spectra in 1999−2000 (Goicoechea et al. 2005). Although we detect a low-amplitude variability over the first 5000 days of data, the values of B/A are outside the HST band from day 6000 onwards. Hence, although the DLC in Hainline et al. (2012) likely has a biased shape, the new results support their claim that a microlensing event occurred in recent years. An extrinsic event (decrease in B/A) with similar amplitude has only been detected in the first years of the 1980s (e.g. Pelt et al. 1998), so this new accurately measured fluctuation offers a unique opportunity to unveil physical properties of the source and the primary lensing galaxy. In the bottom panel of Fig. 6, we also show the lack of correlation between B/A and the average flux ⟨B⟩. Based on four measurements of B/A from LT observations in 2005−2010 (see the rectangle with dashed sides in the top panel), we had previously found evidence of a correlation between flux ratio and variability of B. However, from this larger collection of data, we did not find any clear B/A − σ_B relationship, where the σ_B values are the standard deviations of the flux of B in the overlapping periods between A(+420 d) and B.

Imaging polarimetry

As part of a pilot programme to probe the suitability of the main instruments on the LT for studying GLQs, we also conducted polarimetric and spectroscopic monitoring of QSO B0957+561 with the 2m robotic telescope. For polarimetric follow-up observations, we used the imaging polarimeters RINGO2 and RINGO3. The basic idea behind these instruments is to take eight consecutive exposures of the same duration at eight different rotor positions of a rapidly rotating polaroid. The data are then stacked for each rotor position to produce eight final frames in a given optical band (e.g. Jermak 2016, and references therein). Combining photometric measurements in the eight frames allows the polarisation to be determined (Clarke & Neumayer 2002). RINGO2 saw first light in June 2009 and was decommissioned in October 2012. This optical polarimeter used an EMCCD composed of 512×512 pixels (pixel scale of ∼ 0.45 arcsec) and a hybrid V+R filter covering the wavelength range 460 to 720 nm. We obtained 8×200 s frames on each of the first two nights of observation (21 December 2011 and 13 January 2012), and performed slightly longer imaging-polarimetry observations (8×300 s frames) on 23 January 2012 and 26 March 2012. The data of 13 January 2012 are not usable because the PSF is very elongated along a specific direction as a result of tracking problems. The RINGO3 multicolour polarimeter was brought into service in January 2013, and it incorporates a pair of dichroic mirrors that split the light into three beams to simultaneously obtain exposures in three broad bands using three different 512×512 pixel EMCCDs (Arnold et al. 2012): B (350−640 nm), G (650−760 nm), and R (770−1000 nm).
We obtained useful data (8×300 s frames with each EMCCD) on 16 out of 18 observing nights over the period 2013−2017. As polarimetric observations were interrupted in late February 2017, we analysed all available frames, even those not yet included in our GLQ database. In Appendix A.2, we present details on the reduction of the RINGO2 and RINGO3 observations of QSO B0957+561. After removing the main instrumental biases, the Stokes parameters (q_A, u_A) and (q_B, u_B) at different observing epochs are depicted in Fig. A.3 and Fig. A.5. Although the construction of polarisation curves of the two quasar images is a very attractive possibility, we should first check whether the scatters in these q − u diagrams are caused by true variability. Accordingly, scatters in the parameter distributions of the quasar images were compared to scatters in the distributions of Stokes parameters for the non-variable field stars E and D (see the top panel of Fig. 5). We concentrated on RINGO2 and RINGO3/B data, which are based on the best observations in terms of S/N, and deduced that deviations from the mean values in Fig. A.3 and the top panel of Fig. A.5 are essentially due to random noise. Thus, for a given observational configuration (polarimeter and optical band), the polarisation of each image is characterised by mean values (q̄, ū) and standard errors (σ_q̄, σ_ū). The polarisation degree and polarisation angle were derived as PD = (q̄^2 + ū^2)^(1/2) and PA = 0.5 tan^(−1)(ū/q̄), respectively (e.g. Clarke & Neumayer 2002). We also estimated a common random error in q̄ and ū of both images (σ_pol) through the average of the four standard errors, and obtained σ_PD = σ_pol and σ_PA = 0.5 (σ_pol/PD) from a standard propagation of uncertainties. Table 7 includes our main results for the two quasar images in the four observational configurations: RINGO2, RINGO3/B, RINGO3/G, and RINGO3/R. We note two important details: first, there are only three individual observations from RINGO2, and with just three data points (q, u) for each image, the σ_pol value is 50% uncertain. To account for this extra uncertainty, the errors in the first data row of Table 7 are increased by 50%. Second, as the PD of weakly polarised sources is systematically overestimated (e.g. Simmons & Stewart 1985), we also report the corrected polarisation degree (PD_corr). For PD/σ_PD lower than or similar to 1, the best estimate of the actual polarisation amplitude is zero: PD_corr = 0 (e.g. Simmons & Stewart 1985). Otherwise, we use the estimator described by Wardle & Kronberg (1974). The results in Table 7 indicate that the polarisation of QSO B0957+561 has remained at low levels during the 5.2-year polarimetric follow-up. Contrary to what was proposed to explain certain time-domain observations of the first GLQ (see Sec. 3.1.3), we have not found evidence for high-polarisation states in epochs of violent activity. While the RINGO3 data of B are consistent with zero polarisation (or PD ≤ 0.3% from the weighted average over the three bands), the data of A suggest a polarisation amplitude of about 0.5%. The detection of this 0.5% polarisation in A (which could depend on wavelength; see the PD_corr values in the third column of Table 7) deserves more attention. Before this work, Wills et al. (1980) conducted polarimetric observations of QSO B0957+561 using unfiltered white light. They reported PD = 0.7 ± 0.4% (PD_corr = 0.6%) for A and PD = 1.6 ± 0.4% (PD_corr = 1.5%) for B. Therefore, the current polarisation degree of image A agrees well with the 1980 value of Wills et al., while the current PD of image B does not. Dolan et al. (1995) also studied the polarisation of A and B in the UV. However, their HST data led to large uncertainties of ∼ 1.5% in the PD of both images, and no reliable detection was obtained.
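For reference, the polarisation quantities used above follow from the mean normalised Stokes parameters in a few lines. A sketch including the zero-clipping for low-significance measurements and the Wardle & Kronberg (1974) debiasing (the function arguments are hypothetical names):

import numpy as np

def polarisation(qbar, ubar, sigma_pol):
    pd = np.hypot(qbar, ubar)                      # PD = (q^2 + u^2)^(1/2)
    pa = 0.5 * np.degrees(np.arctan2(ubar, qbar))  # PA = 0.5 arctan(u/q)
    sigma_pd = sigma_pol
    sigma_pa = 0.5 * np.degrees(sigma_pol / pd)
    # For PD/sigma_PD <~ 1 the best estimate is zero polarisation; otherwise
    # subtract the noise bias in quadrature (Wardle & Kronberg 1974).
    pd_corr = 0.0 if pd <= sigma_pd else np.sqrt(pd**2 - sigma_pd**2)
    return pd, pd_corr, pa, sigma_pd, sigma_pa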
Spectroscopy

Our spectroscopic monitoring of QSO B0957+561 includes many observing epochs with FRODOSpec on the LT. This spectrograph is equipped with an integral field unit that consists of 12×12 lenslets, each 0.83 arcsec on the sky, covering a field of view of 9.84×9.84 arcsec. However, the FRODOSpec programme in 2010−2014 was not as successful as expected, and only ∼ 25% of the 2500−2700 s exposures led to usable spectra of both quasar images. The difficulty of placing, in robotic mode, two sources separated by 6.1 arcsec within a square of side ∼ 10 arcsec was one of the main reasons for the relatively low efficiency of the IFS monitoring. We obtained 16 reasonably good individual exposures with FRODOSpec, and one of them (on 1 March 2011) was exhaustively analysed by Shalyapin & Goicoechea (2014b). This paper addressed the whole processing method we used to obtain flux-calibrated spectra of sources in crowded fields from FRODOSpec observations (the associated L2LENS software is available at http://grupos.unican.es/glendama/LQLM_tools.htm). For both the IFS and the LSS (in this subsection and in Sect. 3.2.4), we almost always compared quasar spectral fluxes averaged over the g and/or r passbands with corresponding concurrent fluxes from RATCam/IO:O frames. This comparison has permitted us to check the initial calibration of the spectra and to recalibrate them when required. Here, we concentrated on the Mg ii emission at 2800 Å observed at red wavelengths, because the red grating of the integral-field spectrograph provides the highest S/N values. The red grating spectra of A, B, and the primary lensing galaxy (GAL) are available at the CDS: Table 8 includes wavelengths (Å) along with fluxes of A, B, and GAL (10^−17 erg cm^−2 s^−1 Å^−1) for each of the 16 observing dates (yyyymmdd). We conducted additional LSS, with the long slit in the direction joining A and B. At each wavelength bin, the spectroscopic data along the slit were fitted to a 1D model consisting of two Gaussian profiles with a fixed separation between them. This procedure provided spectra for A and B. As the data were not taken along the parallactic angle, differential atmospheric refraction (DAR) produced chromatic offsets of both quasar images across the slit (Filippenko 1982), and thus wavelength-dependent slit losses. We assumed that the two sources were exactly centred on the slit at ∼ 6200 Å (acquisition frame in the r band), and then derived the DAR-induced slit losses and corrected the original spectra. Using g-band and/or r-band fluxes from RATCam/IO:O frames (see above), we also accounted for weak spectral contamination by GAL. Observations with SPRAT at five epochs between 2015 and 2017 (the last two are not included in the current version of the GLQ database) were also used, so that the Mg ii line was studied at 21 epochs in total. The SPRAT spectra show the Mg ii and C iii] emission lines, as well as several absorption features (see Fig. 7). Table 9 at the CDS is structured in the same manner as Table 8, but incorporates the fluxes of A and B from the observations with SPRAT.
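At each wavelength bin of the LSS data, the extraction above amounts to a double-Gaussian fit with a fixed A-B separation. A minimal sketch of that per-bin fit (DAR-induced slit losses and galaxy contamination, discussed above, are ignored here):

import numpy as np
from scipy.optimize import curve_fit

def extract_ab(x, profile, sep_ab):
    # Two Gaussians of common width, separated by the known A-B distance.
    def model(x, fa, fb, x0, w):
        ga = np.exp(-0.5 * ((x - x0) / w) ** 2)
        gb = np.exp(-0.5 * ((x - x0 - sep_ab) / w) ** 2)
        return fa * ga + fb * gb
    p0 = (profile.max(), 0.5 * profile.max(), float(np.argmax(profile)), 2.0)
    (fa, fb, x0, w), _ = curve_fit(model, x, profile, p0=p0)
    return fa, fb   # fluxes of A and B in this wavelength bin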
Although the NOT/ALFOSC spectroscopic data in 2009−2013 (four epochs) contain Mg ii, C iii], and C iv emission features, the Mg ii line is near the red edge of the NOT/ALFOSC/#7 spectra. We were not able to accurately calibrate the NOT/ALFOSC spectroscopy of 29 January 2009, and therefore we only extracted usable spectra at three epochs. These NOT/ALFOSC data are presented in Tables 10 (grism 7) and 11 (grism 14) at the CDS, using the same format and units as Table 9. In addition, we were unfortunately unable to infer reliable results for any emission line from the INT/IDS spectroscopy of 31 March 2008 because of poor atmospheric seeing. In Appendix A.3, we analyse the profiles, fluxes, and single-epoch flux ratios of the Mg ii, C iii], and C iv emission lines. The single-epoch Mg ii flux ratios are marked with red circles in Fig. 8. We also show their average value (dashed red line) and standard deviation (red band). The average flux ratio is (B/A)_MgII = 0.77 ± 0.02, in good agreement with the macrolens (radio core) flux ratio (0.75 ± 0.02; Garrett et al. 1994) and the first estimation of the delay-corrected Mg ii flux ratio (0.75 ± 0.02; Schild & Smith 1991). Although the red circles in Fig. 8 come from fluxes of A and B that are not separated by the time delay between the two images, the distribution of single-epoch flux ratios can be used to determine the delay-corrected value of B/A. The unaccounted line variability yields biases in both directions (underestimates and overestimates), and thus generates a random noise. As we only have three measures of the C iv flux ratio (see the blue triangles in Fig. 8), (B/A)_CIV = 0.91 ± 0.09 could be a biased estimator of the delay-corrected value. However, the statistical result based on about ten observing epochs of the C iii] line is noteworthy (see the green squares, the dashed green line, and the green band in Fig. 8, as well as Tables A.2 and A.3): it agrees with previous emission-line flux ratios (e.g. Motta et al. 2012), as well as with the macrolens flux ratio of ∼ 0.75. Therefore, although the behaviour of the C iv flux ratio during the initial phase of the ongoing microlensing event in the continuum is not a clear matter (see above), there is strong evidence that the Mg ii and C iii] emitting regions have not been affected by dust extinction and microlensing during the past 30 years.

Deep NIR imaging

NIR frames of QSO B0957+561 were obtained on 28 December 2007 with the TNG using the instrument NICS in imaging mode. All frames were taken with the small-field camera, which provides a pixel scale of 0.13 arcsec and a field of view of 2.2×2.2 arcmin. We also used three different filters JHK covering the spectral range 1.27−2.20 µm, that is, ∼ 5260−9110 Å in the quasar rest frame, and combined individual exposures in each passband to produce final frames with subarcsec spatial resolution. The left panels of Fig. 9 show the strong-lensing region encompassing the three science targets A, B, and GAL. The total exposure times are 3120, 2200, and 4500 s in the J, H, and K bands, respectively. Each combined frame of QSO B0957+561 incorporates the lens system and the bright reference star H (which does not appear in Fig. 9), so that the PSF can be finely sampled and an accurate PSF-fitting photometry of the lens system can be performed. As usual, the lens system was modelled as two PSFs (A and B) plus a de Vaucouleurs profile convolved with the PSF (GAL).
We then determined the structure parameters of the galaxy by setting the positions of B and GAL (relative to A) to those derived from HST data in the H band (Keeton et al. 2000). These IMFITFITS structure parameters are shown in Table 12 (in its notes, r_eff, e = 1 − b/a, and θ_e are the effective radius, ellipticity, and position angle of the major axis, measured east of north, of the de Vaucouleurs profile of the lensing galaxy, while GAL, A, and B are the calibrated brightnesses of the galaxy and the quasar images). The J-band size (r_eff) is similar to the optical size (Keeton et al. 1998), and the galaxy is more compact at longer wavelengths. Additionally, the e and θ_e values in the NIR almost coincide with previous optical estimates at isophotal radii > 1 arcsec (Bernstein et al. 1997; Fadely et al. 2010). We note that our solution in the H band differs from the H-band photometric structure reported by Keeton et al. (2000), since we obtain higher values of r_eff, e, and θ_e. In Fig. 9, we display the residual instrumental fluxes after subtracting A+B (middle panels) and A+B+GAL (right panels); the right panels contain arc-like residues resembling the host-galaxy light distribution from HST H-band observations. Using the JHK_s magnitudes of the H star in the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) database, we also inferred magnitudes for the galaxy and the two quasar images (see the last three rows in Table 12). Unfortunately, we failed to obtain additional NIR data in early 2009, and thus delay-corrected flux ratios could not be probed. However, Keeton et al. (2000) measured a single-epoch flux ratio (B/A)_H = 0.93 from observations in 1998, and using data acquired about ten years later, we obtain (B/A)_H = 0.94 (see Table 12). This suggests that the single-epoch flux ratio in the H band is stable on long timescales and might be a rough estimator of the delay-corrected value of B/A. Based on the magnitudes in Table 12, the NIR flux ratios of QSO B0957+561 vary from 1 (J band) to 0.9 (K band), decreasing as the wavelength increases. For comparison purposes, we also analysed mid-IR (MIR) observations in the data archive of the Spitzer Space Telescope. We found two Spitzer combined frames including QSO B0957+561, each corresponding to a 202 s exposure with MIPS at 24 µm, and the fluxes of the quasar images in both frames were extracted using the MOPEX package. The average fluxes A = 12135 µJy and B = 8932.5 µJy lead to a 24 µm flux ratio of 0.74, in very good agreement with the radio and emission-line flux ratios. We remark that the radiation observed at 24 µm is emitted at ∼ 10 µm from a dusty torus surrounding the AD and broad-line emitting regions (Antonucci 1993), and passes through the lens galaxy with a wavelength of 17.6 µm. Thus, this radiation is insensitive to extinction and microlensing.

Overview

After accumulating data for many years, we are gaining a clearer perspective of the physical processes at work in QSO B0957+561. From a 5.5-year optical monitoring of the two quasar images, we found oscillating behaviours of the delay-corrected flux ratios in the g and r bands, with maximum values of B/A when the flux variations are greater. This result was crudely related to the presence of dust along the line of sight towards image A (within the lensing galaxy) and the emission of highly polarised light during episodes of violent activity. Here, we present r-band light curves covering 21 years of observations (from 1996 to 2016), which allow us to better understand the long-term evolution of (B/A)_r. Based on these longer-lasting brightness records, (B/A)_r does not seem to be correlated with flux level or flux variation, meaning that violent activity is not the unique driver of changes in (B/A)_g and (B/A)_r (see below).
In addition, a 5.2-year optical polarimetric follow-up does not show any evidence for high polarisation degrees when large flux variations occur. In fact, PD < 1% during the entire monitoring period. Shalyapin et al. (2012) also reported a chromatic time delay between A and B. They used a standard cross-correlation in the g and r bands, and their results were interpreted as being due to chromatic dispersion of the A image light by a dusty region inside the lensing galaxy. However, while the chromaticity of the time delay derived from standard techniques does not seem arguable, its interpretation is very likely incorrect (see the second paragraph in Sec. 3.1.3). Very recently, Tie & Kochanek (2018) proposed that measured delays of a GLQ may contain microlensing-induced contributions of a few days, which would depend on the position of the AD across microlensing magnification maps. A microlensing-based interpretation of the approximately three-day difference between the delays in the g and r bands is unlikely, however. The time delays of 417 d (g band) and 420 d (r band) are consistent with data from two independent experiments separated by ∼ 15 years (Kundić et al. 1997), and thus microlensing does not seem to play a relevant role. A more plausible scenario may account for a few observed "anomalies" in QSO B0957+561 without a need to invoke highly polarised emission phases, the existence of exotic dust, or complex microlensing effects that persist over decades. The UV-visible-NIR continuum observed in the quasar comes from the direct UV-visible emission of the AD and the diffuse UV-visible light emitted by broad-line clouds (BLC), and this last contribution could be relatively significant (e.g. Korista & Goad 2001, and references therein). The continuum of the BLC includes scattered (Rayleigh and Thomson) and thermal (Balmer and Paschen recombination) radiation, and high-density gas clouds are particularly efficient in producing a diffuse component (Rees et al. 1989). From a wider perspective, heavily blended iron lines also produce a pseudo-continuum in quasar spectra (e.g. Wills et al. 1985; Maoz et al. 1993), and recent research has provided evidence for two different emitting regions in QSO B1413+117 (Sluse et al. 2015). The compact emission of this quasar is probably scattered by electrons and/or dust in an extended region. HST spectra of QSO B0957+561 in April 1999 and June 2000 allowed us to construct delay-corrected continuum flux ratios B/A at UV-visible-NIR wavelengths during a period of low quasar activity and microlensing quiescence (Goicoechea et al. 2005). These data and the HST emission-line ratios are consistent with a simple picture: the direct light of the A image is affected by a compact dusty region in the intervening cD galaxy (see the first paragraph in Sect. 3.1.3 and the new results in this section), which is adopted here for further discussion. As a result, regarding the continuum observed in the two quasar images, the diffuse contribution plays a more important role in A because its direct light is partially extinguished by dust. In the absence of extended diffuse light, the continuum flux ratio at the observed wavelength λ is given by B_λ(t)/A_λ(t − Δt) = 0.75/ε_λ, where 0.75 is the macrolens ratio and ε_λ is the dust extinction law.
However, assuming that T_λ is the light-travel time (in the observer's frame) between the direct compact source and the clouds emitting the observed radiation, as well as a diffuse-to-direct emission ratio δ_λ << 1, ε_λ should be replaced by an effective extinction ε_λ^eff. During low-activity periods, B_λ(t − T_λ)/B_λ(t) ∼ 1 and spectral anomalies in B/A are due to peaks in δ_λ, i.e. ε_λ^eff ∼ ε_λ(1 − δ_λ) + δ_λ. Thus, the apparent distortion of the 2175-Å extinction bump (observed around 2960 Å) is reasonably related to the Lyα Rayleigh scattering feature, while the flattening at NIR wavelengths would be (at least partially) due to a large amount of diffuse emission at ∼ 3000−4000 Å (around the Balmer jump; e.g. Korista & Goad 2001). The toy model outlined in the previous paragraph can also be used to discuss some time-domain anomalies. Considering a central epoch in the time segment TS4 (e.g. day 5300 in that episode of violent variability; see Appendix A.1), we estimated B_r(t − T_r)/B_r(t) ∼ 0.90−0.95 if T_r ∼ 50−100 d (e.g. Guerras et al. 2013). As the diffuse-light contribution to ε_r^eff decreases by ∼ 5−10% (with respect to low-activity periods), this may produce the 4% increase observed in (B/A)_r at day 5300. Moreover, the diffuse-light term in the effective extinction decreases by about 9−17% at the r-band flux peak, in reasonable agreement with the measured increase of 9% in the flux ratio at day 5370. Therefore, the model is also able to explain the flux-ratio anomalies during sharp intrinsic variations of flux. Despite this success of the AD + BLC scenario in accounting for some local fluctuations in (B/A)_r, a microlensing-induced variation has been taking place in recent years. The time evolution of the flux ratios in the g and r bands is thus a powerful tool to constrain the sizes of the g- and r-band continuum sources, and to compare microlensing-based measures with reverberation mapping ones. A central EUV light source most likely drives the variability of QSO B0957+561. Although this ionising radiation cannot be observed directly (e.g. Michalitsianos et al. 1993), its variations are thermally reprocessed within the AD to generate fluctuations in the UV emission that are observed in the visible continuum. These EUV and UV variations are also reprocessed in the BLC, but extended regions respond later and less coherently than compact ones. Even though detailed simulations are required to obtain a realistic description of the BLC emissivity (e.g. using CLOUDY models; Ferland et al. 1998; Ferland 2003), we again used the toy model to gain some insight into the delays expected in the AD + BLC scenario. For instance, in the g band, the flux of A at t − Δt can be related to the fluxes of B at t and t − T_g, that is, A_g(t − Δt) ≈ 1.33 ε_g B_g(t) + 1.33(1 − ε_g) δ_g B_g(t − T_g). However, when estimating the time delay Δt from a standard cross-correlation, we are implicitly assuming a linear relationship A_g(t − Δt) = αB_g(t) + β. The key point is that the contamination by diffuse light from the gas clouds does not produce a constant added term (β), but a variable one, β(t) ∝ B_g(t − T_g). The linear law does not hold here, so we indeed derive an effective time delay Δt_g = Δt − τ_g instead of the true achromatic value. The amount of deviation (τ_g > 0) depends on the relative weight and the degree of variability of the delayed signal β(t). As the relative extinction (1 − ε)/ε and the variability of B increase toward shorter wavelengths, it is not at all surprising to measure a lag Δt_r − Δt_g = τ_g − τ_r ∼ 3 d.
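The bias described in the last paragraphs is easy to reproduce numerically. In the following toy experiment (not the analysis performed on the real data; the T, ε, and δ values are assumed for illustration), image A is built as a direct term plus a delayed diffuse echo of B, and a standard cross-correlation then recovers a delay that falls short of the true 417 d:

import numpy as np

rng = np.random.default_rng(7)
n, dt, T = 4000, 417, 80
eps, delta = 0.75, 0.3

# Smoothed random walk as the intrinsic daily light curve of B.
b = np.convolve(np.cumsum(rng.normal(size=n)), np.ones(40) / 40, mode="same")

# A(t) = 1.33*eps*B(t+dt) + 1.33*(1-eps)*delta*B(t+dt-T), as in the toy model.
t = np.arange(n - dt)
a = 1.33 * (eps * b[t + dt] + (1 - eps) * delta * b[t + dt - T])

def cc(lag):
    m = n - max(dt, lag)
    return np.corrcoef(a[:m], b[lag:lag + m])[0, 1]

lags = np.arange(380, 440)
print(lags[np.argmax([cc(L) for L in lags])])   # peaks below the true 417 d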
SDSS J1001+5027

The COSMOGRAIL collaboration monitored the two images of SDSS J1001+5027, measuring a time delay of about four months (Rathna Kumar et al. 2013). We therefore interrupted the LT photometric monitoring of the GLQ (whose primary goal is determining the time delay) and started spectroscopic follow-up observations separated by four months. The first successful spectra of SDSS J1001+5027 were obtained using FRODOSpec on 7 November 2013, 8 February 2014, and 26 March 2014. We did not obtain data in early June 2014, and thus, by means of FRODOSpec observations, only one comparison between A and B is possible at the same emission time. We took a single 3000 s exposure on each of the three nights. The blue grating signal was very noisy, therefore only red grism data were considered of astrophysical interest. We also observed the GLQ with SPRAT on several nights. The 1.8 arcsec (∼ 4 pixel) wide slit was oriented along the line joining A and B, and we used the blue grating mode. We initially checked the feasibility of the SPRAT programme by taking 3×300 s exposures on 26 February 2015. After analysing these tentative data, we decided to use longer exposures (4×600 s) per observing run. We then obtained spectroscopic frames at four additional epochs: 2 December 2015, 5 April 2016, 6 December 2016, and 5 April 2017, but the data from the last epoch are not usable. Spectra of the two quasar images were extracted by setting their angular separation to that reported by Rusu et al. (2016) and fitting two 1D Gaussian profiles (corrections for DAR-induced spectral distortions and flux calibrations are outlined in the subsection on spectroscopy in Sect. 3.2.3). Our global dataset contains information on the C iv, C iii], and Mg ii emissions at z = 1.838 (Oguri et al. 2005). The FRODOSpec red-grating spectra can be downloaded from the CDS: Table 13 includes wavelengths (Å) along with fluxes of A and B (10^−17 erg cm^−2 s^−1 Å^−1) for each of the three observing dates (yyyymmdd). The SPRAT data appear in Table 14 at the CDS, using the same data structure as Table 13. All usable LT spectra at seven different epochs are also shown in Fig. 10 (FRODOSpec red-grating and SPRAT spectra of SDSS J1001+5027 in 2013−2016), where the FRODOSpec spectral energy distributions are smoothed with a three-point filter to reproduce the 4.6-Å bins of SPRAT. Using previous simultaneous spectra of A and B, Oguri et al. (2005) noted that the C iv flux ratio (B/A)_CIV was significantly higher than the continuum flux ratio at 1549 Å in the quasar rest frame. Moreover, the single-epoch continuum flux ratios were higher at longer wavelengths. In Appendix B, we analyse the new spectroscopy of SDSS J1001+5027, highlighting the results on the Mg ii, C iii], and C iv emissions, and the continuum fluxes at the rest-frame wavelengths of the three emission lines (see Tables B.1, B.2, and B.3). From the single-epoch flux ratios in Table B.3, we infer (B/A)_CIV = 0.78 ± 0.07 and (B/A)_cont = 0.52 ± 0.01 at 1549 Å. We also report that (B/A)_cont grows from 0.52 at 1549 Å to 0.78 at 2800 Å, and this growing trend becomes clearly apparent in Fig. 11 (coloured circles). The new continuum and emission-line flux ratios (coloured triangles represent the results for the emission lines) essentially agree with the findings of Oguri et al. (2005). When statistical studies are made with only a few data points, the standard deviation of the standard deviation (i.e. the uncertainty in the uncertainty) is large. Here, to account for this additional uncertainty when N = 4 (carbon lines) and N = 3 (magnesium line), the standard deviations of the means were increased by 40 and 50%, respectively. These error-bar enlargements are also useful to account for possible variability effects. Hence, we consider the coloured symbols (graph markers) in Fig. 11 as a proxy for the delay-corrected flux ratios.
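The 40% and 50% enlargements follow from a standard result: for approximately Gaussian data, the relative 1σ uncertainty of a sample standard deviation is ≈ 1/[2(N − 1)]^(1/2). A two-line check:

import numpy as np
for N in (3, 4):
    print(N, 1.0 / np.sqrt(2 * (N - 1)))   # 0.50 for N = 3, ~0.41 for N = 4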
At present, our observations on 7 November 2013, 26 March 2014, 2 December 2015, and 5 April 2016 lead to a single value for each delay-corrected ratio, which is not depicted in Fig. 11 because it might be strongly biased. The emission-line flux ratios are in reasonable accord with the continuum flux ratios at the longest wavelengths, that is, (B/A)_cont at 2800 Å and (B/A)_K, so that the macrolens flux ratio may lie within the grey rectangle depicted in this figure. In this scenario, the behaviour of the continuum flux ratios at shorter wavelengths (smaller emitting regions) requires a convincing explanation. The observed positive slope in (B/A)_cont (see the solid line in Fig. 11) could be primarily due to chromatic microlensing (e.g. Mediavilla et al. 2009). However, a similar behaviour would arise (see Fig. 11) if the R-band magnitudes of the B image were contaminated by an unknown source with a non-variable flux. Then, assuming that the contamination increases with wavelength, (B/A)_cont could be almost constant at 1500−3000 Å (dotted horizontal line). The origin of the observed flux ratios of SDSS J1001+5027 merits further study.

QSO B1413+117

The RATCam r-band photometry in February−July 2008 (33 epochs) was presented in Table 1 of Goicoechea & Shalyapin (2010). In this section, we describe additional LT r-band observations, outline the most relevant processing tasks, and obtain new light curves of the four quasar images (A-D). The LT data archive includes usable frames for three observing nights with RATCam in May−June 2006. For each of these nights, we found 6−9 decent 100 s exposures. While the quasar images are bright (r ∼ 18 mag), the main lensing galaxy is a very faint source (r ≥ 23; Kneib et al. 1998). Therefore, the fluxes in the crowded region of each individual frame were extracted using the IMFITFITS software and setting a simple photometric model consisting of four close PSFs. The empirical PSF was derived from the S45 field star, which was also taken as the reference for differential photometry (see the finding chart in Fig. 1 of Goicoechea & Shalyapin 2010). For each quasar image and for the control star S40, we combined as a last step the individual magnitudes on each night into a single mean value and inferred its error from the standard deviation of the mean. Our GLQ database also contains acceptable-quality IO:O observations obtained on 59 nights throughout the period 2013−2016. The two consecutive 250 s exposures per observing night were processed separately with IMFITFITS, and averaged magnitudes of the quasar images (and the star S40) were then computed. To construct accurate light curves in 2013−2016, the selection criteria were the same as those applied to the previous monitoring campaign in 2008: FWHM < 1.5 arcsec and S/N > 150.
This selection procedure removed 29 out of 59 epochs, leaving 30 epochs with high-quality data. We note that the statistical properties of the seeing (⟨FWHM⟩ = 1.17 arcsec and σ_FWHM = 0.15 arcsec) and of the S/N of S40 (⟨S/N⟩ = 207 and σ_S/N = 42) roughly coincide with the corresponding means and standard deviations for the 33 data epochs selected in 2008 (⟨FWHM⟩ = 1.16 arcsec, σ_FWHM = 0.17 arcsec, ⟨S/N⟩ = 201, and σ_S/N = 25), so that we have adopted the typical uncertainties in our previous brightness records as average photometric errors. We then used weighting factors ⟨S/N⟩/(S/N) to estimate errors on every night (e.g. Howell 2006). The new photometric data are included in Table 15 at the CDS: Column 1 lists the observing date (MJD−50 000), and Cols. 2−3, 4−5, 6−7, and 8−9 give the magnitudes and magnitude errors of A, B, C, and D, respectively. The LT r-band brightness records of the quadruple quasar QSO B1413+117 and the field star S40 are also illustrated in Fig. 12, which shows an ∼ 0.3 mag intrinsic brightening of the four quasar images between the periods 2006−2008 and 2013−2016. We also built DLCs of QSO B1413+117. Using the time delays and the magnitude offsets between the fainter images (B-D) and A (Goicoechea & Shalyapin 2010), we first derived magnitude- and time-shifted light curves of B-D. To obtain the DLCs in the top panel of Fig. 13, the original light curve of image A was then subtracted from these shifted brightness records. We only computed magnitude differences from BA, CA, and DA pairs separated by ≤ 7 d. In 2006, the DLC for the images D and A shows an evident deviation of ∼ 0.1 mag from its zero mean level in 2008. Although this could be interpreted as a typical microlensing gradient of about 10^−4 mag d^−1 (e.g. Gaynullina et al. 2005; Fohlmeister et al. 2007; Shalyapin et al. 2009), data in Fig. 7 of Akhunov et al. (2017) make it possible to carry out a more detailed analysis. In 2013−2016, we also clearly detect an average deviation of ∼ 0.1 mag in the DLC for the images B and A. Thus, we find evidence of microlensing activity between 2008 and 2013−2016. The recent DLCs, after subtracting their mean values, are plotted in the bottom panel of Fig. 13. A prominent gradient between days 7100 and 7200 is simultaneously observed in the three difference curves, which indicates the existence of a significant microlensing variation in image A during the first half of 2015. Unfortunately, there is a long gap around day 7000, so we do not have information about the overall shape of this microlensing event. The DLCs only put constraints on its amplitude (≥ 0.1 mag) and duration (ranging from one month to one year). While previous studies have reported microlensing episodes in the optical continuum of image D (e.g. Østensen et al. 1997; Anguita et al. 2008; Sluse et al. 2015; Akhunov et al. 2017), our DLCs demonstrate that other quasar images have also been affected by microlensing over the last ten years.

QSO B2237+0305

QSO B2237+0305 has been monitored at optical wavelengths over more than 20 years by several large collaborations (Corrigan et al. 1991; Østensen et al. 1996; Woźniak et al. 2000; Alcalde et al. 2002; Schmidt et al. 2002; Vakulik et al. 2004; Udalski et al. 2006; Eigenbrod et al. 2008). To obtain a better perspective on the variability of QSO B2237+0305, we analysed additional r-band frames collected at the LT in 2006. These publicly available materials are not incorporated in the current version of the GLENDAMA archive and correspond to an independent, short-term LT programme.
The star α (Corrigan et al. 1991) was used to estimate the S/N in all frames, as well as the PSF in many of them. However, when the brighter star γ was within the field of view and not saturated, we took this star to describe the PSF (the star γ is located 95 arcsec south of the lens system and is also called star 1 in Moreau et al. 2005). We performed PSF-fitting photometry on the lens system and field stars. In the strong-lensing region, the photometric model consisted of four point-like sources (A-D) and a de Vaucouleurs profile convolved with the PSF (lensing galaxy). We set the positions of B-D and the galaxy (relative to A) to those derived from HST data in the H band (e.g. Table 1 of Alcalde et al. 2002), and applied the IMFITFITS code to the best frames in terms of seeing and S/N. The mean values obtained for the parameters of the de Vaucouleurs profile (r_eff = 4.72 arcsec, e = 0.40, and θ_e = 64°) were in close agreement with the structure parameters from the best GLITP frames in the R band (Table 2 of Alcalde et al. 2002). In a last step, we applied the code to all frames, setting the relative positions and the galaxy properties. The individual photometric results were then averaged on a nightly basis to calculate r-band magnitudes at 185 selected epochs: 16 in 2006, 94 in 2007−2009, and 75 in 2013−2016 (including the complete 2016 season). Throughout the period from 2007 to 2016, we only discarded 25 epochs (nights) in which our quality requirements (FWHM < 1.75 arcsec and S/N > 100) were not met. Hence, we have robotically observed QSO B2237+0305 with an efficiency reaching almost 90%. Typical errors in the light curves of A-D and the field stars were estimated following the procedure at the end of Sect. 2 of Goicoechea & Shalyapin (2010). This led to uncertainties of 0.011, 0.027, 0.053, and 0.026 mag for the A, B, C, and D images, respectively. Errors at every epoch were computed by using two weighting factors. Apart from the ⟨S/N⟩/(S/N) ratio (e.g. Howell 2006), we also considered the ratio between the flux of each source and its mean value, so that the error in a high-flux state is smaller than that in a low-flux state. In more detail, the physically motivated flux factor was (⟨FLUX⟩/FLUX)^(1/2), and it played a significant role in determining uncertainties in the brightness records of C and D (see Fig. 14). The LT r-band dataset covering the period 2006 to 2016 (there is a gap of about 1300 d between the end of the first monitoring phase with RATCam and the beginning of the second phase with IO:O) is available in Table 16 at the CDS: Column 1 lists the observing date (MJD−50 000), and Cols. 2−3, 4−5, 6−7, 8−9, and 10−11 present the magnitudes and magnitude errors of A, B, C, D, and the control star, respectively. These photometric results are also displayed in Fig. 14. Depending on the star used as PSF tracer and photometric reference (γ or α), we took α or β (Corrigan et al. 1991) as the control star. Therefore, Table 16 (Col. 10) and Fig. 14 (black circles) show the magnitudes of the star α and the shifted magnitudes of the star β (taking an offset α_r − β_r ∼ 0.6 mag into account). Time delays in QSO B2237+0305 are shorter than four days (Vakulik et al. 2006), which allows for a direct comparison between the curves of the four quasar images. We find sharp, uncorrelated brightness variations that are thought to be due to stars in the central bulge of the face-on spiral galaxy acting as a gravitational lens. These individual events and the full light curves, including additional microlensing variations, can be used to probe, among other things, the structure of the inner accretion flow in the distant quasar and the average stellar mass in the bulge of the nearby spiral galaxy (e.g. Shalyapin et al. 2002; Kochanek 2004). From 2009 onwards, the LT light curves are particularly relevant, since the OGLE collaboration stopped offering photometric data of the quadruple quasar at the beginning of the 2009 season.
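The per-epoch error model described above combines a typical uncertainty with the two weighting factors. A compact sketch (hypothetical array names):

import numpy as np

def epoch_errors(sigma_typ, snr, flux):
    # <S/N>/(S/N) weighting (e.g. Howell 2006) plus the physically motivated
    # flux factor (<FLUX>/FLUX)^(1/2): errors shrink in high-S/N, high-flux
    # states and grow in low states.
    return sigma_typ * (np.mean(snr) / snr) * np.sqrt(np.mean(flux) / flux)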
Summary and prospects

We are constructing a publicly available database of ten optically bright GLQs in the northern hemisphere. This database is fed with materials from a long-term observing programme started in 1999. The central idea behind the observational effort is to perform an accurate follow-up of each lens system over 10−30 years, using mainly the cameras, polarimeters, and spectrographs on the GTC and LT facilities at the RMO (see Table 1). Such a long programme is required, among other things, to find periods of high microlensing activity in most targets (Mosquera & Kochanek 2011). The database currently incorporates ∼ 6600 processed frames of 9 GLQs in the period 1999−2016 (see Tables 2 and 3), and we intend to reach 10 000 frames in the next update of the archive (late 2019/early 2020). We remark that this is a singular initiative in the research field of GLQs, since other groups do not offer freely accessible, well-structured archives with such a rich variety of astronomical materials. In addition to frames that are ready for astrometric and photometric tasks, spectral extractions, or polarisation measurements, this paper also presents high-level data products for six of the ten objects in the GLQ sample: QSO B0909+532, FBQS J0951+2635, QSO B0957+561, SDSS J1001+5027, QSO B1413+117, and QSO B2237+0305. We have published results for two objects in the sample in the last year. GTC-LT data provided evidence of two extreme cases of microlensing activity in two double quasars at z ∼ 2 (SDSS J1339+1310 and SDSS J1515+1511; Shalyapin & Goicoechea 2017). In addition, PS J0147+4630 has been discovered very recently (Berghea et al. 2017; Lee 2017; Rubin et al. 2017) and has been monitored with the LT since August 2017. We are also completing a detailed analysis of several physical properties of SDSS J1442+4055 (e.g. the time delay between quasar images and the redshift of the primary lensing galaxy; see Table 2), which will be presented in a subsequent paper. Our main results for the six individual objects are listed below.

- QSO B0909+532: Table 4 at the CDS includes RATCam-IO:O r-band light curves at 344 epochs over the period 2005−2016. In recent years (2012−2016), we detected microlensing variations with an amplitude of ∼ 0.1 mag, which might be useful to improve the Hainline et al. constraint on the size of the r-band continuum source. We note that our database also contains recent g-band frames (see Table 3).

- FBQS J0951+2635: Table 5 at the CDS incorporates LT-NOT-MAO r-band light curves covering 2001 to 2016. Despite the relatively low cadence of observations and the existence of a gap of ∼ 1000 d, these data are critical to trace the long-timescale microlensing event in the system (Jakobsson et al. 2005; Shalyapin et al. 2009).

- QSO B0957+561: Table 6 at the CDS shows IAC80-LT r-band brightness records spanning about 21 years (1996−2016).
These records reveal the presence of an ongoing microlensing event, which offers a long-awaited opportunity to simultaneously measure the sizes of the g- and r-band continuum sources (the database includes frames in both passbands; see Table 3), as well as the mass distribution in the cD galaxy acting as a gravitational lens. We are now starting to use our two-colour light curves of the first GLQ to obtain information on the structure of the sources and the lens. The future constraints on source sizes will be compared to current results from microlensing analyses (e.g. Refsdal et al. 2000; Hainline et al. 2012) and reverberation mapping techniques (e.g. Goicoechea et al. 2012). The main results from the LT polarimetric observations are presented in Table 7. The optical polarisation degree of the two quasar images is < 1% over the monitoring campaign from late 2011 to early 2017, where the A image has the larger polarisation amplitude of ∼ 0.5% (for the B image, the amplitude is consistent with zero; see, however, the very early measures by Wills et al. 1980). In Sect. 3.2.3 (Overview), we proposed a scenario that may explain this "polarisation excess" in A in addition to other observed "anomalies".

- SDSS J1001+5027: Tables 13 and 14 at the CDS contain LT spectra of the double quasar at seven epochs in 2013−2016. Previous spectra (Oguri et al. 2005), NIR magnitudes (Rusu et al. 2016), and spectra in the SDSS database favour the presence of a compact dusty cloud along the line of sight of B, although other scenarios cannot be completely ruled out. The ongoing spectroscopic programme will be continued in the coming years to try to distinguish contamination, dust extinction, and macro- and micro-lens magnification effects.

- QSO B1413+117: New r-band light curves of the four quasar images in 2006 and 2013−2016 are presented in Table 15 at the CDS. These complement previous r-band magnitudes spanning several months in 2008 (Goicoechea & Shalyapin 2010).

- QSO B2237+0305: Table 16 at the CDS contains r-band light curves of the four quasar images over the period 2006−2016. These data (see also the frames listed in Table 3) are promising tools to improve knowledge of the structure of the accretion disc in the distant quasar and the composition of the bulge of the spiral lens galaxy at z < 0.1 (e.g. Kochanek 2004). The second data interval (2013−2016) has a special relevance because there are no OGLE V-band magnitudes in recent years.

The final GLQ database in the second half of the 2020s will help astronomers to delve deeply into the structure of distant active galactic nuclei, the mass distribution in galaxies at different redshifts, and the cosmological parameters (e.g. Schneider et al. 1992, 2006). In addition to frames for the ten targets in the GLQ sample, the GLENDAMA global archive will also include observations of binary quasars and other non-lensed objects, and even of newly discovered GLQs in which one or more images are fainter than r = 20 mag (e.g. SDSS J1617+3827). In addition to the current telescopes at the RMO, we will try to use the successor of the LT (LT2; Copperwheat et al. 2015) and the World Space Observatory-Ultraviolet (WSO-UV; Shustov et al. 2014). These two new facilities will be operational in the first half of the next decade, and the UV space telescope may provide details on continuum sources in the surroundings of supermassive black holes. So far, the optical polarimetry has mainly been focused on a widely separated GLQ (QSO B0957+561). However, our database also contains broad-band polarimetric observations with the LT of three other systems with Δθ ∼ 2−3 arcsec: SDSS J1001+5027, SDSS J1339+1310, and QSO B2237+0305, and we are developing a new method for reducing these frames (pixel size of ∼ 0.45 arcsec and normal seeing conditions). We will also try to measure polarisations with better spatial resolution at telescopes other than the LT.
Whereas V-band polarisation degrees of some unresolved GLQs were obtained by Hutsemékers et al. (1998) and Sluse et al. (2005), Chae et al. (2001) and Hutsemékers et al. (2010) determined V-band polarisations of the four images of QSO B1413+117 through observations at high spatial resolution. We add a remark on the expected role of our GLQ database in cosmological studies. Seventy percent of the targets are being photometrically monitored in an intensive way at certain periods, which will allow users to determine delays between images in different time segments throughout the entire duration of the project. This procedure is useful for checking time-segment-dependent biases (e.g. Tewes et al. 2013; Tie & Kochanek 2018) and obtaining unbiased measures of time delays for cosmology. As shown by Shalyapin et al. (2012) (see also Kundić et al. 1997), chromatic biases are also possible, that is, different time delays at different wavelengths, and thus the final archive will incorporate data to check for chromaticity in delays of at least three systems. Robustly measured delays from the new database (see current results in Table 2) and other ongoing monitoring campaigns (e.g. the COSMOGRAIL project) will shed light on an unbiased value of H0 and additional cosmological quantities. Despite this optimistic perspective, some problems remain, and they need to be fixed. For example, the spectroscopic redshift of the main lens in QSO B1413+117 is unknown. One must also avoid the bias introduced by unaccounted mass along GLQ sightlines, since H0 can be noticeably overestimated when line-of-sight deflectors are ignored (e.g. Wilson et al. 2017). Finally, the overall experience gained from ongoing projects will be a basic tool to decide on future time-domain observations of large collections of GLQs, which will lead to robust estimates of H0, as well as of the amount of dark matter and dark energy in the Universe (e.g. Oguri & Marshall 2010; Treu & Marshall 2016).
Acknowledgements. We thank the anonymous referee for carefully reading our long manuscript and for several useful comments. We also thank the Universidad de Cantabria (UC) web service for making the GLENDAMA global archive possible. We acknowledge A. Ullán for carrying out observations with the Telescopio Nazionale Galileo (TNG) and processing raw frames of the Gravitational Lenses International Time Project. We are indebted to C.J. Davis, J. Marchant, C. Moss and R.J. Smith for guidance in the preparation of the robotic monitoring programme with the Liverpool Telescope (LT). We also acknowledge the staff of the LT for their development of the Phase 2 User Interface (which allows users to specify in detail the observations they wish the LT to make) and the data reduction pipelines. The LT is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos (ORM) of the Instituto de Astrofísica de Canarias (IAC), with financial support from the UK Science and Technology Facilities Council. We thank the support astronomers and other staff of the observatories in the Canary Islands (J.A. Acosta, C. Alvarez, T. Augusteijn, R. Barrena, A. Cabrera, R.J. Cárdenes, R. Corradi, J. García, T. Granzer, J. Méndez, P. Montañés, T. Pursimo, R. Rutten, M. R. Zapatero and C. Zurita, among others) for kind interactions regarding several observing programmes at the ORM and the Observatorio del Teide (OT).
Based on observations made with the Gran Telescopio Canarias, installed at the Spanish ORM of the IAC, on the island of La Palma. This archive is also based on observations made with the Isaac Newton Group of Telescopes (Isaac Newton and William Herschel Telescopes), the Nordic Optical Telescope and the Italian TNG, operated on the island of La Palma by the Isaac Newton Group, the Nordic Optical Telescope Scientific Association and the Fundación Galileo Galilei of the Istituto Nazionale di Astrofisica, respectively, in the Spanish ORM of the IAC. We also use frames taken with the IAC80 and STELLA-I telescopes, operated on the island of Tenerife by the IAC and the AIP in the Spanish OT. We also thank the staff of the Chandra X-ray Observatory (CXO; E. Kellogg, H. Tananbaum and S.J. Wolk) and the Swift Multi-wavelength Observatory (SMO; M. Chester and N. Gehrels) for their support during the preparation and execution of the monitoring campaign of QSO B0957+561 in 2010. The CXO Center is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration (NASA) under contract NAS8-03060. The SMO is supported at Penn State University by NASA contract NAS5-00136. This publication makes use of data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by NASA and the National Science Foundation. We also used data taken from the Sloan Digital Sky Survey (SDSS) database. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration. The SDSS web site is www.sdss.org. Funding for the SDSS has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, and national agencies in the U.S. and other countries. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. We are grateful to both collaborations (2MASS and SDSS) for making those public databases available. The construction of the archive has been supported by the GLENDAMA project and a few complementary actions: PB97-0220-C02, AYA2000-2111-E, AYA2001-1647-C02-02, AYA2004-08243-C03-02, AYA2007-67342-C03-02, AYA2010-21741-C03-03 and AYA2013-47744-C3-2-P, all of them financed by Spanish Departments of Education, Science, Technology and Innovation; "Lentes Gravitatorias y Materia Oscura", financed by the SOciedad para el DEsarrollo Regional de CANtabria (SODERCAN S.A.) and the Operational Programme of FEDER-UE; and AYA2017-89815-P, financed by MINECO/AEI/FEDER-UE. RGM acknowledges grants of the AYA2010-21741-C03-03 and AYA2013-47744-C3-2-P subprojects to develop the core software of the database. This archive has also been possible thanks to the support of the UC.
Instead of the magnitudes in Table 6, we used the corresponding fluxes (in mJy) and a time delay of 420 d to study the delay-corrected flux ratio B/A in the r band. To evaluate this ratio at different epochs, we compared the fluxes of the B image with the fluxes of the A image shifted by +420 d. Because the shifted epochs of A generally do not coincide with those of B, we made bins in A around the epochs of B. These bins had semisizes α = 1−12 d. Shalyapin et al. (2012) considered four time segments (observing seasons) of B that were called TS1, TS2, TS3, and TS4, and in this paper, we extend our previous analysis by incorporating 16 additional segments (see Table A.1).
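As a concrete illustration of the binning and χ²-minimisation procedure just described, the following Python sketch estimates the delay-corrected flux ratio for one time segment. The input arrays are hypothetical placeholders for the r-band fluxes of images A and B (the real data are in Table 6 at the CDS), and the simplified error treatment is ours, not necessarily the one used to build Table A.1.

```python
import numpy as np

DELAY = 420.0  # adopted time delay (days) of B with respect to A

def delay_corrected_ratio(t_a, f_a, t_b, f_b, e_b, alpha=6.0):
    """Shift A forward by the delay, bin it around the epochs of B with
    bin semisize alpha (days), and fit a constant flux ratio R = B/A."""
    t_a_shifted = t_a + DELAY
    fa, fb, err = [], [], []
    for t, f, e in zip(t_b, f_b, e_b):
        sel = np.abs(t_a_shifted - t) <= alpha
        if sel.any():                      # keep only overlapping epochs
            fa.append(f_a[sel].mean())
            fb.append(f)
            err.append(e)
    fa, fb, err = map(np.asarray, (fa, fb, err))
    # chi2(R) = sum(((fb - R*fa)/err)**2) is quadratic in R -> closed form
    r_best = np.sum(fb * fa / err**2) / np.sum(fa**2 / err**2)
    chi2_min = np.sum(((fb - r_best * fa) / err) ** 2)
    # formal 2-sigma interval: all R satisfying chi2(R) <= chi2_min + 4
    grid = r_best * np.linspace(0.8, 1.2, 4001)
    chi2 = (((fb[None, :] - grid[:, None] * fa[None, :]) / err) ** 2).sum(axis=1)
    ok = grid[chi2 <= chi2_min + 4.0]
    return r_best, chi2_min / (len(fb) - 1), (ok.min(), ok.max())
```

Because the model is linear in R, the best solution has a closed form; the grid scan is only used to trace the 2σ interval via the χ² ≤ χ²₀ + 4 condition.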
We note that TS0 corresponds to the season 2005/2006, in which the transition from the IAC80 Telescope to the LT took place.
Notes. (a) Time segment and observing season of B; (b) reduced chi-square for the best fit (dof = degrees of freedom); (c) formal 2σ confidence interval; (d) average epoch of B (MJD−50 000) in the overlapping period between A(+420 d) and the time segment. (*) Two bin semisizes lead to fits of similar quality, so we cite the average values of χ²₀/dof, B/A and t_B using both semisizes.
We used a χ² minimisation to find the flux ratio for each time segment. In Table A.1, we give the best solutions and their reduced chi-square values. Table A.1 also contains the 2σ intervals for B/A, where each interval includes all values of B/A satisfying the condition χ² ≤ χ²₀ + 4. We obtained χ²₀/dof ∼ 2−3 for the segments TS4, TS7, and TS10, and thus the formal uncertainties for these periods should be taken with caution. The AB comparisons for the 20 time segments, that is, from TS−9 to TS10, are shown in the panels of Fig. A.1. Taking the best solutions of B/A to amplify/reduce the time-delay-shifted signal A, the A and B signals are compared to each other in these panels (A = filled circles and B = open red circles). If we exclusively focus on the LT photometry during the last ten years, our simple scenario (B/A is constant within each segment) does not work for TS4, TS7, and TS10. In addition to best solutions associated with reduced chi-square values ranging from 2 to 2.8, we see some anomalies in these periods. The simple scenario does not convincingly explain the observations, since the variations in A seem to be smoother than those in B (see Fig. A.1).
To carry out the photometric and polarimetric reduction of the RINGO2 data (three epochs; see imaging polarimetry in Sec. 3.2.3), we first extracted instrumental fluxes (in counts) of the objects of interest in each of the eight stacked frames at each epoch. The fluxes were extracted on the 24 frames using IMFITFITS. The IMFITFITS software produced PSF-fitting photometry of the two quasar images and several field stars. Although the RINGO2 re-imaging optics causes a PSF that depends on the position in the field (Steele et al. 2017), this effect does not seem very relevant in our study: the PSF star and the fitted objects are separated by only ≤ 1′, and some aperture photometry tests with field stars led to polarisation parameters similar to those from PSF fitting. In general, aperture photometry is the best method to extract fluxes, but here we study a crowded region. In a second step, the eight fluxes per object at each epoch were used to calculate the corresponding normalised Stokes parameters (q = Q/I, u = U/I). We combined measurements according to the equations in Clarke & Neumayer (2002) and Jermak (2016). However, these Stokes parameters must be corrected for instrumental (device-dependent) biases to obtain the true polarisation. For example, Jermak (2016) and Steele et al. (2017) reported different mean instrumental polarisations over four periods of time, and our observations were performed during the third period they studied. After comparing the (q, u) values for the H field star with the parameter distributions of standard zero-polarised stars, we reasonably assumed that the H star is an unpolarised object. At each epoch, the instrumental polarisation (q_H, u_H) was then subtracted from the (q, u) values of the quasar images. There is a second main instrumental effect: depolarisation of the signal.
To account for this additional issue, we analysed RINGO2 data of the standard polarised star VICyg12 (e.g. Schmidt et al. 1992). Available sets of (eight) frames for different sky position angles (ROTSKYPA values) allowed us to plot the q−u diagram in the top panel of Fig. A.2. After subtracting the instrumental polarisation and removing an elliptical distortion in the distribution of shifted (q, u) values, the corrected data were spread along a ring centred at the origin of the q−u plane (see the middle panel of Fig. A.2). The radius of this ring yielded the (measured) polarisation degree PD_meas for the polarised star and led to a depolarisation factor F = PD_meas/PD_true = 0.76 (see also the Jermak PhD thesis). When rotating a set of frames by an angle φ, the associated polarisation appears rotated through an angle 2φ, that is, q_φ = q cos(2φ) + u sin(2φ) and u_φ = −q sin(2φ) + u cos(2φ). Hence, all data were de-rotated using the known values of ROTSKYPA (φ = −ROTSKYPA; see the bottom panel of Fig. A.2). The measured polarisation angle (PA_meas) did not coincide with the true one, and we found PA_true = PA_meas + K, where K = 42° (e.g. Steele et al. 2017, who estimated K = 41 ± 3° in the period of interest). As a last step in the reduction of the RINGO2 observations of QSO B0957+561, we corrected the depolarisation bias in our science data (quasar images). More specifically, after removing the instrumental polarisation bias, the (q, u) values of A and B were de-rotated through angles 2φ = −2(ROTSKYPA + K) and then divided by F (see Fig. A.3).
Fig. A.2 (caption, continued). All data points are distributed in a ring around the instrumental polarisation (cross). In the middle panel, the centre of the distribution is shifted to the origin (0, 0) by subtracting the instrumental polarisation (an elliptical distortion is also corrected). In the bottom panel, the data are de-rotated using the sky position angles associated with them.
Fig. A.3. RINGO2 data of QSO B0957+561. The Stokes parameters (q, u) of A and B at the three observing epochs are corrected for instrumental polarisation and depolarisation effects (see main text). We also display the means and root-mean-square deviations of both parameters for each quasar image (crosses and ellipses), as well as dashed open circles representing three different polarisation degrees: 1%, 2% and 3%.
The reduction procedures for the RINGO3 data were similar to those used to reduce the RINGO2 observations. However, RINGO3 is a three-band optical polarimeter, so we obtained three sets of eight stacked frames at 16 observing epochs (see Sec. 3.2.3). The three bands are labelled B (blue), G (green), and R (red), and they differ from the standard ugriz and UBVRI passbands. In each optical band, the photometric outputs for a given object led to its instrumental Stokes parameters at all epochs. Unfortunately, the RINGO3 hardware was changed four times during our polarimetric follow-up between 2013 and 2017, which forced us to split the data into five periods and analyse the instrumental biases within each individual period. To discuss the instrumental polarisations, we used available observations of the standard zero-polarised stars G191B2B and BD+28°4211 (e.g. Schmidt et al. 1992), and our data for the unpolarised field star H. As both standard stars showed similar behaviour, we focused on the comparison between G191B2B and H. In Fig. A.4, we illustrate the time evolution of the instrumental polarisation in each band (see also Jermak 2016).
The vertical solid lines represent epochs at which there were hardware updates, while the vertical dotted lines correspond to the observing dates. In the last three periods, our estimates of the instrumental polarisations from the G191B2B data (red circles and blue squares) agree well with the Stokes parameters for H (magenta and cyan stars) and with the instrumental polarisation offsets in the Jermak PhD thesis (horizontal dotted lines). However, in the first two periods (before a depolarising Lyot prism was fitted in December 2013), discrepancies appear between the results from the G191B2B and H stars. The (q_H, u_H) values are the best tracers of the polarisation bias, since the H field star was observed in the same conditions as the quasar images. In order to remove the instrumental polarisation bias at a given epoch, we subtracted the (q_H, u_H) values for each of the three bands of the RINGO3 polarimeter at this epoch from the instrumental Stokes parameters of the quasar images. To correct for elliptical distortion, a multiplicative factor of 1.14 was also applied to the shifted values of q_A and q_B. In addition, we studied the depolarisation factors (and K values) using RINGO3 data of standard polarised stars, and we confirmed the Jermak results for the last three periods. Therefore, averaging over the three bands, we took K = 55°, 115.5°, and 125° in the third, fourth, and fifth time segments in Fig. A.4. Averaging over the three bands and these three time segments, F was also taken to be equal to 0.96. Słowikowska et al. (2016) proved that the values of K and F for the first two periods are very similar to those for the third period (using a method different from ours), and consequently, we adopted K = 55° and F = 0.96 in the two initial periods.
Fig. A.4. RINGO3 data of the standard zero-polarised star G191B2B and the unpolarised field star H. The top, middle, and bottom panels show results in the blue, green, and red bands, respectively. We depict the observing dates (vertical dotted lines) and the times when hardware updates occurred (vertical solid lines), as well as the Stokes parameters for G191B2B (filled circles and squares) and H (filled stars). The horizontal dashed lines represent the average parameters of G191B2B (these would be equal to zero for an ideal instrument), and the horizontal dotted lines are the instrumental polarisation offsets reported in Jermak (2016).
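The full correction chain described in this appendix (bias subtraction, optional distortion factor, de-rotation through 2φ, and depolarisation correction) can be summarised in a short sketch. The numerical values used in the example are those quoted in the text (e.g. K = 42° and F = 0.76 for RINGO2); the function and its argument names are ours, not part of any LT pipeline.

```python
import numpy as np

def correct_stokes(q, u, q_inst, u_inst, rotskypa, K_deg, F,
                   q_distortion=1.0):
    """Return bias-corrected Stokes parameters and the derived
    polarisation degree (PD) and angle (PA, in degrees)."""
    # 1) remove the instrumental polarisation traced by the field star H
    q = (q - q_inst) * q_distortion   # optional elliptical-distortion fix
    u = u - u_inst
    # 2) de-rotate: rotating the frames by phi rotates (q, u) by 2*phi;
    #    here 2*phi = -2*(ROTSKYPA + K)
    two_phi = -2.0 * np.deg2rad(rotskypa + K_deg)
    q_rot = q * np.cos(two_phi) + u * np.sin(two_phi)
    u_rot = -q * np.sin(two_phi) + u * np.cos(two_phi)
    # 3) undo the instrumental depolarisation
    q_true, u_true = q_rot / F, u_rot / F
    pd = np.hypot(q_true, u_true)
    pa = 0.5 * np.degrees(np.arctan2(u_true, q_true))
    return q_true, u_true, pd, pa

# Illustrative RINGO2-like call (all input numbers are made up):
q, u, pd, pa = correct_stokes(0.004, -0.002, 0.001, 0.0005,
                              rotskypa=30.0, K_deg=42.0, F=0.76)
```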
The Effects of Peat Swamp Forest Patches and Riparian Areas within Large Scale Oil Palm Plantations on Bird Species Richness
It is well established that oil palm is one of the most efficient and productive oil crops. However, oil palm agriculture is also one of the threats to tropical biodiversity. This study aims to investigate how set-aside areas in an oil palm plantation affect bird biodiversity. The research area includes two set-aside areas (a peat swamp forest and a riparian reserve) and two oil palm sites adjacent to the reserved forest sites. A total of 3,074 birds comprising 100 species from 34 families were observed in an oil palm plantation landscape on peatland located in the northern part of Borneo, Sarawak, Malaysia. Results showed that set-aside forest areas within large-scale oil-palm-dominated landscapes supported distinct bird species richness. A high percentage of canopy and shrub cover had a positive effect on bird species richness in the area between the oil palm and the peat swamp forest. Herbaceous cover less than 1 m in height influenced the abundance of birds in the plantation close to the peat swamp forest. The set-aside areas in oil palm plantations are essential as refuges for birds and should be part of oil palm landscape management to improve biodiversity conservation. Thus, provided the forest set-aside areas are large enough and risks to biodiversity and habitat are successfully managed, oil palm can play an important role in biodiversity conservation.
INTRODUCTION
Oil palm (Elaeis guineensis) is one of the most efficient and productive oil crops globally, with a production span of about 25 years (Parveez et al. 2020). It is a key contributor to the world's edible oils and fats (World Economic Forum 2018) and plays an important role as a feedstock and a biofuel (Ngando-Ebongue et al. 2012). The impact of oil palm plantations on the environment is also significant, and the industry has faced many challenges relating to biodiversity loss and climate change (Meijaard et al. 2018). The Malaysian oil palm industry has countered these challenges by increasing scientific research on biodiversity conservation in oil palm production areas through set-aside areas (Mohd-Azlan et al. 2019a, 2019b). Malaysia is also promoting sustainable practices by obliging producers to comply with the Malaysian Sustainable Palm Oil Certification Scheme (MSPO 2021).
Over the last decade, many studies have shown that oil palm agricultural expansion and intensification affect biodiversity and associated ecosystem services (Emmerson et al. 2016; Dislich et al. 2017; Guillaume et al. 2018). Biodiversity studies in this ecosystem often compare oil palm plantations with forests and other agricultural crops (Azhar et al. 2011; Yue et al. 2015; Hawa et al. 2016; Rajihan et al. 2017; Mitchell et al. 2018; Amit et al. 2021). In addition, other studies have focussed on different oil palm production systems and indicated that biodiversity levels are higher in smallholdings than in large-scale plantations (Azhar et al. 2014; Syafiq et al. 2016; Razak et al. 2020). Biodiversity levels in oil palm agroecosystems are affected by habitat quality, which is related to a range of factors such as structural complexity, heterogeneity of vegetation cover, availability of food resources, alteration of microclimate, and the human activities that lead to changes in the physical and chemical properties of the soil (Mariau 2001; Koh et al. 2009; Turner & Foster 2006; Foster et al. 2011; Jambari et al. 2012; Azhar et al. 2013; Drescher et al.
2016; Meijide et al. 2018). These findings showed that the negative impact of oil palm development on biodiversity could partly be mitigated by integrating the oil palm landscape with nature. Recommendations to improve the level of biodiversity and its ecosystem functions include protecting the remaining natural habitat (Phalan et al. 2011), increasing the structural complexity of the crop systems (Tscharntke et al. 2005), increasing the ground vegetation diversity of the crop landscapes (Azhar et al. 2013), adopting polyculture crop management strategies (Ghazali et al. 2016) and establishing set-asides of forest patches or riparian areas in the oil palm landscape (Mitchell et al. 2018; Scriven et al. 2019). Recent efforts to establish set-aside areas (e.g., wildlife corridors, forest patches, riparian areas) within oil palm plantations are part of the biodiversity conservation initiatives to save wildlife (Lucey et al. 2014).
Birds are a commonly studied component of the biodiversity found in oil palm plantations (Yudea & Santosa 2019). Birds are sensitive to habitat disturbance and are widely used as indicators of environmental change in biodiversity conservation evaluation studies (Zakaria et al. 2005; Jambari et al. 2012; Alexandrino et al. 2016). Several studies have examined bird diversity and populations across two different habitats, from plantation to forest, as reported by Mohd-Azlan et al. (2019b). However, to our knowledge, few studies have examined how the vegetation structure of set-aside forest reserves and riparian areas within an oil palm landscape affects bird diversity (Mitchell et al. 2018; Atiqah et al. 2019). Little is known about the efficiency of set-aside areas such as peat swamp forests and riparian reserves within oil palm landscapes in conserving tropical biodiversity. This study investigates how bird species richness and abundance differ between an oil palm plantation and two set-aside areas (a peat swamp forest and a riparian area) in the same landscape. The study also investigates how bird species richness and abundance are associated with vegetation characteristics across a forest-edge-plantation gradient. This paper attempts to provide a detailed investigation of the importance of maintaining or creating forest areas in oil palm landscapes for providing safe passage and refuge for birds and other wildlife while improving agricultural production.
Study site
The study is located in the Sabaju oil palm plantation (SOPP) situated in Bintulu, Sarawak, northern Borneo (N 03° 09.535′ E 113° 24.640′), which belongs to Sarawak Oil Palms Berhad (SOPB). The SOPP landscape consists of five estates, Sabaju 1 to Sabaju 5, which together cover 8,116.36 ha (Fig. 1). Within this estate, two areas were set aside for conservation purposes: a peat swamp forest (PSF) and a riparian area (RP). Oil palm planting started in 2008, with the most recent planting in 2016 (those palms were 4 years old when sampling was carried out). SOPP is mainly located on peatland, with some small areas on mineral soil in Sabaju 1 and Sabaju 2. Plantation management follows the peatland standard operating procedure (excluding the mineral soil at Sabaju 1), whereby artificial drainage networks have been established to control water levels within the plantations. The plantations keep their lower ground covered with natural vegetation and minimise the use of chemicals to control weeds.
Palms located close to the riparian area were marked in red, indicating that this zone is free from fertiliser and chemical application. Harvesting takes place twice a month inside the plantations. Sampling was conducted in 2020 at the two set-aside areas and the adjacent OP plantations (Fig. 2):
1. Site A is a 343 ha patch of conserved PSF located in the south-western part of SOPP. This forest was previously logged PSF, but it is now preserved and protected from any other development, for scientific research and conservation, by the plantation owner. The adjacent OP is Sabaju 4, which covers an area of 1,831 ha and was planted between 2010 and 2011; the OP-A study site was located among palms aged 9 years after planting when sampling was carried out. The ground vegetation of OP-A consists mostly of ferns.
2. Site B is a small (< 50 ha) patch of RP situated in the eastern part of SOPP. This forest grows on the banks of the Sujan River, which crosses the SOPP, and is conserved as a buffer zone between SOPP and the river. The buffer zone is implemented to avoid leakage of fertiliser and chemicals into the river ecosystem, thus preserving the water quality. The RP is located in Sabaju 2. Sabaju 2 (OP-B) covers an area of 2,282 ha, and its palms ranged from 7 to 12 years after planting. The herbaceous cover at the OP-B sampling site is dominated mainly by grasses, and the site has wetter conditions, with more irrigation canals running through the plantation.
Habitat measurements
Vegetation parameters were measured at each plot according to Rodwell (2006), as shown in Table 1:
- Shrub cover (the woody vegetation layer between 1 m in height and the tree canopy): estimated by selecting four 4 m × 4 m sampling plots within the observation plot; the average shrub cover of the sampling plots was used as the total shrub cover of the observation plot.
- Herbaceous cover (the non-woody vegetation layer less than 1 m tall): estimated using four 1 m × 1 m sampling plots; the average cover of the sampling plots was used as the overall herbaceous cover of each observation plot.
Bird observations
The distance point count technique was used at each plot to observe the bird species (Buckland et al. 2001; Zakaria et al. 2009). To avoid time-of-day biases, the plots were visited four times in alternate order for each transect. Data from the transects are not independent and were analysed for trends linked with distance from the forest edge. However, the distance between transects was more than 100 m, and we assume that individual birds recorded at one transect were not recorded at the other transect. Only species heard or sighted within a 20 m radius of the point-count station were recorded as present. Bird calls were recorded using a Rode VideoMic GO attached to a Nikon D3300 camera. Information on the bird species present was obtained from the Birds of Borneo handbook by Myers (2009). Bird vocalisations were used to locate birds and to aid identification.
Statistical Analysis
One-way analysis of variance (ANOVA) and linear regression were computed using JASP Version 0.16.2 (JASP Team 2022). One-way ANOVA was used to compare bird species richness and abundance among sites. In testing for significant differences between sites, RP (4 plots) was excluded from the analysis because of its unequal number of plots compared with the other three sites (12 plots each); however, RP was maintained in the boxplot figure, as it shows the site means. In addition, one-way ANOVA was also used to compare the percentages of canopy, shrub and herbaceous cover between sites. A Tukey post hoc test was used to explore multiple comparisons of mean differences in the measured parameters (species richness, species abundance, and percentages of canopy, shrub and herbaceous cover) between sites. Linear regression was used to relate the bird community to the vegetation structure in the Sabaju oil palm plantation.
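The analysis itself was run in JASP, but the same tests are easy to reproduce; the sketch below uses SciPy and statsmodels on hypothetical per-plot values whose means and group sizes merely mimic the study design.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-plot species richness (the real values are not public)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "site": ["PSF"] * 12 + ["OP_A"] * 12 + ["OP_B"] * 12,
    "richness": np.concatenate([rng.normal(25, 3, 12),
                                rng.normal(13, 3, 12),
                                rng.normal(12, 3, 12)]),
})

# One-way ANOVA across the three equally sampled sites (RP excluded)
groups = [g["richness"].values for _, g in df.groupby("site")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey HSD post hoc test for pairwise differences between sites
print(pairwise_tukeyhsd(df["richness"], df["site"], alpha=0.05))

# Simple linear regression of richness on a vegetation covariate
canopy = rng.uniform(10, 80, len(df))   # hypothetical % canopy cover
slope, intercept, r, p, se = stats.linregress(canopy, df["richness"])
print(f"R^2 = {r**2:.3f}, p = {p:.3f}")
```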
Overall Bird Species Richness and Abundance in Oil Palm Landscape
A total of 3,074 birds belonging to 100 species and 32 families were observed in SOPP (Table 2). Seventy-seven species were recorded in the PSF, 45 species in the RP, 31 species in OP-A, and 30 species in OP-B, including one peat swamp forest specialist, the Hook-billed Bulbul (Setornis criniger), and one Bornean endemic, the Dusky Munia (Lonchura fuscans). Overall, twenty conservation-priority species (Table 1) were recorded in the SOPP: the PSF recorded 15 species, RP five species, OP-A four species and OP-B three species. One-way ANOVA showed a significant effect of the different habitat types in the oil palm landscape on bird species richness (F(3, 36) = 50.24, p < 0.001, ω² = 0.787). Post hoc testing using Tukey's test revealed that the set-aside PSF (mean = 25 species) had significantly (p < 0.001) higher species richness than the oil palm areas (OP_A, mean = 13 species; OP_B, mean = 12 species). The RP site was excluded from the one-way ANOVA comparisons of species richness and abundance because its number of sampling points was unequal to those of PSF, OP_A and OP_B. There was no significant difference in bird species richness between OP_A and OP_B (p = 0.998). Hence, set-aside areas do support high bird species richness in an oil palm plantation. Bird abundance also showed a significant effect of the different habitats in the oil palm landscape (F(3, 36) = 13.26, p < 0.001, ω² = 0.479). Post hoc testing showed that OP_A had significantly higher bird abundance than PSF and OP_B (p < 0.001). Scatter plots showed that the set-aside areas recorded a high number of species with lower bird abundance, whereas the oil palm areas recorded high bird abundance with fewer species (Fig. 3). This was followed by the Family Nectariniidae (spiderhunters and sunbirds) with 13 species; more than 90% of the species from these three families were recorded at the PSF. For RP, the Family Nectariniidae was the most dominant family with seven species, followed by Timaliidae with five species. OP-A, which is close to the peat swamp forest, showed a high number of species from the family Nectariniidae (sunbirds), with eight species, followed by Dicaeidae (flowerpeckers) with four species. OP-B, which is close to the riparian area, recorded mostly species from the family Ardeidae (egrets, bitterns), with five species, followed by Cisticolidae (prinias and tailorbirds) with four species. The good representation of the family Ardeidae is related to the presence of a waterbody (the river) along the riparian area.
Bird and Vegetation Structure at Forest and Plantation in Relation to Distance to the Forest Edge
In this study, six species were recorded at both sites (Site A and Site B) at different distances from the forest edge. In terms of vegetation structure (Fig. 5),
the percentages of canopy cover (F(5, 18) = 12.758, p < 0.001), shrub cover (F(5, 18) = 4.299, p = 0.009) and herbaceous cover (F(5, 18) = 5.147, p = 0.004) differed significantly among the different distances of PSF and OP-A from the edge. PSF showed higher percentages of canopy and shrub cover, but a lower percentage of herbaceous cover, than OP-A at any distance from the edge. PSF at 50 m from the edge recorded the highest percentage of canopy cover (mean = 69.5%), while shrub cover was highest at 100 m from the edge (mean = 60%). OP-A recorded a high percentage of herbaceous cover but low percentages of canopy and shrub cover. There were no significant differences in the percentages of canopy (F(3, 12) = 0.862, p = 0.487), shrub (F(3, 12) = 2.725, p = 0.091) and herbaceous (F(3, 12) = 1.794, p = 0.202) cover among the different distances of RP and OP-B from the edge. RP recorded a high percentage of shrub cover (mean = 52.5%) but low percentages of canopy (mean = 35.25%) and herbaceous cover (mean = 33.25%). For OP-B, high percentages of canopy cover (means of 46.25%−55% at 50−150 m from the edge) and herbaceous cover (mean = 53.75% at 50−150 m) but a low percentage of shrub cover (means of 26.25%−33.75% at 50−150 m) were recorded.
Effect of Peat Swamp Forest and Riparian Reserves in the Oil Palm Agroecosystem on the Bird Community
Our study indicated a strong relationship between the vegetation variables and the species richness and abundance of birds at Site A, across the gradient from the forest edge to the interior of the plantation or the peat swamp forest (Fig. 6). The percentages of canopy (R² = 0.389, F(2, 21) = 6.672, p = 0.006), shrub (R² = 0.357, F(2, 21) = 5.822, p = 0.01) and herbaceous cover (R² = 0.348, F(2, 21) = 5.609, p = 0.011) had significant effects on bird species richness and abundance. The regression results indicate that bird species richness was positively related to canopy and shrub cover but negatively related to herbaceous cover; that is, bird species richness at Site A increased with the percentage of canopy or shrub cover and decreased with the percentage of herbaceous cover. Bird species abundance showed a positive relationship with herbaceous cover and negative relationships with canopy and shrub cover, increasing with the percentage of herbaceous cover and with decreasing percentages of canopy and shrub cover.
DISCUSSION
Overall, PSF recorded the highest number of species, with 77 species (mean = 25 species/point), followed by RP with 45 species (mean = 22 species/point), compared with the plantations (OP-A: 31 species, mean = 13 species/point; OP-B: 30 species, mean = 12 species/point). The higher number of species in PSF than in RP might be due to the size of PSF, which is broad in shape and connected with adjacent forest, whereas RP is a narrow linear strip along the river. Narrow linear forest areas within the oil palm landscape did not support high species richness (Mohd-Azlan et al. 2019b). Connecting habitat with nearby forest is crucial (Hawa et al. 2016) to create wildlife landscape connectivity that supports high biodiversity value within oil palm landscapes. This study showed that the set-aside areas recorded a high number of species with lower bird abundance, whereas the oil palm areas recorded high bird abundance but a lower number of species; this finding is consistent with those of Aratrakorn et al. (2006) and Amit et al.
(2021), who found bird abundance ranging from low to high across the conversion of lowland forest or logged peat swamp forest to oil palm plantation. Fragmented and isolated forest patches are often assumed to have low conservation value because their species communities are depauperate, and important ecosystem services may be reduced or absent in small fragments (Miller-Rushing et al. 2019). However, based on the results of this study, the set-aside areas (RP and PSF) recorded high bird species richness within the oil-palm-dominated landscape. The results for RP are consistent with a previous study by Mitchell et al. (2018), which found that riparian reserves help protect forest bird communities in oil-palm-dominated landscapes. The forested sites (PSF and also RP) supported greater bird diversity than the plantation sites: this may well be linked to the complexity of the forest structure, which supports greater floral diversity and provides better habitat and greater food resources than the plantation sites (Turner & Foster 2009; Yule 2010; Azhar et al. 2011; Posa 2011; Hawa et al. 2016). The study of Mansor and Sah (2012) showed that forest patches are a key factor in providing foraging opportunities for birds, even though birds' foraging behaviour may show differential responses in this habitat because they compete more intensely with each other for the remaining resources. However, this finding contradicts the mist-netting results of Mohd-Azlan et al. (2019b), in which a narrow linear forest area within an oil-palm-dominated landscape could not support high bird species diversity, and likewise for the plantation area, whereas the forest edge recorded higher bird species diversity. Interestingly, these set-aside areas (PSF and RP) within the oil palm landscape provide refuge for threatened species, PSF specialists and bird species endemic to Borneo; hence, some of these species, such as the Short-tailed Babbler, Black-throated Babbler, Black Hornbill and Dusky Munia, were also recorded in the oil palm plantation area. This finding is consistent with previous studies that also recorded threatened species in oil palm plantations (Sheldon et al. 2010; Azhar et al. 2011; Mohd-Azlan et al. 2019b; Amit et al. 2021). Plantations that record threatened, migratory, forest and wetland species have some conservation value within an oil-palm-dominated landscape (Azhar et al. 2011). These results indicate how forest patches provide conservation value in oil-palm-dominated landscapes: they play an important role as bird diversity hotspots and refuges for threatened species, thus sustaining biodiversity in Borneo. Mansor et al. (2019) also noted that continuous forest has critically important characteristics that need to be conserved; likewise, forest patches are important as ecological movement corridors and foraging grounds for birds. PSF recorded a higher number of globally threatened species than RP; the primary contributor was the location of PSF, which connects with adjacent forest. Scriven et al. (2019) noted that habitat connectivity is important to support threatened and forest-dependent species and to improve tropical biodiversity conservation. Owing to its richness in PSF, the Family Nectariniidae was also recorded in higher numbers in the plantation close to it, but this was not the case for the Family Timaliidae. Zakaria et al.
(2005) mentioned that the Family Nectariniidae are known as habitat colonisers and are common in disturbed areas, while the Family Timaliidae comprises mostly forest birds that are more sensitive to habitat disturbance (Moradi & Mohamed 2010). PSF continued to support and provide habitat for birds that are sensitive to disturbance, such as forest birds of the Family Timaliidae, even though this area is located within the oil palm plantation. In the plantation close to RP, however, wetland birds from the Family Ardeidae were the richest group, attracted to the wetland close to the river. These results are in line with a previous study by Azhar et al. (2013), who showed that wetland birds move more in plantations because of the presence of wetland habitats such as ponds and drains, aquatic habitats that provide food resources to these birds. Hence, wetland birds are also attracted to riparian reserves within oil-palm-dominated landscapes.
Study of Birds across a Distance Gradient from Forest to Oil Palm Plantation
In this study, bird species richness and abundance at Site A differed significantly among the distances of 50 m, 100 m and 150 m from the edge towards the interior of the peat swamp forest or the oil palm plantation, with high species richness in the forest close to the edge (PSF at 50 m) and higher abundance in the interior of the oil palm plantation (OP-A at 100 m and 150 m). Even though the present study used the point count method, the findings seem to be consistent with other research using the mist-netting method, which found significant differences in bird diversity across the gradient from the interior forest and plantation to the edge (Mohd-Azlan et al. 2019b). The study of Mohd-Azlan et al. (2019b) was conducted at the edge and at distances of 100 m, 200 m and 300 m from the edge towards the interior plantation or forest; it showed that the edge and the interior forest recorded higher species richness than the interior plantation, owing to the edge effect. The edge effect is defined as the changes that occur at the abrupt transition between adjacent habitats resulting from the juxtaposition of contrasting ecosystems on either side of the discontinuity (Sammalisto 1957). Edges contain increased biodiversity since they attract species that are able to exploit both sides of the discontinuity, in addition to the species characteristic of either side (Clapham 1973). The presence of bird species that can exploit both or either side is influenced by food specialisation and habitat associations, which include humidity, light intensity and temperature (Maina 2002) and canopy density (Stone et al. 2018). Although the observations in this study were not conducted at the edge itself, the results showed that forest at 50 m from the edge recorded higher species richness than the interior forest (100 m and 150 m). This finding indicates that the edge effect might still apply at 50 m from the edge, compared with forest at 100 m and 150 m from the edge, which had lower species richness.
The Effect of Vegetation Structure on Bird Species Richness and Abundance at Different Distances from the Forest Edge
The finding that a high percentage of canopy and shrub cover supported greater bird species richness in PSF than in the interior of the plantation reflects the findings of Azhar et al. (2013).
This study also suggested that the higher density of shrub cover in RP supported higher bird species richness and abundance. This finding is similar to previous research (Mitchell et al. 2018) suggesting that RP in oil palm plantations supports distinct bird communities. Owing to the structure of the canopy layer, PSF and RP attracted forest bird species such as trogons, ioras, barbets and broadbills, and arboreal birds such as pigeons and bee-eaters. These findings are in line with the results of Peh et al. (2006), Hawa et al. (2016) and Beskardes et al. (2018). In addition, some species, such as eagles, dollarbirds, hornbills and falconets, prefer a higher canopy closer to the forest edge for predation; these species prefer taller trees as lookouts for prey (Andersson et al. 2009). During the observations, a Black Hornbill was spotted feeding on beetles at the top of a tall tree in RP. Furthermore, the canopy of the PSF provides sufficient sunlight and space for the development of shrub layers with lianas, epiphytes and hemiepiphytes, which may attract forest birds such as babblers, prinias, bulbuls and sunbirds that utilise the different vegetation strata (Gaither 1994; Peh et al. 2006; Azhar et al. 2013; Hawa et al. 2016; Mansor & Ramli 2017; Mansor et al. 2019). A study by Mansor et al. (2019) reported the importance of aerial curled dead leaves within the aboveground vertical vegetation layers of the forest as a foraging substrate for a group of babblers. The negative relationship of bird species richness and abundance with herbaceous cover at Site B might arise because the herbaceous habitat at OP-B, close to RP, was dominated by grass and had wetter conditions, which attract more wetland birds such as egrets, bitterns and waterhens. However, this study showed that an increasing percentage of herbaceous cover and decreasing percentages of canopy and shrub cover were associated with a higher abundance of birds in the interior of the plantation than in PSF. Nine-year-old oil palm stands on peat (OP-A) have an open canopy, which provides direct sunlight for the development of ground vegetation layers; herbaceous cover occurred mostly in oil palm circles, frond piles and harvesting paths, while shrub cover grew along the field drains. In the oil palm plantation close to RP, by contrast, the percentages of herbaceous and canopy cover were higher than in OP-A. The high canopy cover was due to palm age (11 years old); the increased canopy cover still provides enough sunlight for the development of herbaceous cover. The lower percentage of shrub than herbaceous cover in oil palm plantations might be due to systematic weeding practices, a finding consistent with Azhar et al. (2011). The abundance of birds in the plantation is due to the presence of dominant species such as the Yellow-vented Bulbul, Oriental Magpie Robin, Malaysian Pied Fantail, Plain Sunbird and Ashy Tailorbird. The results of this study are consistent with Hawa et al. (2016), who reported these species in oil palm plantations. Amit et al. (2015) also mentioned that the most dominant species in oil palm plantations was the Yellow-vented Bulbul and stated that this species feeds on oil palm pollinating weevils such as Elaeidobius kamerunicus. The abundance of these birds relies on the thickness of the vegetation cover, which provides refuge from predators (possibly related to their anti-predator strategies) and food resources such as arthropods and seeds (Azhar et al. 2011; Tamaris et al. 2017; Ashton-Butt et al. 2018).
Also, these species provide essential ecosystem services for plantations by controlling pests such as bagworms (Chennon & Susanto 2006; Koh 2008; Hawa et al. 2016). Hence, it is important to maintain vegetation cover to support the survival of these bird species in oil palm plantations.
CONCLUSION
Our results demonstrate a strong effect of the set-aside areas: the peat swamp forest and the riparian area supported bird species richness and abundance in the overall oil-palm-dominated landscape. Maintaining forest patches with high canopy and shrub cover in the oil palm landscape provides habitat for forest, wetland, endemic, predatory and threatened bird species. Nevertheless, we found that a high percentage of herbaceous cover may result in a high abundance of birds in the oil palm area close to the peat swamp forest. Protecting and conserving the species of concern is the most important strategy, and it should be supported through better management of the set-aside areas within the oil-palm-dominated landscape, thereby contributing to sustainable palm oil production. Set-aside areas should be linked and connected with nearby forests to improve wildlife landscape connectivity. Future research should highlight how bird species in set-aside areas provide ecosystem services to the interior of oil palm landscapes.
Functional Grouping and Establishment of Distribution Patterns of Invasive Plants in China Using Self-Organizing Maps and Indicator Species Analysis
In the present study, we introduce two techniques – self-organizing maps (SOM) and indicator species analysis (INDVAL) – for understanding the richness patterns of invasive species. We first employed SOM to identify functional groups and then used INDVAL to identify the representative areas characterizing these functional groups. Quantitative traits and distributional information on 127 invasive plants in 28 provinces of China were collected to form the matrices for our study. The results indicate Jiangsu to be the province with the highest number of invasive species, while Ningxia had the lowest. Six functional groups were identified by the SOM method, and five of them were found to have significantly representative provinces by the INDVAL method. Our study represents the first attempt to combine self-organizing maps and indicator species analysis to assess the macro-scale distribution of exotic species.
INTRODUCTION
Invasive species have become a serious socioeconomic problem worldwide. The costs of controlling invasive species have been increasing in recent years (Pimentel et al., 2005). A clear understanding of the mechanisms underlying the occurrence of invasive species in the environment (Usseglio-Polatera et al., 2000) would therefore be advantageous in creating efficient measures for the better management of invasive species.
In general, the more diverse a community is, the higher the chance that a particular taxon lives in a given habitat with various combinations of species traits (Usseglio-Polatera et al., 2000). Species traits will therefore provide useful information about species invasiveness, although there are some technical issues, e.g., local idiosyncrasies and phylogenetic constraints (Lloret et al., 2005). A growing body of research suggests that the success of invasive plants is controlled by a series of key processes or traits (Theoharides and Dukes, 2007). Consequently, some researchers recommend identifying groups of organisms with similar relationships among their species traits.
Functional groupings (Fox and Brown, 1993; Gitay and Noble, 1997; Wilson, 1999) denote species sharing common attributes. They have been used in different fields of ecological research, including vegetation studies, conservation, and so on (Salmaso and Padisak, 2007 and the references therein). Irrespective of the organisms studied, a common assumption within such a grouping is that the characteristics of a community can be better understood if species are grouped into classes that possess similar characteristics or behave similarly (Salmaso and Padisak, 2007).
Functional grouping analysis has some advantages over examining traits at an individual level, because many traits manifest advantages only when acting synergistically (Lambdon et al., 2007). There are many works applying the concept of functional groups, for example, ones dealing with conservation target setting (Zhu et al., 2004) and invasive species management (Lososova et al., 2007; Statzner et al., 2007).
China is a vast country with rich biodiversity and is regarded as a global hotspot (http://www.biodiversityhotspots.org/). However, it is also a very vulnerable country suffering from the ill effects of invasive species due to economic development (Ding et al., 2008). There is a long history of introduction of exotic species to China, especially those having economic benefits (Yan et al., 2001). However, negative impacts brought by exotic species have in recent decades been increasingly reported owing to global human activities and climate change, and China is a highly concerned nation (Yan et al., 2001; Ding et al., 2006; Huang et al., 2008).
Understanding the functional grouping and distributional patterns of invasive species will be beneficial in establishing relevant control strategies and rapid assessment methods. In the present study, we apply two methods – self-organizing maps (SOM) and indicator species analysis (INDVAL) – to identify the functional groups of invasive plant species of China and to select representative distributional provinces.
Distribution data
The provincial distributional information of 127 invasive plants of China was compiled from an online source (CSIS; China Species Information Service; http://www.chinabiodiversity.com/) and previously published papers (e.g., Yan et al., 2001; Huang et al., 2008). Table 1 lists all the invasive plants and their corresponding numbers for subsequent analysis.
Physiological traits
The following physiological traits have been mentioned in previous works (e.g., Lososova et al., 2006, 2007; Lambdon et al., 2007; Huang et al., 2008) as revealing the functional groups of invasive plants. For each attribute, its different states were quantitatively encoded in a series of binary variables: a) Life span: annual grass, perennial grass, liane, shrub, or arbor. b) Invasive seriousness: common, middle, or strong. c) Regions of origin: Americas, Europe, Australasia, Asia, or Africa. d) Introduction mode: accidental, human, fruit, ornamental, officinal, or pasture. e) Propagation habitat: human-related areas, wild natural areas, or both. f) Flowering season: spring, summer, autumn, winter, or all year. g) Reproduction mode: seed, or seed and vegetative. h) Suitable range size: provincial or national.
Functional grouping analysis
We used self-organizing maps to group all species based on their functional similarity. After obtaining the resulting map, a cluster analysis using Euclidean distance was performed to identify the final functional groupings.
SOM model
Self-organizing maps represent an artificial neural network model (Kohonen, 1982) aiming to classify high-dimensional data, performing a non-linear projection of the multidimensional data space onto a two-dimensional space (Lek et al., 1996; Park et al., 2003a, 2003b). The SOM neural network consists of two layers of neurons: the input layer and the output layer. The output layer is represented by a map, a rectangular grid with l × m neurons (or cells), laid out in a hexagonal lattice (Worner and Gevrey, 2006). We used a batch algorithm for SOM analysis (Worner and Gevrey, 2006). Details of the algorithm and its theoretical basis are given by Kohonen (2001). The software we used for implementing the SOM method was the Matlab programming language (Mathworks, 2001) and the SOM Toolbox (version 2.0 beta), developed by the Laboratory of Information and Computer Science, Helsinki University of Technology (http://www.cis.hut.fi/projects/somtoolbox/documentation/somalg.shtml). The geographic maps were generated by the software ArcView 3.2 (ESRI; http://www.esri.com/).
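For readers without access to the Matlab SOM Toolbox, the grouping step can be sketched in Python with the MiniSom package as a stand-in. The 8 × 7 grid matches the map in Fig. 2 (MiniSom's default lattice is rectangular rather than hexagonal), while the trait matrix, training length, and the choice of six clusters below are placeholders.

```python
import numpy as np
from minisom import MiniSom
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical binary trait matrix: 127 species x 30 encoded trait states
traits = np.random.default_rng(0).integers(0, 2, size=(127, 30)).astype(float)

# 8 x 7 SOM, as in the study; batch training as in Worner & Gevrey (2006)
som = MiniSom(8, 7, traits.shape[1], sigma=1.5, learning_rate=0.5)
som.random_weights_init(traits)
som.train_batch(traits, num_iteration=5000)

# Hierarchical (Euclidean/Ward) clustering of the 56 codebook vectors
codebook = som.get_weights().reshape(-1, traits.shape[1])
cell_groups = fcluster(linkage(codebook, method='ward'), t=6,
                       criterion='maxclust')

# Assign each species to the functional group of its best-matching cell
def cell_index(x):
    i, j = som.winner(x)          # best-matching unit on the 8 x 7 grid
    return i * 7 + j

species_groups = np.array([cell_groups[cell_index(x)] for x in traits])
```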
Cluster analysis
Sites that are neighbors on the grid are expected to be more similar to each other, whereas sites remote from each other are expected to be distant in the feature space (Worner and Gevrey, 2006). To detect cluster boundaries on the map, cluster analysis was applied to the SOM model output (Park et al., 2003a, 2003b). Hierarchical cluster analysis can give cluster boundaries that are crisper than those of the unified-distance-matrix approach. A simple bootstrapping method was used to justify the choice of the number of clusters (Hernandez et al., 2005).
Indicator species analysis
The indicator species analysis (INDVAL) method aims to identify representative species which can characterize groups of samples (Dufrene and Legendre, 1997). This analysis was performed using the INDVAL for PC package (http://biodiversite.wallonie.be/outils/indval/home.html). Herein we use INDVAL to select representative areas for each functional invasive group. Significance was calculated using the method of computing the weighted distance between randomized values and the observed value (t-test). The Monte Carlo test was run using 50,000 random iterations and five seeds per random number generator. The significance level was set at P < 0.05 (Casazza et al., 2008).
RESULTS AND DISCUSSION
We ascertained that Jiangsu had the highest richness out of the 28 provinces, followed by Yunnan, Anhui, and Zhejiang. In contrast, Ningxia had the lowest richness. 'Hotspot' provinces containing more than 50 species include Jiangsu, Yunnan, Anhui, Zhejiang, Jiangxi, Fujian, and Liaoning. Most of these provinces are circumlittoral areas (Fig. 1).
The initial SOM model grouped species in a grid cell system (8×7) according to their trait similarities (Fig. 2). After applying hierarchical cluster analysis on the basis of the initial SOM map (Fig. 3a), our study revealed six functional groups based on their biological traits (Fig. 3b). Indicator species analysis identified representative invaded provinces for each functional group. Given below are detailed discussions of each group and the relevant representative areas.
Group I included 12 species, which mostly had a winter flowering season and the seed-and-vegetative reproduction mode. The representative species were Abutilon crispum, Lantana camara, and Wedelia trilobata. The INDVAL analysis demonstrated that this group is distributed in the southern part of China and does not occur in Northern China. Representative provinces having significant indicator values (InV) were Fujian (InV = 17.9, P < 0.05), Guangdong (InV = 29.8, P < 0.05), Guangxi (InV = 19.8, P < 0.05), Hainan (InV = 24.8, P < 0.05), and Taiwan (InV = 20.8, P < 0.05).
Group II was composed of 27 species. Most species in this group were annual grasses with a spring flowering season. Representative species were Axonopus compressus, Cassia tora, and Hordeum jubatum. The INDVAL analysis indicated that this group has no biased geographic distribution and is homogeneously invasive throughout the whole nation; no representative provinces were found.
Group III consisted of 18 species, which are generally distributed throughout the whole nation and have a range broader than those of the other groups. The typical species in this group were Mirabilis jalapa, Talinum paniculatum, and Ipomoea purpurea. The representative area selected by the INDVAL analysis was Southwestern China, principally including Yunnan Province (InV = 18.1, P < 0.05).
Group IV was composed of 20 species, which have a dispersal mode related to abiotic factors, for example, hydrochory or anemochory. Representative species were Erigeron annuus, Senecio vulgaris, and Amaranthus albus. Typical distribution provinces were mainly located in the western part of China, including two provinces: Inner Mongolia (InV = 12.5, P < 0.05) and Xizang (InV = 12.9, P < 0.05).
Group V consisted of 24 species, most of which have been documented by published reports to have strong invasiveness and to be dangerous for ecosystems. Examples include Ambrosia artemisiifolia, Lepidium virginicum, and Agrostemma githago. The INDVAL analysis demonstrated that this group has a biased distribution in the northern part of China, principally in Jilin Province (InV = 10.1, P < 0.05).
From the above analysis, we find that the SOM technique is a simple way to illustrate the associations of the studied objects through reducing the data dimensions. Its advantage is its ability to group objects at high speed compared to conventional cluster analysis. Thus, the SOM method is suitable for dealing with large data sets.
In our study, we not only ascertained functional groups of invasive species on the basis of their physiological traits, but also tried to understand the geographic patterns of these groups. By implementing the INDVAL method, we can better understand the distributional biases of the functional groups of China's invasive plants. As far as we know, this represents the first attempt to combine SOM and INDVAL in application to distribution patterns of invasive species. In further studies including native species, the distributional overlapping and associated resource partitioning (Lambdon et al., 2007) between functional groups of exotic and native species could be compared using the methods introduced here.
Fig. 1. Highlighted provinces of China are those that contained more than 50 invasive plants.
Fig. 2. Basal grouping of invasive plants based on similarities of trait characteristics. The species numbers correspond to the Latin names in Table 1.
Fig. 3. The final six functional groups determined by hierarchical cluster analysis. a) The dendrogram generated by cluster analysis with the optimal cluster line using a bootstrapping technique. b) The corresponding SOM map (species occupying each cell can be identified in Fig. 2).
Table 1. Latin names of the 127 invasive species of China and the associated numbers assigned to them for purposes of analysis.
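For completeness, the IndVal statistic of Dufrene and Legendre (1997) used above combines specificity and fidelity, and its significance can be assessed by permuting group labels. The sketch below is a generic re-implementation on a hypothetical abundance matrix; it stands in for, and may differ in detail from, the INDVAL for PC package used in this study.

```python
import numpy as np

def indval(abund, groups):
    """IndVal of Dufrene & Legendre (1997): specificity x fidelity x 100.
    abund: sites x species matrix (abundance or presence); groups: labels."""
    labels = np.unique(groups)
    group_means = np.array([abund[groups == g].mean(axis=0) for g in labels])
    total = group_means.sum(axis=0)
    iv = np.zeros_like(group_means)
    for k, g in enumerate(labels):
        specificity = np.divide(group_means[k], total,
                                out=np.zeros_like(total), where=total > 0)
        fidelity = (abund[groups == g] > 0).mean(axis=0)
        iv[k] = 100.0 * specificity * fidelity
    return labels, iv

def perm_test(abund, groups, n_perm=999, seed=0):
    """Monte Carlo significance: shuffle labels, compare max IndVal."""
    rng = np.random.default_rng(seed)
    _, obs = indval(abund, groups)
    obs_max = obs.max(axis=0)
    exceed = np.zeros(abund.shape[1])
    for _ in range(n_perm):
        _, rnd = indval(abund, rng.permutation(groups))
        exceed += rnd.max(axis=0) >= obs_max
    return (exceed + 1) / (n_perm + 1)   # permutation p-values
```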
2018-11-11T00:48:57.734Z
2009-01-01T00:00:00.000
{ "year": 2009, "sha1": "7b9bfd78593bb589995380785226c51d7a935cb7", "oa_license": "CCBY", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0354-46640901071W", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7b9bfd78593bb589995380785226c51d7a935cb7", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology" ] }
12301053
pes2o/s2orc
v3-fos-license
Optimal phase response curves for stochastic synchronization of limit-cycle oscillators by common Poisson noise We consider optimization of phase response curves for stochastic synchronization of non-interacting limit-cycle oscillators by common Poisson impulsive signals. The optimal functional shape for sufficiently weak signals is sinusoidal, but can differ for stronger signals. By solving the Euler-Lagrange equation associated with the minimization of the Lyapunov exponent characterizing synchronization efficiency, the optimal phase response curve is obtained. We show that the optimal shape mutates from a sinusoid to a sawtooth as the constraint on its squared amplitude is varied. Efficiency of the stochastic synchronization is usually quantified by the Lyapunov exponent averaged over noise, which measures the mean exponential growth (or decay) rate of small differences between the oscillator states subjected to the common noise. For a limit-cycle oscillator undergoing regular periodic oscillations, the Lyapunov exponent can be calculated from the phase response curve (PRC) [18-22], which is a fundamental quantity that characterizes the oscillator dynamics and which has been measured experimentally in many rhythmic elements [23-27]. It can be shown that for limit-cycle oscillators driven by weak Gaussian or Poisson noise, the Lyapunov exponent is always negative irrespective of the precise shape of the PRC, ensuring that synchronization always takes place [10-13]. What PRC shape yields the best synchronization? For weak Gaussian driving noise, Abouzeid and Ermentrout [28] obtained the optimal functional shape of the PRC by minimizing the Lyapunov exponent with constraints on its amplitude and smoothness, which was nearly sinusoidal with a pair of positive and negative lobes (called Type-II, a normal form of the PRC near the Hopf bifurcation [21]). However, the optimal functional shapes may differ for other driving signals. Here we consider Poisson random impulsive signals with low frequency, which can also induce synchronization of limit cycles [10-13]. When the intensity of the impulse is weak, the linear Gaussian approximation holds and the optimal PRC can be shown to be sinusoidal, but for stronger impulses, the optimal solution may take different shapes. By using the shooting method [29] to numerically solve the Euler-Lagrange equation [30] associated with the minimization of the Lyapunov exponent, we show that the optimal PRC gradually deviates from the sinusoid and approaches a sawtooth as the constraint on its squared amplitude is varied. Correspondingly, the Lyapunov exponent becomes more negative and tends to diverge. Our result implies the importance of nonlinearity in the phase response and may provide insights into real-world oscillators such as spiking neurons. This article is organized as follows: In Sec. II, some basic facts on synchronization of limit cycles by common Poisson noise are presented. In Sec. III, we solve the optimization problem and show the gradual transition of the optimal solution between sinusoidal and sawtoothed shapes. Section IV summarizes the article with discussions on possible relevance of the results to phase response curves of spiking neurons. The appendix gives details on wavenumber, symmetry, and phase-plane behavior of the optimal solutions. We also show the optimal PRCs for stochastic desynchronization.
A. Poisson-driven oscillators

A pair of non-interacting identical limit-cycle oscillators driven by common Poisson noise can be described by the following phase model [10,11]:

θ̇1,2(t) = ω + Σ_{k=1}^{N(t)} G(θ1,2(t), ck) δ(t − tk),  (1)

under the assumption that the inter-impulse intervals are sufficiently large such that the oscillator orbit perturbed by an impulse relaxes back to the original limit cycle before receiving the next impulse. Here, θ1,2 ∈ [0, 1) are phase variables of the oscillators, ω is their natural frequency, N(t) is a Poisson process of rate λ, {t1, t2, ···} are arrival times of the Poisson impulses, {c1, c2, ...} are intensities of the impulses (including negative values representing opposite directions) independently drawn from an identical probability density function (PDF) P(c), and G(θ, c) is the PRC of the oscillators. The PRC G(θ, c) quantifies the asymptotic phase difference of the orbit that is perturbed at phase θ by an impulse of intensity c from the unperturbed orbit [19]. We assume that the PRC G(θ, c) is a sufficiently smooth function with continuous derivatives G′(θ, c) = ∂G(θ, c)/∂θ, G″(θ, c) = ∂²G(θ, c)/∂θ², ···, all of which are periodic in θ, i.e., G(θ + 1, c) = G(θ, c). Equation (1) is stochastic and should be interpreted in the Ito sense [31]. Namely, on arrival of an impulse at phase θ, the phase discontinuously jumps from θ to θ + G(θ, c) [11].

B. Lyapunov exponent

In Refs. [10,11], the phase equation (1) is derived from general limit-cycle models by the phase reduction method [18-20]. The Lyapunov exponent Λ, which quantifies the exponential growth rate of small phase differences between the oscillators Δθ(t) = θ1(t) − θ2(t), is given in terms of the PRC as

Λ = λ ∫₀¹ dθ P(θ) ∫ dc P(c) ln|1 + G′(θ, c)|,

where P(θ) is a stationary PDF of the phase θ given by a stationary solution of the Frobenius-Perron equation corresponding to Eq. (1) [10-12]. The phase difference |Δθ(t)| grows as |Δθ(t)| ≃ |Δθ(0)| exp(Λt) when it is small, so that the two oscillators tend to synchronize if the Lyapunov exponent Λ is negative. We assume that the impulses are sparse, i.e., the Poisson rate λ is small. It can then be shown that the stationary PDF of the phase θ can be approximated as P(θ) = 1 + O(λ/ω), so that we may put P(θ) = 1 when λ is small enough. Thus, the Lyapunov exponent is approximately given by [10,11]

Λ ≃ λ ∫₀¹ dθ ∫ dc P(c) ln|1 + G′(θ, c)|.  (3)

Moreover, for a sufficiently smooth PRC satisfying G′(θ, c) > −1, Eq. (3) can be bounded from above as

Λ ≤ λ ∫₀¹ dθ ∫ dc P(c) G′(θ, c) = 0,

by using the inequality ln(1 + x) ≤ x and the periodicity of the PRC, so that Λ is always negative (equality holds only for non-physical constant PRCs). Thus, the two oscillators subjected to weak common Poisson noise always tend to synchronize. Hereafter, we try to find the optimal PRC that gives the most negative Lyapunov exponent. Note that when λ is not sufficiently small, we may consider perturbation expansion of the stationary PDF from the uniform distribution like P(θ) = 1 + (λ/ω)P1(θ) + (λ/ω)²P2(θ) + ··· to calculate higher-order corrections for the Lyapunov exponent, as performed in [10,11,28]. For simplicity, we focus only on the case with sufficiently small λ in the present study.

C. Linear Gaussian approximation

When the derivative of the PRC G′(θ, c) is sufficiently small, we may expand Eq. (3) as

Λ ≃ λ ∫₀¹ dθ ∫ dc P(c) [G′(θ, c) − G′(θ, c)²/2] = −(λ/2) ∫₀¹ dθ ∫ dc P(c) G′(θ, c)²,

where we used the periodicity of the PRC.
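As a quick illustration of Eq. (3), the following sketch (my own, with an assumed Type-II PRC and assumed parameter values) evaluates the low-rate Lyapunov exponent numerically for a single impulse intensity.

```python
# Numerical evaluation of the low-rate Lyapunov exponent formula,
# Λ ≈ λ ∫₀¹ ln|1 + G'(θ)| dθ  (single impulse intensity, P(θ) = 1).
import numpy as np

def lyapunov_exponent(G, lam=1.0, n=100_000):
    theta = np.linspace(0.0, 1.0, n, endpoint=False)
    d = 1.0 / n
    Gp = (G(theta + d) - G(theta)) / d        # finite-difference G'(θ)
    return lam * np.mean(np.log(np.abs(1.0 + Gp)))

# Illustrative sinusoidal (Type-II) PRC with a modest amplitude.
G_sin = lambda th: 0.1 * np.sin(2.0 * np.pi * th)
print(lyapunov_exponent(G_sin))  # negative, as the bound above guarantees
```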
Also, if the impulse intensity c is sufficiently weak, the PRC G(θ, c) can be linearly approximated by using the phase sensitivity function Z(θ), which gives the linear response coefficient of the phase to infinitesimal perturbations [18-20], as G(θ, c) ≃ cZ(θ), so that Λ can be approximated as

Λ ≃ −(λ⟨c²⟩/2) ∫₀¹ Z′(θ)² dθ,  (8)

where ⟨c²⟩ = ∫ P(c) c² dc. This expression coincides with the Lyapunov exponent of the phase oscillators driven by weak common Gaussian-white noise [6-8]. The same result can also be derived more rigorously as the diffusion limit of the Poisson noise in which the impulses tend to be weak (c → 0) and frequent (λ → ∞) while keeping λ⟨c²⟩ constant and small [11]. In this diffusion limit, P(θ) can be approximated as P(θ) = 1 + O(λ⟨c²⟩/ω) [28]. Therefore, fixing the average impulse intensity small such that λ⟨c²⟩ ≪ ω is satisfied, P(θ) ≃ 1 holds even for high-frequency impulses with large λ.

A. Euler-Lagrange equation

The Lyapunov exponent Λ is a functional of the PRC G(θ, c) or the phase sensitivity function Z(θ) as given in Eq. (3) or Eq. (8). We try to obtain the optimal shape of G(θ, c) or Z(θ) for synchronization by minimizing Λ with appropriate constraints. Let us omit the dependence of the PRC G(θ, c) on c for the moment. We try to find the minimum of the action [30]

S[G] = Λ[G] + μ J[G] + ν K[G] = ∫₀¹ L(G(θ), G′(θ), G″(θ)) dθ,

where Λ[G] is the Lyapunov exponent, J[G] and K[G] are two independent constraints on the PRC and its derivatives (we consider up to the 2nd order), μ and ν are Lagrange multipliers, and L(G(θ), G′(θ), G″(θ)) is a Lagrangian. The corresponding Euler-Lagrange equation is given by

∂L/∂G − (d/dθ)(∂L/∂G′) + (d²/dθ²)(∂L/∂G″) = 0,

where the periodicity of the PRC and the derivative, G(θ + 1) = G(θ) and G′(θ + 1) = G′(θ), are used to eliminate the surface terms. When we consider optimization of Eq. (8), the PRC G(θ) in the above equations is replaced by the phase sensitivity function Z(θ).

B. Linear Gaussian approximation

We here briefly explain the optimal Z(θ) under linear approximation. See Abouzeid and Ermentrout [28] for a detailed analysis with various constraints. From Eq. (8), the Lyapunov exponent of the oscillator in this case is given by

Λ0[Z] = −(D/2) ∫₀¹ Z′(θ)² dθ,

where D = λ⟨c²⟩ corresponds to the intensity or variance of the driving noise. We calculate the optimal shape of Z(θ) by minimizing Λ0[Z] under the following constraints:

∫₀¹ Z(θ)² dθ = B0,  (12)
∫₀¹ Z″(θ)² dθ = C0.  (13)

The first constraint Eq. (12) fixes the squared amplitude of Z(θ) to be B0, which excludes the possibility of non-physical divergent Z(θ) yielding arbitrarily negative Lyapunov exponents. The second constraint Eq. (13) with parameter C0 restricts the overall smoothness of Z(θ). In most realistic finite-dimensional limit-cycle oscillators, the first Fourier mode dominates the phase sensitivity function Z(θ), reflecting the circular geometry of the limit cycle orbit in the phase space. We thus introduce the constraint Eq. (13) to avoid rapid oscillations and choose physically natural PRCs, similarly to Ref. [28]. Introducing Lagrange multipliers μ0 and ν0, the action to be minimized is given by

S0[Z] = Λ0[Z] + μ0(∫₀¹ Z(θ)² dθ − B0) + ν0(∫₀¹ Z″(θ)² dθ − C0).

The Euler-Lagrange equation determining the optimal Z(θ) is given by

ν0 Z⁽⁴⁾(θ) + (D/2) Z″(θ) + μ0 Z(θ) = 0,  (15)

where Z⁽⁴⁾ denotes the 4th derivative of Z. When μ0 > 0 and ν0 > 0, we obtain a general solution that satisfies the periodic boundary condition Z(θ) = Z(θ + 1) as

Z(θ) = α sin(kθ + β),

where α and β are constants. Due to the periodicity Z(θ) = Z(θ + 1), the coefficient of θ should be quantized as

k = 2πn,  (17)

where n is an integer number. The constant α is determined from the first constraint Eq. (12) as ∫₀¹ Z(θ)² dθ = α²/2 = B0, namely, α = √(2B0). The constant β is determined from the boundary conditions for Z(θ).
Without losing generality, we can assume that Z(0) = 0 and Z′(0) > 0, which yields β = 0. The second constraint Eq. (13) gives

∫₀¹ Z″(θ)² dθ = (2πn)⁴ B0 = C0.  (19)

Equations (17) and (19) give the relation between the Lagrange multipliers (μ0, ν0) and the parameters (B0, C0). In the following, we will control the Lagrange multipliers to find optimal solutions with given squared amplitude and overall smoothness. The optimal phase sensitivity function is thus given by

Z(θ) = √(2B0) sin(2πnθ),  (20)

which is always sinusoidal regardless of the constraint parameters. The corresponding Lyapunov exponent is obtained from Eq. (8) as Λ0 = −2π²n²DB0, which decreases with the wavenumber n without bounds. Namely, rapidly oscillating Z(θ) can yield very small Λ0 if the constraint on the smoothness of Z(θ) does not exist. The second constraint Eq. (13) restricts the range of the wavenumber n. In particular, when ν0 is sufficiently large, only small n is allowed (see Appendix). To obtain realistic PRCs, we thus set the parameter ν0 > 0 large enough and focus on Z(θ) with n = 1 as well as the corresponding G(θ); namely, we look for the optimal PRC having only a single pair of positive and negative lobes (Type-II) that oscillates only once in θ ∈ [0, 1) and crosses the θ-axis exactly twice, which is typical of realistic limit-cycle oscillators.

C. Poisson impulses

What is the optimal shape of the PRC when the oscillators are driven by common Poisson noise? As we saw, when the applied impulse is sufficiently weak and the amplitude of the PRC is small enough, linear Gaussian approximation holds and the optimal PRC is sinusoidal. But linear approximation may not be valid when the impulse intensity is increased [10]. On the other hand, if no constraint is imposed on the PRC, an obvious optimal solution is a sawtooth, consisting of a straight line of slope −1 and a sharp jump to satisfy the periodic boundary conditions. The corresponding Lyapunov exponent diverges to −∞, because a single impulse can already synchronize the oscillators by instantaneously resetting their phases to the same value. However, if the impulse is not sufficiently strong to kick the oscillator, such a simple solution is impossible. How does the optimal PRC behave in between the two limiting situations? In the following, we focus on two simple cases in which the oscillators are driven by (i) excitatory impulses with a constant intensity (all impulses take the same intensity c), and (ii) both excitatory and inhibitory impulses (the impulses take either c = a or c = −a with equal probability). We examine how the optimal PRC deviates from the sinusoid and eventually approaches the trivial sawtooth shape as the constraint on the squared amplitude of the PRC is increased.

Excitatory impulses

We assume that the impulse intensity c always takes the same value and simply denote the PRC corresponding to this value as G(θ). The Lyapunov exponent is given by

Λ1[G] = λ ∫₀¹ ln|1 + G′(θ)| dθ.

We minimize Λ1[G] under the constraints on the squared amplitude and overall smoothness of G,

∫₀¹ G(θ)² dθ = B,  (23)
∫₀¹ G″(θ)² dθ = C,  (24)

and examine the dependence of the optimal PRC on the parameter B that determines the squared amplitude, while fixing C small enough (actually taking the value of ν appropriately large) such that the PRC keeps a given level of smoothness. Introducing Lagrange multipliers μ and ν, the action to be minimized is given as

S1[G] = Λ1[G] + μ(∫₀¹ G(θ)² dθ − B) + ν(∫₀¹ G″(θ)² dθ − C).

The optimal PRC G(θ) is determined by the Euler-Lagrange equation, which gives

ν G⁽⁴⁾(θ) + (λ/2) G″(θ)/(1 + G′(θ))² + μ G(θ) = 0,  (27)

where G⁽⁴⁾ denotes the fourth derivative of G. If the squared amplitude of the PRC B is sufficiently small, linear approximation for the PRC should hold, i.e., G(θ) = ǫZ(θ), where ǫ is a small constant.
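The shooting procedure described below can be sketched numerically as follows. This is my own minimal illustration, assuming the explicit form of Eq. (27) reconstructed above; the parameter values and initial guess are arbitrary, and convergence to the non-trivial periodic solution is not guaranteed for every (μ, ν).

```python
# Shooting method for the Euler-Lagrange equation
#   ν G'''' + (λ/2) G'' / (1 + G')² + μ G = 0,   θ ∈ [0, 1),
# with periodic boundary conditions. The phase shift is removed by
# fixing G(0) = 0; we then solve for (G'(0), G''(0), G'''(0)).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

lam, mu, nu = 1.0, 10.0, 1e-5            # illustrative multipliers

def rhs(theta, y):
    G, G1, G2, G3 = y
    G4 = -(0.5 * lam * G2 / (1.0 + G1) ** 2 + mu * G) / nu
    return [G1, G2, G3, G4]

def residual(p):
    y0 = np.array([0.0, p[0], p[1], p[2]])           # G(0) = 0 fixes the phase
    sol = solve_ivp(rhs, (0.0, 1.0), y0, method="Radau", rtol=1e-9, atol=1e-11)
    return sol.y[:3, -1] - y0[:3]                    # periodicity of G, G', G''

# Start near a small sinusoid G ≈ A sin(2πθ) with A = 0.05.
A = 0.05
p0 = fsolve(residual, [A * 2 * np.pi, 0.0, -A * (2 * np.pi) ** 3])
print("initial derivatives of a periodic solution:", p0)
```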
The constraints Eqs. (23) and (24) become equivalent to Eqs. (12) and (13) under the linear approximation by rescaling the multipliers as μ = μ0/ǫ² and ν = ν0/ǫ². Substituting these into Eq. (27), we obtain

ν0 Z⁽⁴⁾(θ) + (λǫ²/2) Z″(θ)/(1 + ǫZ′(θ))² + μ0 Z(θ) = 0,

and taking the ǫ → 0 limit with D = λǫ² fixed, we obtain the Euler-Lagrange equation (15) for weak Gaussian noise, which thus yields sinusoidal Z(θ) and G(θ) as the optimal solution. On the other hand, if we ignore the constraint Eq. (23), G(θ) = −θ + const. is a trivial solution to Eq. (27), which gives a sawtooth. Thus, when the squared amplitude of G(θ) is controlled, mutation of the optimal PRC between the two limiting shapes is expected. To confirm this, we numerically calculate a family of solutions to Eq. (27) using the shooting method [29]. Namely, we numerically integrate Eq. (27) by the Runge-Kutta method with adaptive time grids from θ = 0 to θ = 1 and find appropriate initial conditions G(0), G′(0), G″(0) and G‴(0) satisfying the periodic boundary conditions at θ = 0 and θ = 1. We vary the Lagrange multiplier μ > 0, obtain the corresponding optimal PRC, and check that its squared amplitude equals the constraint B. Solutions to Eq. (27) exist also for μ < 0, but they maximize the Lyapunov exponent rather than minimize it, and thus are optimal not for synchronization but for desynchronization (see Appendix). It can be shown that large values of ν lead to small wavenumber (long wavelength) solutions (see Appendix). We fix the multiplier ν at ν = 10⁻⁵, which is large enough, to choose non-trivial solutions that cross the θ-axis exactly twice in [0, 1), corresponding to the n = 1 case in Eq. (20). No periodic solutions are found when ν < 0. Properties of the optimal solution can be well understood by approximate phase-plane analysis as explained in the Appendix. As expected, we see that the optimal PRC (Fig. 1(a)) is almost sinusoidal when the parameter B is small. As B is increased, the optimal PRC gradually deviates from the sinusoid and approaches a symmetric sawtooth limit (which gives B = 1/12). Correspondingly, the Lyapunov exponent Λ1 plotted in Fig. 1(b) becomes more negative and tends to diverge, and its inverse τ1 = −1/Λ1, which gives the characteristic time for the stochastic synchronization, gradually decreases to zero as shown in Fig. 1(c). The optimality of the obtained PRC can be clearly demonstrated by numerical simulations. Figure 2 shows realizations of the stochastic synchronization processes with the optimal and suboptimal (sinusoidal) PRCs. We see that the stochastic synchronization occurs much faster when the optimal PRC is used.

Excitatory and inhibitory impulses

We next consider the case that the intensity of the impulses takes two values ±a with equal probability, namely,

P(c) = [δ(c − a) + δ(c + a)]/2.

For simplicity, we seek symmetric PRCs that satisfy G(θ, −a) = −G(θ, a). This condition should always be satisfied if a is sufficiently small, because the PRC can be linearly approximated as G(θ, c) = cZ(θ). The existence of the diffusion limit is also ensured with this condition [11]. Note that, for stronger impulses, the PRC generally becomes asymmetric and does not satisfy the above condition. We here focus only on the symmetric case for simplicity. The Lyapunov exponent is then given by

Λ2[G] = (λ/2) ∫₀¹ [ln|1 + G′(θ)| + ln|1 − G′(θ)|] dθ,

with the abbreviation G(θ) = G(θ, a). We minimize Λ2[G] under the constraints (23) and (24). Introducing Lagrange multipliers μ and ν, we obtain the action

S2[G] = Λ2[G] + μ(∫₀¹ G(θ)² dθ − B) + ν(∫₀¹ G″(θ)² dθ − C)

and the associated Euler-Lagrange equation

ν G⁽⁴⁾(θ) + (λ/4) G″(θ)[1/(1 + G′(θ))² + 1/(1 − G′(θ))²] + μ G(θ) = 0.  (32)

If the squared amplitude B of the PRC is sufficiently small, we can rewrite Eq.
(32) using the linear approximation of the PRC with rescaled multipliers, G(θ) = G(θ, a) = aZ(θ), μ = μ0/a² and ν = ν0/a², as

ν0 Z⁽⁴⁾(θ) + (λa²/4) Z″(θ)[1/(1 + aZ′(θ))² + 1/(1 − aZ′(θ))²] + μ0 Z(θ) = 0.

Taking the diffusion limit, i.e., a → 0 and λ → ∞ with D = λa² fixed, the Euler-Lagrange equation (15) under the linear Gaussian approximation is derived. Therefore, we obtain a sinusoidal Z(θ) and hence G(θ) as the optimal solution for small B. On the other hand, if we ignore the constraint, Eq. (32) has the obvious solution G(θ) = −θ as before. In the present case, additionally, G(θ) = θ is also an optimal solution because G(θ, −a) = −G(θ, a). Using the numerical shooting method, we obtain a family of optimal solutions to Eq. (32) as plotted in Fig. 3(a). As in the previous case, the multiplier ν is fixed at 10⁻⁵, which is large enough to yield smooth PRCs. Unlike the previous case, no solution with period 1 exists when μ < 0. As the parameter B increases, the optimal PRC gradually deviates from the sinusoid. In this case, the PRC approaches a double sawtooth, in contrast to the single sawtooth that we obtained previously, reflecting the symmetry assumption. The Lyapunov exponent Λ2 becomes more negative and tends to diverge, and the characteristic synchronization time τ2 decreases to zero as shown in Figs. 3(b) and (c). The optimality can be demonstrated by numerical simulation as shown in Fig. 4.

IV. DISCUSSION

We considered the optimization problem of the PRC for synchronization of limit-cycle oscillators by common Poisson noise and observed a crossover of the optimal PRC from a sinusoid to a sawtooth by increasing its squared amplitude. Now we take some time to stress the importance of considering nonlinear PRCs. The phase sensitivity function Z(θ) quantifies the linear response property of the oscillator phase to infinitesimal perturbations, which is determined by the local phase-space structure of the oscillator near the limit-cycle orbit [18-20]. In contrast, the PRC can reflect nonlinear dynamics of the oscillator away from the limit-cycle orbit by finite distances, providing more detailed information. Also, in many experiments, applied perturbations to the oscillator are not always sufficiently small and nonlinear effects can become important. In the present study, we considered only two simple types of driving impulses, i.e., (i) excitatory and (ii) excitatory and inhibitory impulses, and also assumed symmetry of the PRCs in the latter case. More general types of driving impulses and asymmetric PRCs can be considered within the same framework, though they are beyond the scope of the present study. For example, it would be interesting to seek the optimal family of PRCs for a given distribution of the impulse intensity c by making additional assumptions on the c-dependence of the PRC G(θ, c). Are there examples of the optimal PRC in nature? In neurophysiology, the PRCs of periodically spiking cells have been recorded in many experiments [24-27]. For example, Tateno and Robinson [25] calculated the PRCs of periodically spiking interneurons from monkey somatosensory cortex and examined their dependence on the intensity of applied perturbations. As the intensity increases, the PRC changes its shape from sinusoidal to sawtoothed (Figs. 4 and 5 in Ref. [25]). This dependence of the PRC on the applied signal intensity resembles the gradual transition that we obtained in Fig. 1. The authors also found that the sawtooth-like PRCs lead to faster synchronization of the neurons [26].
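A direct simulation of the synchronization process, in the spirit of the comparison reported in Figs. 2 and 4, can be sketched as follows. This is my own toy implementation; the PRC, Poisson rate, and time horizon are assumed values.

```python
# Two identical phase oscillators driven by the same Poisson impulse
# train: between impulses the phases drift at rate ω; at each impulse
# both phases jump by G(θ) (Ito-type interpretation of the model).
import numpy as np

rng = np.random.default_rng(1)
omega, lam, T = 1.0, 0.5, 200.0                 # frequency, Poisson rate, horizon
G = lambda th: 0.1 * np.sin(2.0 * np.pi * th)   # illustrative Type-II PRC

theta = np.array([0.10, 0.35])                  # two oscillators, distinct phases
t = 0.0
while t < T:
    dt = rng.exponential(1.0 / lam)             # time to the next common impulse
    t += dt
    theta = (theta + omega * dt) % 1.0          # free drift between impulses
    theta = (theta + G(theta)) % 1.0            # common impulse resets both phases

gap = abs(theta[0] - theta[1])
print("final phase difference:", min(gap, 1.0 - gap))  # shrinks from the initial 0.25
```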
Nesse and Clark [27] calculated the PRC of photoreceptor cells from the marine invertebrate Hermissenda and revealed a noticeable linear dependence of the PRC on θ (Figure 5 in Ref. [27]). The authors suggested that the reset effect of such a PRC may be helpful for network information processing. Stochastic synchrony can be a mechanism for long-range synchronization of gamma oscillations in the cortex [2]. Because the thalamus is at the center of the brain and communicates with all cortical regions, it is a good candidate to provide common input to areas of the cortex that are far apart and not directly connected. This shared thalamic drive represents a straightforward mechanism to mediate synchrony between these areas, and the PRC of neurons in cortex receiving thalamic input could be optimized for this purpose. Moreover, some synaptic inputs are much stronger than others (the amplitude of postsynaptic potentials spans over a few millivolts). Thus, it might actually be more appropriate to consider finite-intensity impulses than weak Gaussian noise as the driving signal to the neurons. The optimization viewpoint may give interesting insights into the understanding of biological systems, because they evolved to perform certain biological functions efficiently. If the stochastic synchronization mechanism is used in some biological systems, their PRC may be optimized to best perform synchronization. The sawtoothed PRCs that we obtained are not only optimal for the synchronization by common Poisson noise, but they are singular in the sense that they lead to instantaneous phase resetting of the oscillators. Thus, it may not be surprising if such a singular shape is actually utilized in real biological systems. This parallel between evolutionary optimization and optimization for a desired function certainly makes the interpretation of such a singular shape highly suggestive and intriguing.

A. Dependence of the optimal PRCs on the multipliers

We find that if the multiplier ν0 or ν is sufficiently large, only small wavenumber (long wavelength) solutions are allowed for Z or G. This can be proven for the Euler-Lagrange equations (15), (27) and (32).

Linear Gaussian approximation We consider a solution of the Euler-Lagrange equation (15) with wavenumber n and denote the corresponding Lagrange multipliers (μn, νn). Substitution into (15) yields

8π⁴νn n² − Dπ² + μn/(2n²) = 0,

namely, the Lagrange multipliers scale with the wavenumber n as μn ∝ n² and νn ∝ 1/n². Thus, larger μ and smaller ν lead to PRCs with larger wavenumbers. We set ν sufficiently large to obtain the n = 1 solution in the main text.

Poisson impulses Rescaling the phase variable as θ → nθ, the Euler-Lagrange equation (27) is transformed accordingly. Defining a rescaled PRC Gn(θ) = G(nθ)/n, the transformed equation can be cast into the same form as Eq. (27), where rescaled Lagrange multipliers μn = n²μ and νn = ν/n² are introduced. Thus, if G(θ) is a solution of Eq. (27) with multipliers μ and ν, its rescaled function Gn(θ) is also a solution of Eq. (27) with rescaled multipliers μn and νn (n = 1, 2, ···). This implies that larger wavenumber solutions (n > 1) correspond to larger μ and smaller ν. As shown in Fig. 1, the multiplier μ controls the squared amplitude B and determines the shape of the periodic solutions. Thus, ν determines the wavenumber of the solution, which we take sufficiently large (ν = 10⁻⁵) to obtain the PRC corresponding to n = 1. Similarly, rescaling Eq. (32), we find μn = n²μ and νn = ν/n².
Thus, if ν is sufficiently large, the PRC takes the smallest wavenumber n = 1.

C. Phase-plane analysis

As we explained, we fix the multiplier ν large (but still much smaller than unity, ν = 10⁻⁵ ≪ 1) to obtain physically realistic PRCs. Here, to gain insights into how the shapes of the optimal PRCs are determined, we set ν = 0 and ignore the 4th-order derivatives in the Euler-Lagrange equations, which does not affect the solutions qualitatively. With this approximation, the dependence of the optimal solution on the constraint B or on the Lagrange multiplier μ can be clarified by a simple phase-plane analysis.

Excitatory impulses We set ν = 0 to approximate Eq. (27) as (λ/2) G″(θ)/(1 + G′(θ))² + μ G(θ) = 0, and rewrite this equation as the two-dimensional system

G′(θ) = H(θ),  H′(θ) = −(2μ/λ) G(θ) (1 + H(θ))².  (41)

We examine the orbit of this two-dimensional dynamical system as a function of θ ∈ [0, 1) on the G-H plane with a periodic boundary condition G(0.5) = G(−0.5) and H(0.5) = H(−0.5). Let us assume μ > 0 first. Figure 5(a) shows an example of the vector field at μ = 10. The horizontal line H = −1 is a separatrix corresponding to the sawtooth solution G′(θ) = −1. All orbits starting from H > −1 are closed, implying the existence of a conserved quantity. Applying Noether's theorem [30] to the Lagrangian in Eq. (27), we find that the quantity

E = λ[H/(1 + H) − ln(1 + H)] − μG²

is actually conserved along the flow generated by Eq. (41), reflecting the translational symmetry of the Lagrangian with respect to phase, namely, that the Lagrangian does not depend on θ explicitly. A solution possessing period 1 is chosen from this family of closed orbits by the shooting method. The solid loop in Fig. 5(a) shows such a periodic solution, and the solid curve in Fig. 5(b) is the corresponding optimal PRC. No orbit starting from H < −1 can form a closed loop, because the vector field points to the upper-right and lower-right in the third and fourth quadrants, respectively. Thus, in this region, the orbits have to jump from G(θ) = −0.5 to 0.5 as shown by a broken curve in Fig. 5(a). However, the periods of such orbits are always less than 1 and therefore solutions with G′(θ) < −1 do not exist. It can be seen from Eq. (41) that the Lagrange multiplier μ determines the time scale of the dynamics in the vertical H direction. As μ increases, the vertical dynamics becomes faster, so that the orbit is more strongly attracted to the separatrix H = −1 and tends to move along it, as shown in Fig. 5(c). Correspondingly, the optimal PRC approaches the sawtooth as shown in Fig. 5(d). Note that the separatrix G′ = H = −1 persists even if ν > 0. This can be confirmed by taking the limit G′ → −1 in Eq. (27), which gives G″ → 0. Thus, the sawtooth limit also persists in the original system. When μ < 0, we obtained the optimal PRCs for desynchronization as summarized in Appendix D.

Excitatory and inhibitory impulses The same analysis can be applied to the case with both excitatory and inhibitory impulses. As shown in Fig. 6, when μ > 0, horizontal lines H = ±1 are the separatrices. Orbits starting from |H| < 1 always form closed loops, while those starting from |H| > 1 cannot form a period-1 solution. The conserved quantity in this case is given by

E = −λH²/(1 − H²) − (λ/2) ln|1 − H²| − μG².

Increasing the multiplier μ, the optimal solution gradually expands and changes its shape from a circle to a rectangle. The corresponding PRC deviates from a sinusoid and approaches a double sawtooth. As before, the separatrices persist even if ν > 0. No orbit with period 1 was found when μ < 0.
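The closed orbits and the conserved quantity can be checked numerically with a short script. This sketch assumes the ν = 0 reduction and the first integral written above, with illustrative parameter values.

```python
# Integrate the reduced phase-plane system G' = H, H' = -(2μ/λ) G (1+H)²
# and verify that E = λ[H/(1+H) - ln(1+H)] - μG² stays constant.
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 1.0, 10.0

def plane(theta, y):
    G, H = y
    return [H, -(2.0 * mu / lam) * G * (1.0 + H) ** 2]

def energy(G, H):
    # First integral following from the θ-independence of the Lagrangian.
    return lam * (H / (1.0 + H) - np.log(1.0 + H)) - mu * G**2

sol = solve_ivp(plane, (0.0, 1.0), [0.0, 0.3], dense_output=True, rtol=1e-10)
for t in np.linspace(0.0, 1.0, 5):
    G, H = sol.sol(t)
    print(f"theta={t:.2f}  G={G:+.4f}  H={H:+.4f}  E={energy(G, H):+.6f}")
```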
D. Optimal PRCs for stochastic desynchronization

The Euler-Lagrange equation gives the solutions that yield the extrema of the action, namely, the minimum and maximum of the Lyapunov exponent Λ under the given constraints. In the case of excitatory impulses, we can vary the Lagrange multiplier μ controlling the squared amplitude of the PRC in the negative range, μ < 0, while keeping the other Lagrange multiplier ν the same as in the main text, ν = 10⁻⁵, to obtain the PRC that maximizes the Lyapunov exponent. The corresponding Lyapunov exponent is positive, indicating that the PRC is optimal for stochastic desynchronization [11]. As shown in Fig. 7(a), this optimal PRC has a sharp cusp at θ = 0 when μ is sufficiently negative, and gradually approaches a sawtooth as μ increases. Examples of the optimal PRC and the corresponding phase-plane orbit are plotted in Figs. 7(b) and 7(c). It is interesting to note that the PRC plotted in Fig. 5(b) or (d) is "type 1" while the PRC in Fig. 7(a) is "type 0" in Winfree's classification [19,33]; the type 1 PRC is continuous and is observed for moderate perturbation intensity, whereas the type 0 PRC is discontinuous and is observed when an oscillator is strongly perturbed [34]. Thus, under the present criteria, the optimal PRC for stochastic synchronization is type 1 and that for desynchronization is type 0.
2011-06-17T09:37:41.000Z
2011-06-17T00:00:00.000
{ "year": 2011, "sha1": "314020c3dd1b945c4f6b6ddc71b39baf00c9bf96", "oa_license": null, "oa_url": "http://t2r2.star.titech.ac.jp/rrws/file/CTT100685631/ATD100000413/", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "314020c3dd1b945c4f6b6ddc71b39baf00c9bf96", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Mathematics", "Physics" ] }
3796744
pes2o/s2orc
v3-fos-license
Variability in P1 gene redefines phylogenetic relationships among cassava brown streak viruses Background Cassava brown streak disease is emerging as the most important viral disease of cassava in Africa, and is consequently a threat to food security. Two distinct species of the genus Ipomovirus (family Potyviridae) cause the disease: Cassava brown streak virus (CBSV) and Ugandan cassava brown streak virus (UCBSV). To understand the evolutionary relationships among the viruses, 64 nucleotide sequences from the variable P1 gene from major cassava producing areas of east and central-southern Africa were determined. Methods We sequenced an amplicon of the P1 region of 31 isolates from Malawi and Tanzania. In addition to these, 33 previously reported sequences of virus isolates from Uganda, Kenya, Tanzania, Malawi and Mozambique were added to the analysis. Results Phylogenetic analyses revealed three major P1 clades of Cassava brown streak viruses (CBSVs): in addition to a clade of most CBSV and a clade containing all UCBSV, a novel, intermediate clade of CBSV isolates which has been tentatively called CBSV-Tanzania (CBSV-TZ). Virus isolates of the distinctive CBSV-TZ had nucleotide identities as low as 63.2 and 63.7% with other members of CBSV and UCBSV respectively. Conclusions Grouping of P1 gene sequences indicated distinct sub-populations of CBSV, but not UCBSV. Representatives of all three clades were found in both Tanzania and Malawi. Background Cassava (Manihot esculenta Crantz, Family: Euphorbiaceae) is an important staple food crop for over 800 million people across the globe [1]. Although cassava is known to be vulnerable to at least 20 different viruses, the two most economically damaging viral diseases in Africa are cassava mosaic disease and cassava brown streak disease (CBSD). The diseases have been associated with production losses worth more than US$1 billion every year [2]. Recent developments in cassava research have shown that CBSD is emerging as the most important viral disease of cassava in Africa, and is consequently a threat to food security [1]. Two distinct species of the genus Ipomovirus (family Potyviridae), Cassava brown streak virus [3] and Ugandan cassava brown streak virus (UCBSV [4,5]), cause the disease. In this paper, both viruses are collectively called CBSVs. The characteristic symptoms of CBSVs include typical 'feathery' chlorosis and yellow patch symptoms along secondary and tertiary veins of older leaves of cassava, brown streaks on the stems, constriction in storage roots, and brown spots in the tuber visible when it is cut [6,7]. Previously, CBSD was reported only from the coastal lowlands of East Africa, but recently it has spread throughout the Great Lakes region of East and Central-Southern Africa [8-14]. Potyviridae is a family of plant viruses with single-stranded, positive-sense RNA genomes and flexuous, filamentous particles [10]. The monopartite +ssRNA genomes of the members of Potyviridae share a similar genomic organization, with levels of amino acid identity in their polyproteins ranging from 42 to 56% among different species of the same genus and from 25 to 33% among viruses from different genera [11]. However, conservation of individual mature proteins varies. P1, a serine protease that self-cleaves at its C terminus and acts as an accessory factor for genome amplification (reviewed in [12]), is the first protein of the polyprotein and the most variable in length and amino acid sequence [11].
Other roles of P1 include boosting the activity of the helper component protease (HCPro) to suppress RNA silencing [13] and enhancing the pathogenicity of heterologous plant viruses during coinfection [14,15]. The genomes of CBSVs lack HCPro, and the short P1 gene has an RNA silencing suppression function [16]. Significantly divergent P1 gene sequences of CBSV have been found, and recent studies have suggested that the P1 gene of CBSV (together with NIa, 6K2, and NIb) has evolved more rapidly compared to other genes [17]. Genetic variability is an intrinsic feature of RNA viruses because of high mutation rates resulting from the lack of proofreading activity of their RNA-dependent RNA polymerases [14,18]. RNA recombination events can additionally shape the diversity of populations of RNA viruses [15], which can lead to new phenotypes such as host range expansion [19]. Diversity among CBSV isolates was initially assessed using sequences at the conserved 3′ terminus of the RNA genome comprising the coat protein gene and parts of NIb [16], while comparative studies with complete viral genomes [5,9,20] have revealed more pronounced and distinctive features among virus isolates. In one previous study [5], sequence analysis of 7 virus isolates revealed two distinct CBSV sequence clades that were separated to the species level. Different biological features of members of these two clades provided justification for CBSVs to be assigned to two species: UCBSV and CBSV [5]. In that same study an isolate from coastal Tanzania (CBSV-Tan70, FN434473) was identified which was very similar to CBSV isolates throughout much of its genome, but with a strikingly different P1 gene which was equidistantly related to both CBSV and UCBSV isolates. As this divergent P1 region was found only in one CBSV isolate, which otherwise had biological features similar to other CBSVs, further species delineation was not possible because of the lack of similar isolates. The recent analyses of additional CBSV genome sequences from Tanzania [9] and from Uganda [16] revealed further diversity and also indicated the potential for an additional species or subspecies within the CBSVs. In the study presented here, a total of 64 P1 gene sequences of CBSV isolates from major cassava producing areas of east and central-southern Africa were analysed. We sequenced a portion of the P1 gene from 31 isolates (from Malawi and Tanzania) and analyzed them with those previously reported from Uganda, Kenya, Tanzania, Malawi and Mozambique, and present substantial evidence for the widespread occurrence of a distinct Cassava brown streak virus clade tentatively named CBSV-Tanzania (CBSV-TZ). Source of virus isolates, amplification and sequencing Cassava cuttings were collected from CBSD-symptomatic plants in Malawi and Tanzania (Table 1) during national surveys in 2013 (under the auspices of each country's agricultural research institutes). The plants were classified as having symptoms that were consistent with CBSD (feathery chlorosis along veins in leaves and brown streaks/lesions along the plant stem), or as potentially coinfected with agents causing both CBSD and CMD (mosaic, mottling, misshapen and twisted leaflets), and were taken to the Leibniz Institute - Deutsche Sammlung von Mikroorganismen und Zellkulturen GmbH (DSMZ) Plant Virus Department, where they were maintained under greenhouse conditions.
Total RNA was extracted from the virus-infected leaves of the cassava plants using the cetyl trimethyl ammonium bromide method [21] with modifications described previously [22] or using an RNeasy Mini kit (Qiagen). Nucleic acids were quantified using a Nanodrop spectrophotometer, and about 2.0 × 10⁻⁵ μg/mL nucleic acid was used for virus detection by RT-PCR as detailed in Winter et al. [5]. A cDNA fragment, the partial sequence of the P1 gene, was amplified using virus-specific primer sets designed by Winter et al. [5]. The reactions were performed in a GeneAmp 9700 PCR thermal cycler (Applied Biosystems, Foster City, CA, USA) set with the following conditions: 42°C for 30 min for reverse transcription, followed by heat denaturation at 94°C for 5 min; and then 35 cycles of amplification comprising denaturation at 94°C for 1 min, annealing at 52°C for 1 min, and extension at 72°C for 1 min, followed by a single cycle of final extension at 72°C for 10 min. All RT-PCR products were purified using a Qiagen gel extraction kit, ligated into the pDrive U/A cloning vector (Qiagen) and subsequently electroporated into Escherichia coli DH5α cells. The clones were Sanger sequenced in both orientations. A single consensus sequence for each isolate was verified to be CBSV by blastn searches of GenBank (https://blast.ncbi.nlm.nih.gov/Blast.cgi). The resulting nucleotide sequences were submitted to GenBank (pending accession numbers, Table 1). Nucleotide similarity and putative recombination breakpoint analysis Percentage nucleotide identities were computed in Geneious Software v10.0.5 [23]. A matrix of nucleotide identities was produced using the Sequence Demarcation Tool v1 [24]. Putative recombination events were detected using nine recombination detection programs within the RDP4 package (http://darwin.uvigo.es/rdp/rdp.html): RDP, GENECONV, MaxChi, Chimaera, BootScan, SiScan, PhylPro, LARD, and 3Seq [25]. Analyses were carried out using default settings (except sequences were set to linear) and a Bonferroni-corrected P-value cut-off of 0.05. Only breakpoints supported by at least three methods were considered further [26]. Phylogenetic analysis The phylogenetic relationship among P1 regions of CBSV isolates (Table 1) was determined. The sequences were aligned using ClustalW [27] in MEGA 7 [28] and edited manually. The alignment was trimmed to give all sequences uniform length. MEGA 7 was used to construct maximum likelihood (ML) phylogenetic trees, and editing was done in FigTree v1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/). The trees were created using a GTR nucleotide substitution model, and the best tree was bootstrapped with 1000 replicates [29]. Results To examine the genetic diversity of CBSVs, field surveys and extensive sampling were performed in Malawi and Tanzania in 2013, yielding a total of 31 newly sequenced isolates (16 from Tanzania and 15 from Malawi). Thirty-three other previously published P1 sequences of CBSVs were retrieved from GenBank and aligned with these new sequences and a sister taxon, Sweet potato mild mottle virus [3]. The alignment (510 nt) was found to be free of detectable recombination. A phylogenetic tree generated from these 64 partial P1 sequences confirmed significant genetic variability among CBSVs and unambiguously resolved three clades.
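For readers who want to reproduce a pairwise identity matrix like the one shown in Fig. 2, the following is a minimal sketch using Biopython. This is not the authors' pipeline (which used Geneious and SDT), and the alignment file name is hypothetical.

```python
# Percent nucleotide identity for all pairs in a pre-aligned FASTA file,
# counting only columns where both sequences have a base (gaps excluded).
from itertools import combinations
from Bio import AlignIO

alignment = AlignIO.read("p1_alignment.fasta", "fasta")   # hypothetical file name

def percent_identity(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100.0 * sum(x == y for x, y in pairs) / max(len(pairs), 1)

for rec1, rec2 in combinations(alignment, 2):
    pid = percent_identity(str(rec1.seq).upper(), str(rec2.seq).upper())
    print(f"{rec1.id}\t{rec2.id}\t{pid:.1f}")
```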
Seven isolates, five from Tanzania (TZ-Nal:07, TZ_Mari_1_13, TZ:Kor6:08, TZ-19-1, Tan_70) and two from Malawi (MW16, MW40), formed a clade that is significantly divergent from both the other CBSV isolates (we term this clade CBSV*) and the UCBSV isolates (Fig. 1). We have tentatively named this group CBSV-Tanzania as it is more closely related to CBSV than to UCBSV isolates and contains sequences predominantly from Tanzania. The clade includes the CBSV isolate Tan_70 from coastal Tanzania which was previously reported [5]. Isolates belonging to the CBSV-TZ clade were closely related, sharing P1 gene sequences very different from those in the CBSV* and UCBSV clades (Fig. 2). P1 sequences in the CBSV-TZ clade have low sequence identity with P1 gene sequences of isolates in the CBSV* (63.2 to 70.9%) and UCBSV (62.0 to 65.4%) clades. Discussion As CBSD continues to threaten subsistence cassava production in east, central and southern Africa, there is a need to understand the dynamics of viral diversity, as this has implications for the evolution and emergence of new species or strains. This is especially critical in light of the rapid spread of the disease from the Great Lakes region of east and central-southern Africa [8,9,11-13,20]. We present here an analysis of 64 partial P1 sequences of cassava brown streak viruses from cassava growing regions of Africa where CBSVs are known to occur. Considerable variance of gene size and sequence within P1 genes of the family Potyviridae has been previously reported [14,17], indicating that P1 is an ideal region to reveal population differentiation and incipient speciation within cassava ipomoviruses. Further, whole genome analyses of CBSVs had previously identified unusual sequence diversity in P1 [5]. Our phylogenetic analysis revealed that the CBSV sequences formed three distinct clades (Figure 1). In addition to the previously characterized species UCBSV and CBSV, the novel clade which includes the Tan_70 isolate [5] presents a major sub-group of CBSV, for which we propose the tentative name CBSV-Tanzania. Another study on variation of CBSVs, based on short coat protein fragments (~230 nt), revealed a number of viruses that are intermediate between the CBSV and UCBSV species subgroupings, and consequently presented the hypothetical possibility of a novel species or sub-species associated with CBSVs [30]. Recent whole genome analyses of UCBSV isolates [9] suggested further speciation among isolates of UCBSV from Tanzania. Our results, concentrating on the analysis of the variable P1 gene and additional virus isolates from east and central-southern Africa, confirm the diversity observed in other studies and provide evidence from P1 gene analysis for the subdivision of CBSV and the presence of the clade CBSV-TZ. Our results also show that the Malawi and Tanzania viruses are more diverse than those found in Kenya, Uganda, and Mozambique (Figure 1). That Tanzania has qualitatively higher diversity of CBSVs may not just be due to increased surveillance and sampling there; while UCBSV is distributed all over Malawi, CBSV* and sub-group CBSV-TZ are localized in northern Malawi, bordering Tanzania [30]. Movement of cultivars between the two countries could help to explain the shared diversity of CBSVs, which could be due to either purely geographical reasons or unique adaptations of circulating CBSV-TZ to locally popular cassava cultivars.
While the region around Lake Malawi was where CBSD was first observed [6], the higher prevalence and wide distribution of UCBSV compared to CBSV throughout Malawi, Tanzania and surrounding countries suggest that UCBSV was likely the virus implicated in the first finding of CBSD. Comparisons of full genome sequences of Malawian CBSVs with those of CBSVs obtained from CBSD-affected areas of neighboring countries (Tanzania and Mozambique) would likely clarify questions about the evolutionary history and biogeography of the viruses in the region. Regardless, it is clear that the CBSVs do not have geographically distinct distributions as was previously hypothesized [4]. Studies by Ndunguru et al. [9] showed that a previously described CBSV Tanzanian isolate, TZ-Nal 07, had a recombination event at the 5′ end of the P1 gene. The P1 region is known to harbor obvious recombination in several potyviruses [31], which contributes to its overall variability. Although our final dataset did not statistically support recombination breakpoint(s) within P1, when diverse isolates from Kenya [32] were left out of the analysis, the isolate (TZ-Nal 07) was identified by two methods as a putative recombinant between a member of the CBSV-TZ clade (TZ:Kor6:08) and CBSVMo_83 (data not shown). This recombination event may be better supported by the full genome dataset [9], but the finding is consistent with the phylogenetic placement of TZ-Nal 07 as basal to the CBSV-TZ clade. However, we have no evidence for recombination being the origin of this well-supported subgroup, and hence the diversification of P1 in the genomes of CBSV-TZ isolates still requires further investigation. Conclusions Our in-depth look at CBSVs from Malawi and Tanzania has revealed that the divergent Tan_70 isolate is in good company, and that the CBSVs have three separable groups of diverse P1 gene sequences. Further research will establish whether the variable P1 region is an accurate bellwether for overall population divergence, and future phenotypic characterization will determine whether CBSV-TZ represents a novel strain or subspecies of CBSV.

Fig. 1 Maximum likelihood phylogenetic tree of partial P1 gene sequences of CBSV isolates (Table 1). Sequences are from Tanzania (green), Mozambique (yellow), Kenya (red), Uganda (purple) and Malawi (blue). Bootstrap values higher than 70% are shown. The scale is in substitutions/site.
Fig. 2 Pairwise identity matrix generated from CBSV partial P1 gene sequences. Each colored key represents the percentage identity score between two sequences.
2017-07-18T16:34:05.809Z
2017-06-20T00:00:00.000
{ "year": 2017, "sha1": "504a1f6f5d5485345b91fc6ef2380fb0f168cf8d", "oa_license": "CCBY", "oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/s12985-017-0790-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "89396f14a559ced097b42e3e11faf87b39372cb4", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
224874290
pes2o/s2orc
v3-fos-license
Application of Model Predictive Control for Large-Scale Inverted Siphon in Water Distribution System in the Case of Emergency Operation: The emergency control of the Menglou~Qifang inverted siphon, which is about 72 km long, is the key to the safety of the Northern Hubei Water Transfer Project. Given the complicated layout of this project, traditional emergency control methods are challenged by the fast hydraulic transient characteristics of pressurized flow. This paper describes the application of model predictive control (MPC), a popular automatic control algorithm whose strength lies in explicitly accounting for various constraints and optimizing control operations, under emergency conditions. For fast prediction of the pipe-canal combination system, a recently proposed linear model for the large-scale inverted siphon and the integrator-delay (ID) model for open canals are used. Simulation results show that the proposed MPC algorithm has promising performance in guaranteeing the safety of the project when there are sudden flow obstruction incidents of varying degrees downstream. Compared with the control groups, the peak pressure can be reduced by 17.2 m by MPC under the most critical scenario, albeit with more complicated gate operations and more water release (up to 9.75 × 10⁴ m³). Based on the linear model for the long inverted siphon, this work highlights the applicability of MPC in the emergency control of large-scale pipe-canal combination systems. Introduction The Menglou~Qifang inverted siphon is an important part of the Northern Hubei Water Transfer Project (NHWTP); it is about 72 km long and has a design flow of 38 m³/s. Such a structure is rare among water transfer projects worldwide, so there is little referable experience for its emergency control, which is vital to the safety of the NHWTP. When there is an accident, untimely or unreasonable control measures may lead to serious secondary accidents, such as pipe bursts caused by water hammer, overtopping of open canals caused by overly fast gate operations, and so on. Therefore, it is of great significance for the NHWTP to study the hydraulic response and emergency control of the ultra-long inverted siphon under accident conditions. The most conventional method for such a problem is making an emergency dispatching plan, which can be divided into three steps. The first step is to give guidelines for emergency dispatch based on the specific project and on experience [1,2]. Then, a variety of gate operation schedules are set out to study the hydraulic response [3]. Finally, the optimal group is selected to develop the emergency scheduling plan according to the simulation results [4-6]. The biggest problem with the emergency dispatching plan is that it is only applicable to typical accidents in typical canals and may consume a lot of time and computing resources. Moreover, if there is an accident condition that is not considered in the plan, the performance of emergency control may still largely depend on the operators' personal experience and proficiency. So canal automation methods are needed in emergency control [7]. Soler et al. [8] used the "Gómez, Rodellar and Soler" (GoRoSo) feedforward algorithm to compute the gate trajectories that could smoothly carry the canal from the initial to the final state by keeping the water depth constant at checkpoints in the case of an emergency involving closure of the upstream pool. Lian et al. [9] and Xu et al.
[10] proposed calculation formulas for the emergency gate closing time in sudden water pollution accidents, in order to confine the pollution within the affected channel. However, they focused only on the transfer range of pollutants, not on the safety of the canal system structures. Kong et al. [11] used a proportional-integral (PI) water level difference feedback control algorithm to prolong the continuous delivery time of pools with offtake delivery demand under a sudden upstream water interruption. Cui et al. [7] and Kong [12] each proposed a two-step control algorithm under emergency conditions to deal with the recovery characteristics of canals. In these studies, the effects of gate control on the canals and on stable water diversion in upstream reaches were also taken into account. However, the research object of emergency dispatching in these studies is the open water system, rather than the pipe-canal combination system, in which the hydraulic transition of pressurized flow is faster and more intense than that of open flow and slight operating changes may cause large differences in outcomes; these features make the conventional approach of preparing an emergency dispatching plan unsuitable. A better approach would be feasible if the pressure fluctuations in the inverted siphon could be predicted and gate actions taken in advance according to the safety restrictions. Here, model predictive control (MPC) is a good choice. The biggest advantage of MPC is the ability to explicitly account for constraints on water head, gate movements and so on, which is hard for other automatic control methods, such as PI controllers and linear quadratic regulators (LQR) [13-15]. MPC has a wide range of applications: in addition to conventional water dispatching [16-18], there are also unconventional operating conditions, such as risk mitigation [19], drought [20-22] and flood [23,24], or even emergency conditions. Vierstra [25] dealt with an unexpected failure of a pump station in the South-North Water Transfer Project, showing that MPC can anticipate the hydraulic interaction between all canal pools and supply as much water as possible under the structural failure. Other studies on emergency control by MPC are mostly in other fields, such as electrical control [26,27] and unmanned autonomous vehicles [28,29]. In general, MPC has been used for open canal systems rather than for the pipe-canal combination system. The main reason is that there are mature linear models for open canal flow to serve as the internal process model of MPC, such as the integrator-delay (ID) model, the integrator-delay-zero (IDZ) model and reduced Saint-Venant models [30], but no such model exists for pressurized flow. The authors [31] recently proposed a linear model that relates the pressure head variations at the downstream end of an inverted siphon to the flow rate variations at its two ends. Therefore, this research was carried out, on the one hand, to evaluate the predictive performance of the linear model proposed by Mao et al. [31] as the internal model of MPC and, on the other hand, to study the applicability of MPC in emergency control with safety constraints for preventing secondary accidents. The structure of this paper is as follows. First, Section 2 introduces the project and its particular features. In Section 3, the principles of simulation and model predictive control are presented.
Then, in Section 4, four test scenarios of different accident levels and their control groups are simulated and analyzed to evaluate the effect of MPC on emergency control of the NHWTP. Discussion and conclusions are drawn in the last two sections.

The Northern Hubei Water Transfer Project

The Northern Hubei Water Transfer Project draws water from Danjiangkou Reservoir, and its Menglou~Qifang inverted siphon (i.e., Pool 2 in Figure 1) is 72 km long, a length that is rare among water transfer projects worldwide. During the hydraulic transition, the unsteady flow in the open canals and in the pipelines will influence each other. In order to study the hydraulic response of the Menglou~Qifang inverted siphon under accident conditions, the modeling scope in this study covers the long inverted siphon and the open canals at both ends (i.e., Pool 1 and Pool 3 in Figure 1). The initial flow rate is the design flow rate, 38 m³/s, and there are no offtakes along the line. The layout is shown in Figure 1 and basic parameters are presented in Table 1. The initial openings of the gates on the open canals are set to 80% of the maximum opening according to experience. The opening and closing speed of the gates is limited to 0.5 m/min.

On Pool 1, a water release gate is arranged 100 m away from the inlet section of the inverted siphon, with a width of 3 m, a height of 4.8 m and a designed discharge of 38 m³/s. An overflow weir is set on Pool 3, 200 m away from the outlet section of the inverted siphon. The width of the overflow weir crest is 60 m, and overflow will begin when the water level is higher than 128.50 m. Pool 2 contains a long inverted siphon, which can be divided into three parts: inlet section, pipe body section and outlet section. Three DN3800 mm pre-stressed concrete cylinder pipes (PCCP pipes for short) are used in the pipe body section, and the maximum design pressure head of the outlet pipeline is 61 m. The inlet section is equipped with a two-stage free drop section, and connected with a regulating pond through the breast wall, with a top elevation of 144.41 m. Gate 3, the control gate of the outlet section, is located at the junction of the pressure flow in the pipeline and the open flow in Pool 3.
Model Formulation

This section describes the control purposes, the simulation and automatic emergency control methodologies, and the scenarios designed for simulation. MPC is applied to two objects, the exit gate of the long inverted siphon (Gate 3) and the water release gate on Pool 1, to prevent the pipes from bursting and the open canal from overtopping, respectively. Four test scenarios are given to test the emergency control performance of MPC under sudden downstream flow obstruction incidents of varying severity. The variables used throughout this paper are listed in Appendix A.

Control Purposes

Under sudden accidents, the following safety conditions must be met by a reasonable emergency control strategy while the gates close from their initial openings to their target openings.

1. The cross section in front of Gate 3 (Section A-A in Figure 1) is prone to bursting when the pressure exceeds its limit during fast closing, so the internal water pressure at this cross section must be kept below 61 m.
2. The water level in the regulating pond should be kept between 128 m and 144.41 m to prevent overflow or air intake at the inlet section.
3. No overtopping accident may occur in the open canals during the control process. When the water level in Pool 3 exceeds 128.50 m, it automatically overflows across the overflow weir, so the possibility of overtopping in Pool 3 is, by experience, small. The water level in Pool 1, however, must be controlled by the water release gate to prevent overtopping.

Preissmann Slot Method

The Preissmann slot method (PSM) has been widely used in modeling transitions between free-surface flow and pressurized flow [32]. Comparing the continuity and momentum equations of unsteady flow in open canals with those of pressurized flow in pipes, given as Equations (1) and (2) respectively, shows that if a narrow slot is assumed on top of the pressurized flow (as shown in Figure 2) and the width of the slot is set as in Equation (3), the governing equations of the two flow regimes become identical. Therefore, Equation (1) can be used as the basic set of equations to describe both the open flow and the pressurized flow in a unified way.
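Written out in a standard textbook form consistent with the variable definitions that follow (the exact notation of Equations (1)-(3) in the source may differ), the governing equations read:

\[ B\,\frac{\partial H}{\partial t} + \frac{\partial (Av)}{\partial x} = 0, \qquad \frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial x} + g\,\frac{\partial H}{\partial x} = g\,(S_b - S_f) \tag{1} \]

\[ \frac{gA}{a^{2}}\,\frac{\partial H}{\partial t} + \frac{\partial (Av)}{\partial x} = 0 \quad \text{(with the same momentum equation)} \tag{2} \]

\[ B_{sl} = \frac{gA}{a^{2}} \tag{3} \]

As a quick check of scale: for one DN3800 pipe, A = π × 1.9² ≈ 11.3 m², so with a = 1054 m/s Equation (3) gives B_sl ≈ 9.81 × 11.3/1054² ≈ 1.0 × 10⁻⁴ m, which is indeed far too narrow to affect the wetted area or hydraulic radius.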
Since the width of the narrow slot is very small, its influence on the wetted area and hydraulic radius can be neglected, while the transmission of pressure waves in the pipeline can still be reproduced with a suitable slot size. In Equations (1)-(3), H is the water depth in open flow, or the pressure head in pressurized flow (m); A is the wetted area (m²); B is the width of the water surface or slot (m); B_sl is the width of the narrow slot (m); g is the acceleration of gravity (m/s²); a is the speed of the acoustic wave, taken as 1054 m/s; v is the flow velocity (m/s); S_f = n²v²/R^(4/3) is the friction slope; n is the Manning coefficient; R is the hydraulic radius (m); and S_b is the bed slope.

Model Predictive Control

MPC is capable of foreseeing delivery problems and dealing with various constraints on the controlled variable (water level or pressure head) and the control variable (gate position), ensuring a safe hydraulic transition through advance prediction and objective function optimization. In this paper, the water depth in Pool 1 and the pressure head in front of Gate 3 are the key controlled variables; they determine directly whether secondary accidents such as overtopping or pipe burst occur during emergency control. Accordingly, MPC is implemented on Gate 3 and on the water release gate, and the two controllers are referred to as MPC-1 and MPC-2, respectively.

MPC-1 is configured in four aspects: internal model, receding horizon, constraints, and objective function [14], each explained below. It is activated at the moment of the accident. In every control cycle, ten candidate control actions are predicted with Mao's linear model [31] and compared against the constraints and the objective function; at the end of the cycle, only the optimal gate opening increment of Gate 3 is chosen as the output. MPC-2 uses the same internal model and receding horizon as MPC-1, but it controls the water release gate based only on the predicted results of each control cycle, without constraints or optimization. More details of the control strategies are given in Table 2, and a schematic overview of the control system is shown in Figure 3. The parameter C is set to prevent air intake at the inverted siphon inlet, which is explained in Section 3.4. The actual water system, meanwhile, is simulated by solving the Saint-Venant equations.

The authors recently proposed a linear model that relates the pressure head variations at the downstream end of an inverted siphon (denoted h₂(k)) to the flow rate variations at its two ends (denoted q₁(k) and q₂(k)) [31].
It divides h₂(k) into a low-frequency part and a high-frequency part, caused by the deformation of the siphon wall and by the reflection of the acoustic wave, respectively. In Mao's study [31], the linear model was used to model two cases, a virtual large-scale inverted siphon and a PVC pipe; its accuracy was verified in the frequency domain using Bode plots, and the pressure head computed with the linear model was compared with simulations using the finite volume method (FVM). The discrete time-invariant linear model for the long inverted siphon applied in this study is given in Equation (4), where k is the time step; h₁(k) and h₂(k) are the pressure head deviations at the upstream and downstream ends at time step k (m); h_f is the frictional head loss computed from the flow rate at the upstream end of the inverted siphon under a quasi-steady flow assumption (m); a is the speed of the acoustic wave, taken as 1054 m/s; A is the wetted area (m²); g is the acceleration of gravity (m/s²); and k_d is the number of delay steps for the acoustic wave to travel from the upstream end to the downstream end, computed approximately as k_d ≈ L/(a·Δt), where L is the length of the long inverted siphon (m) and Δt is the computation time step, taken as 1 min. (For the 72 km siphon and a = 1054 m/s, L/a ≈ 68 s, so k_d ≈ 1.)

To further verify the linear model, it is applied to the project described in Section 2. The test scenario is given in Table 3 and the result is shown in Figure 4. In addition, the Preissmann four-point implicit scheme (i.e., the finite difference method, FDM) is used to solve the Saint-Venant equations. Figure 4 shows that the results of the two methods differ considerably during the gate operations, but on the whole the linear model reasonably reflects the unsteady flow characteristics at the downstream end of the inverted siphon (MAPE is the mean absolute percentage error; the smaller its value, the better the model performs). The differences arise from the many assumptions and simplifications made in deriving the linear model, but they are generally acceptable.

The Linear Model for the Open Canal

For the pipe-canal combination system shown in Figure 1, the above linear model can serve as the internal prediction model for the long inverted siphon, while the integrator-delay (ID) model is commonly used for open canals [30]. It assumes that an open canal reach is separated into a uniform flow section and a backwater section, as presented in Figure 5. The delay time (τ, in s) and the average storage area (A_s, in m²) are the two main properties of each open canal reach in the ID model; their values are given in Table 4.
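The discrete time-invariant ID model applied in this study follows the standard integrator-delay recursion; the sketch below is consistent with the variable definitions that follow, though the exact discretization in the source may differ:

\[ H_d(k+1) = H_d(k) + \frac{\Delta t}{A_s}\Big(Q_{in}(k - k_d) - Q_{out}(k) - Q_{offtake}(k)\Big) \]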
where H_d(k) is the water depth at the downstream end of the pool at time step k (m); Q_out(k) is the control flow to the downstream reach at time step k (m³/s); Q_in(k) is the inflow (m³/s), with Q_in(k − k_d) the inflow to the backwater section and k_d the delay (in time steps) between a control action and the change in the average downstream water level; and Q_offtake(k) is the off-take outflow originating from the control of the water release gate (m³/s).

Receding Horizon

The receding horizon used in this paper differs from that of conventional MPC [14]. A control cycle is completed every ten time steps, i.e., the gate actions are evaluated by MPC-1 and MPC-2 only once every ten minutes, and the prediction horizon is likewise set to ten time steps. This prevents frequent opening and closing of the gates and reduces the simulation time.

Constraints

MPC has the ability to handle constraints. A controlled variable such as the internal water pressure at the downstream end of the long inverted siphon is not allowed to violate its safety limits at any point within the prediction horizon, which can be written as

3.8 m ≤ H_d(k + i) ≤ 61 m, i = 1, ..., 10,

where i is the prediction step number, from 1 to 10; k is the time step at which MPC-1 is invoked in each control cycle; and H_d(k + i) is the pressure head at the downstream end of the long inverted siphon predicted at step k + i (m).

Objective Function

In every control cycle, the sequence of control actions over the prediction horizon is optimized by minimizing an objective function, Equation (9), with penalties on the pressure head deviations from the setpoint, taken as H_d(k), and on the control effort for operating Gate 3. In Equation (9), J is the objective function to be minimized; Δe_j is the gate opening increment of option j; NISE is the non-dimensional integrated square of error, which measures the stability of water level control [33]; n is the number of steps in the prediction horizon; H_d(k) is the pressure head at the downstream end of the long inverted siphon at time step k (m); G is the penalty on NISE, taken as 10; and W is the penalty on the gate opening increment, taken as Equation (10) so as to reduce the number of gate movements. The values of G and W are determined through trial and error.
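To make one MPC-1 control cycle concrete, the sketch below enumerates the candidate gate increments, predicts the horizon with a stand-in internal model, rejects options that violate the pressure constraint, and keeps the option with the lowest penalized cost. It is illustrative only: predict_heads stands in for the linear model of Mao et al. [31], and the candidate set, NISE normalization, and quadratic movement penalty are assumptions rather than the source's exact Equations (9) and (10).

import numpy as np

H_MIN, H_MAX = 3.8, 61.0   # safety limits on the pressure head before Gate 3 (m)
N = 10                     # prediction horizon = one control cycle (time steps)
G = 10.0                   # penalty weight on NISE (value stated in the paper)

def nise(heads, setpoint):
    # Non-dimensional integrated square of error over the horizon
    # (assumed normalization; the source follows [33]).
    return float(np.mean(((heads - setpoint) / setpoint) ** 2))

def mpc1_cycle(h_now, predict_heads, candidates):
    # One MPC-1 cycle: try each candidate increment of Gate 3, predict the
    # next N pressure heads, reject unsafe options, keep the cheapest one.
    best_de, best_cost = 0.0, float("inf")    # default: no movement
    for de in candidates:
        heads = predict_heads(de)
        if heads.min() < H_MIN or heads.max() > H_MAX:
            continue                          # hard constraint violated
        movement_penalty = 100.0 * de ** 2    # assumed form; Equation (10) differs
        cost = G * nise(heads, h_now) + movement_penalty
        if cost < best_cost:
            best_de, best_cost = de, cost
    return best_de

# Toy internal model: each metre of closure raises the head about 8 m per step
# (illustrative numbers only; the real internal model is the linear model of [31]).
toy_model = lambda de: 55.0 + np.cumsum(np.full(N, -8.0 * de))
print(mpc1_cycle(55.0, toy_model, np.linspace(-0.5, 0.0, N)))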
The MPC-2 controller on the water release gate does not include an optimization step. Instead, the target opening of the water release gate is determined directly from the prediction at the last step of the prediction horizon in each control cycle, as shown in Equation (11), where e_target_release is the target opening of the water release gate (m) and ΔH_gate2(k + n) is the predicted deviation of the water level in front of Gate 2 from its initial value at the last step of the prediction horizon (m).

If MPC-1 were invoked repeatedly throughout the simulation, it could cause frequent vibration of Gate 3. Therefore, MPC-1 is shut down permanently once the pressure head has remained within the safe range for one hour after the accident and Gate 3 has reached its target opening. Because of the large difference between the wave speeds of open flow and pressurized flow, the open canals respond with a large lag; for this reason MPC-2 must keep running through the whole simulation to prevent Pool 1 from overtopping.

Boundary Conditions and Other Control Strategies

The choice of boundary conditions has a great influence on the simulation results. The three pools together are treated as one independent canal in this study. The upstream boundary is the flow through Gate 1, which is assumed to vary linearly with the gate opening. The downstream boundary is similarly the flow rate through Gate 4, but computed by the free discharge formula for a sluice opening [34], as shown in Equation (12):

Q = μ₀ b e √(2gH₀), (12)

where e is the opening of the gate (m); b is the width of the gate (m); H₀ is the water depth in front of the gate (m); and μ₀ is the discharge coefficient. Setting the boundary conditions in this way means that the time course of the flow through Gate 1 is prescribed, which may result in fewer operations of the water release gate because less water is delivered from upstream; however, this has little impact on the control and safety of the long inverted siphon.

In the model, the flow through Gate 2 and through the water release gate is computed analogously with a submerged sluice-gate discharge relation, in which C_d is the sluice-gate discharge coefficient and H₂ is the water depth behind the gate (m). For the overflow weir, the flow is computed by the free-overflow formula for a broad-crested weir [34],

Q = δ_s m b_weir √(2g) H₀^(3/2),

where δ_s is the side-contraction coefficient, taken as 0.9; m is the discharge coefficient, taken as 0.32; b_weir is the width of the weir crest (m); and H₀ is the depth in front of the overflow weir (m).

For quick adjustment, the opening of Gate 2 is determined every 15 min by an inverse calculation module that computes the gate opening for a given target flow Q₂₂, which is derived from Q₂, the target steady-state flow rate for emergency dispatch (m³/s). When a sudden accident occurs, the inflow of Pool 2 can be adjusted quickly to the target flow, but the outflow of Pool 2 is governed by MPC-1, which suppresses rapid changes of flow to prevent pressure surges. Inevitably, the storage in Pool 2 then tends to decrease, and the incoming water must be replenished to prevent air intake at the inverted siphon inlet; this is why the parameter C is introduced.
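The inverse calculation module can be illustrated by inverting the free-discharge relation of Equation (12) for the opening. The sketch below is a minimal version assuming free (unsubmerged) outflow and a constant discharge coefficient; the module in the study also has to handle submerged conditions.

import math

def gate_opening_for_flow(q_target, b, h0, mu0=0.6):
    # Invert Q = mu0 * b * e * sqrt(2 g H0) for the opening e (free outflow).
    # mu0 = 0.6 is a typical assumed discharge coefficient, not the source's value.
    g = 9.81
    return q_target / (mu0 * b * math.sqrt(2.0 * g * h0))

# Example: the 3 m wide water release gate passing its design 38 m^3/s with,
# say, 6 m of water in front of it (a hypothetical depth) needs roughly:
print(round(gate_opening_for_flow(38.0, 3.0, 6.0), 2))   # ~1.95 m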
As for Gate 3, the exit gate of the long inverted siphon, in order to adjust its flow to the target flow as soon as possible while preventing frequent gate action, it is likewise controlled every 15 min by the inverse calculation module once MPC-1 has been shut down. In addition, Gate 1 and Gate 4 are operated according to fixed gate schedules for simplicity.

Test Scenarios

The main goal here is to verify the control performance of MPC when a sudden flow obstruction incident occurs downstream of the Menglou~Qifang inverted siphon. To this end, four test scenarios are chosen according to the risk level of the accident, as shown in Table 5. For the target flow of each scenario, a steady flow calculation is carried out and the target opening of Gate 3 (the setting parameter of MPC-1) is obtained preliminarily. The target openings of Gate 1 and Gate 4 are simply determined by Equation (17),

e₂ⱼ = e₁ⱼ Q₂ⱼ / Q₁ⱼ, (17)

where j is the gate number; e₁ⱼ is the initial opening of gate j (m); e₂ⱼ is the target opening of gate j (m); Q₁ⱼ is the initial flow of gate j (m³/s); and Q₂ⱼ is the target flow of gate j (m³/s). It is assumed that the accident occurs at T = 1 h, at which moment emergency control is activated. The upstream gate (Gate 1) and downstream gate (Gate 4) operate according to the schedules in Table 5, and the middle gates (the water release gate, Gate 2, and Gate 3) are controlled automatically by the algorithms and strategies presented in Sections 3.3 and 3.4.

The Simulation Results

The emergency control study in this paper focuses on the safety of the project, which is guaranteed by MPC; since there is no offtake along the line, there is no need to maintain a constant water level at any particular point, unlike other research on canal automation [37].
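Before turning to the results, the timing of the whole simulated control system described in Sections 3.3 and 3.4 can be summarized in the following skeleton. Every function is a placeholder stub rather than the source's implementation, and the phase alignment of the cycles is an assumption.

# Every function below is a stub for a component described in Sections 3.3-3.4;
# only the timing logic is the point of this sketch.
def saint_venant_step():  pass          # one 1-min FDM step of the whole system
def mpc1_is_shut_down():  return False  # permanent shutdown test for MPC-1
def mpc1_cycle():         return 0.0    # optimal Gate 3 opening increment (m)
def mpc2_cycle():         return 0.0    # target opening of the water release gate (m)
def inverse_opening(q):   return 0.0    # gate opening giving the target flow q (m)
def apply_schedules(t):   pass          # Gate 1 and Gate 4 follow the Table 5 schedules

T_END, T_ACCIDENT = 24 * 60, 60         # simulated minutes; accident at T = 1 h
Q_TARGET = 0.0                          # scenario-dependent target flow (m^3/s)
gate3 = release = gate2 = 0.0           # gate openings (m)

for t in range(T_END):
    saint_venant_step()
    apply_schedules(t)
    if t < T_ACCIDENT:
        continue                        # normal delivery before the accident
    if (t - T_ACCIDENT) % 10 == 0:      # MPC control cycle: every 10 time steps
        if not mpc1_is_shut_down():
            gate3 += mpc1_cycle()
        release = mpc2_cycle()
    if (t - T_ACCIDENT) % 15 == 0:      # inverse calculation: every 15 min
        gate2 = inverse_opening(Q_TARGET)
        if mpc1_is_shut_down():
            gate3 = inverse_opening(Q_TARGET)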
The results of the simulation are shown in Figures 6-9, which show that the proposed control algorithm and strategies can fulfill the emergency control tasks automatically under downstream flow obstruction incidents of different risk levels. The detailed simulation results of the four test scenarios are provided in Table 6, from which some trends can be seen:

1. NIAW is the non-dimensional integrated absolute gate movement, which measures the amplitude and frequency of gate opening and closing [33] and can be calculated by Equation (18). The NIAW of Gate 3 and the maximum pressure head in front of Gate 3 rise as the accident risk increases, except for Scenario B, which will be analyzed later.
This indicates that the conditions become progressively harder to control from Scenario A to Scenario D: for the safety of the long inverted siphon, Gate 3 has to go through an increasingly complex sequence of opening and closing determined by MPC-1. In addition, the NIAW of Gate 2 is zero in all scenarios, indicating that Gate 2 is closed gradually during automatic control without frequent opening and closing, as can be seen in Figures 6b, 7b, 8b and 9b. Equation (18) sums the absolute changes in gate opening, |e_t − e_{t−Δt}|, over the adjustment period and normalizes them by the maximum opening, where t is time (min); t₁ and t₂ are the moments when the flow begins to change and when it stabilizes (min); Δt is the discrete time step of the control system, taken as 1 min; T is the simulation time (min); e_t is the gate opening at time t (m); and e_max is the maximum gate opening (m).

2. There is an obvious increase in the NIAW of the water release gate and in the volume of abandoned water from Scenario A to Scenario D. When Gate 2 is closed quickly by the inverse calculation module, the water level in front of Gate 2 rises rapidly. MPC-2 performs well at predicting this change and opening the water release gate ahead of time to prevent overtopping. Consequently, the more severe the backwater caused by Gate 2, the more frequently the water release gate acts. Scenario D is the most urgent condition, and Figure 9c shows that the water release gate is opened and closed three times in total to deal with the sharp fluctuation of the water level before Gate 2. The overflow weir also comes into play in Scenario D, but not in Scenarios A to C: according to its schedule, Gate 4 is shut completely to prevent water from flowing downstream and causing a secondary accident, yet Gate 3, still under the control of MPC-1 for the safety of the long inverted siphon, continues to discharge into Pool 3 for a period of time, so the water level in Pool 3 rises rapidly and overflows. As Figure 10 shows, the overflow lasts about 1.5 h, and the water level in Pool 3 finally stabilizes at the elevation of the weir crest, 128.5 m.

3. The water level in the regulating pond is determined by the difference between the flow through Gate 2 and the flow into the long inverted siphon. Because of the high acoustic wave speed, the latter is strongly affected by Gate 3. The simulation results show that the stabilized water level in the regulating pond drops from Scenario A to Scenario D. When an accident occurs, the flow of Gate 2 can be adjusted quickly to the target flow through the inverse calculation module, but the downstream outflow of the long inverted siphon still needs time to adjust (as judged by MPC-1). The sustained flow difference between upstream and downstream lowers the water level in the regulating pond, and the greater that difference becomes with rising accident risk, the lower the stabilized level. Furthermore, a sharp rise of the water level in the regulating pond can be seen when Gate 2 and Gate 3 begin to move. Several explanations are possible, but the main reason is likely the influence of water hammer waves: Gate 3 is located at the junction of the pressurized flow and the open flow, and closing it at maximum speed when the accident occurs produces a severe water hammer.
The rapid propagation of the water hammer wave upstream causes a sharp decrease in the flow into the long inverted siphon, larger than the decrease in the flow through Gate 2 owing to the hysteresis of the open flow. Consequently, the water level in the regulating pond rises rapidly at the onset of the accident, but it soon falls again with the closing of Gate 2 and the reflection of the water hammer wave by the regulating pond.

The simulation result of Scenario B does not conform to the overall trend, which is very likely related to the MPC parameter settings and to the low prediction accuracy of the linear model during gate operation; the latter is discussed in Section 5. The four scenarios in Table 5 are all simulated by the same program without modifying any setting parameters, such as the prediction horizon, the control interval, the cost-weighting matrix, or the candidate gate opening increments. Because of this, the control parameters may not be optimal for each individual scenario, which shows most clearly in Scenario B, but this is acceptable as long as the control purposes are achieved. Moreover, handling different levels of flow obstruction accidents with a single set of parameters may in fact be more suitable for engineering practice.

To further study the effect of MPC-1 and MPC-2, two control groups are set up based on Scenario D. One (Scenario D-1) is simulated without MPC-1, the other (Scenario D-2) without both MPC-1 and MPC-2; the results are shown in Figures 11 and 12. In Figure 11 it is apparent that the pressure in the inverted siphon sits at the critical point of exceeding its limit, which is not acceptable, and an overtopping accident is also liable to occur in front of Gate 2.
Without the control of MPC-1, Gate 3 is shut at the maximum speed (0.5 m/min), accompanied by a severe water hammer that could burst the inverted siphon. Comparing Figures 9a and 11a shows that MPC-1, based on the linear model for the long inverted siphon [31], successfully predicts the peak pressure and adjusts the action of Gate 3 ahead of time to avoid danger. When neither MPC-1 nor MPC-2 is enabled, not only may the inverted siphon burst, but a serious overtopping accident also occurs in Pool 1, as shown in Figure 12. As for the regulating pond, its water level does not drop to the relatively low level seen in Figure 9b, mainly because MPC-1 is disabled and the gates at both ends are closed quickly, so the water is stored in Pool 2 rather than flowing downstream. Moreover, the water level in the regulating pond finally settles slightly above its initial value, which is mainly related to the non-synchronous change of flow during the closing of Gate 2 and Gate 3.

Scenario D-2 is in fact a typical, conventional emergency dispatching plan, in which Gates 1-4 are closed as quickly as possible to prevent water from flowing downstream. According to similar engineering reports, operators are likely to control the gates in this way when a major downstream accident occurs and no comparable engineering experience is available for reference. Figure 12 shows that the effect of this emergency control is extremely poor. It is of course possible to obtain a safe result by repeatedly adjusting the planned gate trajectories, but when an accident of another level occurs, the same work has to be repeated. Compared with model predictive control, the traditional method is time-consuming, laborious, and inefficient.

Discussion

This paper takes the Northern Hubei Water Transfer Project as the research object and MPC as the control method for studying emergency dispatching. Although the project is unusual because of its 72 km long inverted siphon, it fully illustrates the advantages and applicability of MPC in emergency control. The results in Section 4 make clear that MPC performs well at predicting the fluctuations of water level and pressure head and thereby preventing secondary accidents. Nevertheless, several aspects deserve attention and discussion.

First of all, the selected acoustic wave speed is an important factor affecting the simulation results. Malekpour and Karney [38] investigated the source of the spurious numerical oscillations often observed in simulations using the well-known Preissmann slot method. They pointed out that PSM cannot sustain the negative pressures that frequently occur in simulations, and that spurious numerical oscillations are often induced when the flow switches from open-canal to pressurized flow if higher acoustic wave velocities are introduced. In other words, when the real acoustic wave velocity is used (it is likely higher than 1054 m/s and usually needs to be measured experimentally), the pressure oscillation in front of Gate 3 may produce temporary negative pressures that frequently terminate the simulation prematurely. Three-dimensional simulation or the finite volume method can handle this better, but for the long inverted siphon considered here they would consume enormous computing resources, making tuning and application extremely difficult.
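The sensitivity to the chosen wave speed can be gauged with the classical Joukowsky relation. As a back-of-the-envelope sketch, assume one DN3800 pipe carries a third of the design flow, v = 38/(3 × π × 1.9²) ≈ 1.1 m/s, and that this flow were arrested instantaneously:

\[ \Delta H = \frac{a\,\Delta v}{g} \approx \frac{1054 \times 1.1}{9.81} \approx 118\ \mathrm{m}, \]

roughly twice the 61 m design pressure head of the outlet pipeline; a higher real wave speed would raise the surge further. This is precisely why the gradual, predictive closure of Gate 3 matters.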
Secondly, the choice of boundary conditions is also influential, especially for the exit gate (Gate 3) of the long inverted siphon, which strongly shapes the hydraulic response in the pipes. Gate 3 is located at the junction of the pressurized section and the open-canal section and operates as an orifice with submerged outflow at the end of a long pressurized pipeline, a configuration that is highly coupled and nonlinear; little research exists on this kind of complex boundary. The sluice-gate discharge equation proposed by Henry [35], selected here as the boundary condition for Gate 3, has been verified by many scholars over the years as a classical empirical formula for gate discharge, but it is better suited to gates at reservoirs or dams [39]. Whether it is suitable for the exit gate of a long inverted siphon needs to be verified further by prototype observation or three-dimensional simulation. In future research, both of the above problems might be addressed by coupling a one-dimensional model with a local three-dimensional simulation.
More specifically, a small region surrounding the moving exit gate of the long inverted siphon would be resolved by computational fluid dynamics (CFD) with a dynamic mesh library, while the rest of the system is modeled by the FDM. The volume-of-fluid (VOF) algorithm would be needed in the CFD part to simulate the two-phase flow patterns, owing to its ability to track the gas-liquid interface. Applications of similar collaborative simulation techniques have shown their feasibility [40,41], but they are rarely used in large-scale pipe-canal combination systems and can be attempted in further studies.

Along the same line, another factor that affects the accuracy of the simulation results is the computation time step Δt. The smaller Δt is, the better the peak pressures in the pipes are captured during the simulation, but the longer the simulation takes. Weighing these considerations, Δt is taken as 1 min in this paper for the Northern Hubei Water Transfer Project. Using different computation time steps in the open-flow section and the pressurized section could be a good way to obtain more accurate results.

Last but not least, the linear model of water movement for a large-scale inverted siphon [31] is not perfect. Although the linear model performs well within MPC-1, some of the resulting gate operations may be unnecessary for guaranteeing the safety of the long inverted siphon, such as the operations between 2 h and 5 h shown in Figure 7a. One reason may be that during gate operation the prediction of the linear model fluctuates more violently than the actual pressure (as shown in Figure 4): although it successfully predicts when the pressure peak occurs and lets MPC-1 attenuate the peak ahead of time, it also triggers frequent movements caused by misjudgment. Refining this linear model is therefore a path to more precise control.

Conclusions

Based on the linear model for a large-scale inverted siphon proposed by Mao et al. [31], the present study evaluates the applicability of model predictive control in emergencies. From the foregoing results and analysis, the following conclusions can be drawn:

1. When no comparable engineering experience is available for reference, the traditional method of preparing emergency dispatching plans is inefficient. If an accident occurs that is not covered by the plans, the performance of emergency control may degrade greatly, and a secondary accident may even occur because of the fast hydraulic transients of pressurized flow in the long inverted siphon. In such cases, automatic control is a good choice.

2. The MPC algorithm proposed in this paper effectively prevents the long inverted siphon from bursting at the outlet section and from overtopping at the inlet section under sudden downstream flow obstruction incidents of varying severity. As the accident risk rises, the control difficulty for MPC also increases, reflected in more complicated gate operations (e.g., the NIAW of Gate 3 in the huge-risk scenario is more than twice that in the low-risk scenario) and more abandoned water (up to 9.75 × 10⁴ m³).

3.
The predictions of the linear model allow MPC to reduce the peak pressure by acting ahead of time and so ensure the safety of the project; for example, the peak pressure in Scenario D is 17.2 m lower than in Scenario D-1. However, because of the poor accuracy of the predictions during gate operation, MPC may order some unnecessary gate movements.

The Northern Hubei Water Transfer Project, with its 72 km long inverted siphon, is a typical yet rare pipe-canal combination system. The great difference between the wave speeds of open flow and pressurized flow makes the control of such a project, especially under accident conditions, extremely difficult: the safety of the hydraulic structures must be ensured on the one hand, and excessive gate movements avoided on the other. That is why considerable time was spent tuning the objective function and control parameters to cope with downstream flow obstruction incidents of different severity. This work can serve as a reference for the emergency dispatching of similar projects. Further studies are needed to improve the accuracy of the linear model and to evaluate the feasibility of coupling in local 3D simulation.

Funding: The authors acknowledge the support of NSFC grant 51979202 and NSFC grant 51009108.

Conflicts of Interest: The authors declare no conflict of interest.
2020-10-19T18:08:51.113Z
2020-09-30T00:00:00.000
{ "year": 2020, "sha1": "5862175094390f410ecf3c54a2cb89d87f88682f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4441/12/10/2733/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "73fcd4aebaf77375611d0a2d816b93aa4392fb1c", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
256457895
pes2o/s2orc
v3-fos-license
Glucose assimilation rate determines the partition of flux at pyruvate between lactic acid and ethanol in Saccharomyces cerevisiae

Engineered Saccharomyces cerevisiae expressing a lactic acid dehydrogenase can metabolize pyruvate into lactic acid. However, three pyruvate decarboxylase (PDC) isozymes drive most carbon flux toward ethanol rather than lactic acid. Deletion of the endogenous PDCs eliminates ethanol production, but the resulting strain suffers from C₂ auxotrophy and struggles to complete a fermentation. Engineered yeast assimilating xylose or cellobiose produce lactic acid rather than ethanol as a major product without the deletion of any PDC genes. We report here that sugar flux, but not sensing, contributes to the partition of flux at the pyruvate branch point in S. cerevisiae expressing the Rhizopus oryzae lactic acid dehydrogenase (LdhA). While the membrane glucose sensors Snf3 and Rgt2 did not play any direct role in determining the predominant product, the sugar assimilation rate was strongly correlated with the partition of flux at pyruvate: fast sugar assimilation favors ethanol production while slow sugar assimilation favors lactic acid. Applying this knowledge, we created an engineered yeast capable of simultaneously converting glucose and xylose into lactic acid, increasing lactic acid production to approximately 17 g L⁻¹ from the 12 g L⁻¹ observed during sequential consumption of the sugars. This work elucidates the carbon source-dependent effects on product selection in engineered yeast.

S. cerevisiae exhibits the Crabtree effect: respiration is shut down in favor of fermentative metabolism when glucose is present at high concentrations, even under aerobic conditions. [11] Because of this, production of large amounts of lactic acid from glucose has generally required deletion of the pyruvate decarboxylase (PDC) enzymes encoded by the PDC1, PDC5, and PDC6 genes. [12][13][14] However, deletion of the PDCs comes with critical drawbacks, such as a C₂ auxotrophy that requires supplementation with either acetate or ethanol, and reduced growth rate and fermentation speed. As such, it is desirable to engineer a yeast strain capable of efficient production of non-ethanol compounds derived from pyruvate, such as lactic acid, without deleting PDC enzymes. Prior reports have established methods to completely rewire yeast metabolism, bypass the fermentative metabolic structure regardless of aeration, and engineer Crabtree-negative yeast. [15] However, the prior reports showing superb yields of lactic acid from xylose without deletion of any PDC genes suggest that alternative approaches are possible.

As the preferred production of ethanol over lactic acid is closely tied to glucose uptake, glucose signaling and metabolism may play a role in the partition of flux at the pyruvate branch point. Perception of glucose in yeast is performed through the sensors Snf3 and Rgt2, which respond to low and high concentrations of glucose, respectively. [16] The glucose signal leads to degradation of Mth1 and Std1 [17] and phosphorylation of the repressor Rgt1, [18] releasing it from promoters and derepressing its target genes. There is some evidence that high concentrations of xylose can be sensed by Snf3, and simulating a glucose signal by deletion of RGT1 has been shown to improve xylose fermentation.
[19] Although the Snf3/Rgt2 signaling pathway has been extensively mapped with respect to its role in regulating the expression of hexose transporters, the perception of glucose is also known to affect the growth rate, and thus the metabolism, of yeast. [20] However, the exact effects of the Snf3/Rgt2 signaling pathway on yeast metabolism remain largely unexplored. S. cerevisiae also exhibits accelerated glycolysis in the presence of high levels of glucose. [21] While accelerated glycolysis leads to rapid production of ethanol in wild-type strains, controlling central carbon metabolism by modulating expression of HXK1 can paradoxically increase yields of a pathway competing for pyruvate flux without deletion of endogenous PDC genes. [22]

By exploring the properties of glucose perception and glycolytic flux, we report here a deeper understanding of the mechanisms underlying enhanced yields of lactic acid from cellobiose and xylose fermentations. Deletion of the glucose sensors Snf3 and Rgt2 enhances yields of lactic acid from glucose, and this improvement is replicated in a strain possessing all native sensors by controlling glycolytic flux. We also fully replicate the high yields of lactic acid observed during cellobiose fermentations solely by manipulating glycolytic flux. However, this methodology remains unable to match the high yields of lactic acid observed during xylose fermentations, suggesting still-unknown physiological or regulatory mechanisms present during xylose fermentation which promote the production of lactic acid over ethanol. Ultimately, we apply the knowledge gained here to create a strain capable of producing high yields of lactic acid during glucose/xylose co-consumption.

Strains, media recipes, and culture conditions

All strains and plasmids used in this study are listed in Tables 1 and 2, respectively. Preculture was performed in 5 mL of YP medium (10 g L⁻¹ yeast extract, 20 g L⁻¹ Bacto peptone) aerobically at 30 °C for 36 h with 40 g L⁻¹ of glucose as a carbon source. Fermentation experiments were performed in 125 mL flasks containing 25 mL YP medium at 30 °C and 100 rpm, with the initial carbon sources indicated in the fermentation profiles. All fermentations were performed using biologically independent duplicate cultures. All fermentations with lactic acid as a product contained 20 g L⁻¹ CaCO₃ to maintain the pH near neutral levels. Doxycycline-controlled expression of hexokinases was performed as described previously. [23] Briefly, when using the D452-iH1L and D452-iH2L strains, preculture was performed in 5 mL of YP medium aerobically at 30 °C for 36 h with 40 g L⁻¹ of galactose along with 1, 3, 6, or 12 µg mL⁻¹ of doxycycline. Doxycycline was added to fermentations with the D452-iH1L and D452-iH2L strains to final concentrations of 1, 3, 6, and 12 µg mL⁻¹ of fermentation medium to control the expression of hexokinase and the overall consumption rate of glucose.

Genetic techniques

Standard restriction digestion and molecular cloning techniques were employed for plasmid creation. [24,25] The structure and target sequences of all guide RNAs used in this study are listed in Table S1, and all primers used in this study are listed in Table S2. For doxycycline induction experiments, the rtTA(S2) variant [26] was synthesized by IDT as a gBlock. Two integrative plasmids were created to house the synthetic transactivator rtTA(S2) [26] and the TetO7-driven target gene as described previously. [23]
The pRS405-LdhA plasmid was created by PCR-amplifying the pPGK1-LdhA-tPGK1 expression cassette from pITY-LdhA [4] using primers SL13 and SL14. To create strains D452-2L, D452-2AL, D452 iH1L, and D452 iH2L, the pRS405-LdhA plasmid was linearized with AgeI, integrated into the yeast genome, and selected using the leucine auxotrophic marker. To create the EJ4L and SR8L strains, a single copy of an LdhA expression cassette was inserted into the PBN1-SBP1 intergenic region using CRISPR/Cas9 genome editing.

Expression analysis

RNA was extracted from three biologically independent replicates of strains SR8 and SR8#22 grown to mid-exponential phase with glucose as a carbon source. RNA-seq experiments and analysis were performed as previously described. [23,27]

RESULTS

To examine the effects of a carbon source on lactic acid and ethanol production, we selected three yeast strains within the same lineage with different sugar consumption abilities: D452-2, the parental yeast strain; SR8, an engineered xylose-consuming strain derived from D452-2; and EJ4, a cellobiose- and xylose-consuming strain derived from SR8. [28][29][30] The R. oryzae lactic acid dehydrogenase encoded by LdhA was then introduced into each strain to enable production of lactic acid, yielding D452-2L, SR8L, and EJ4L. [4] The three resulting strains were then cultured: D452-2L on glucose, SR8L on xylose, and EJ4L on cellobiose.

Figure 1. Carbon source affects lactic acid yield. Fermentation of the D452-2L strain on glucose (top left), the SR8L strain on xylose (top right), and the EJ4L strain on cellobiose (bottom left), along with yields from each fermentation (bottom right). Fermentations were inoculated to an initial cell density equal to an optical density (OD) of 1 and performed in 25 mL YP media in 125 mL flasks with 20 g L⁻¹ CaCO₃ at 100 rpm and 30 °C. Data points indicate the optical density at 600 nm (yellow circles) or the concentration of glucose (blue squares), xylose (purple squares), cellobiose (green squares), lactic acid (red upward triangles), or ethanol (black downward triangles). Data points are the average of biologically independent duplicate cultures with standard deviations indicated by error bars, which are not visible when the standard deviation is smaller than the size of the data point.

Carbon sources strongly influenced lactic acid and ethanol production. D452-2L cultured on glucose produced almost entirely ethanol, EJ4L cultured on cellobiose produced slightly more lactic acid than ethanol, and SR8L cultured on xylose produced almost entirely lactic acid (Figure 1). Xylose cultures led to the highest production of lactic acid (10.6 g L⁻¹), followed by cellobiose (7.3 g L⁻¹) and lastly glucose with the lowest production (1.1 g L⁻¹).

The transmembrane sensors Snf3 and Rgt2 alter the growth rate and metabolism of yeast in response to extracellular glucose. [20] While there is some evidence that an active Snf3/Rgt2 sensing pathway enhances xylose consumption, [19] the role these two sensors play in cellobiose and xylose fermentations remains unclear. [31] We hypothesized that the Snf3/Rgt2 pathway might be inactive during consumption of the non-native sugars xylose and cellobiose. Therefore, inactivating this pathway during glucose fermentation through deletion of SNF3 and RGT2 might enhance lactic acid production from glucose. We therefore derived a series of strains carrying the Δsnf3Δrgt2 deletions. To gain direct control over glycolytic flux, we also deleted the three hexokinase genes (HXK1, HXK2, and GLK1) in D452-2, leading to the strain D452Δhxk0, which is unable to consume glucose.
Then, the synthetic transactivator rtTA(S2) [26] was introduced, followed by reintroduction of hexokinase expression under control of the rtTA(S2)-controlled TetO7 promoter. As a result, expression of hexokinase was controlled by the addition of doxycycline in a titratable manner. This technique was previously found to enable complete control over the glucose consumption rate. [23] We created two sets of strains from this, one with controllable expression of HXK1 (D452 iH1) and the other with controllable expression of HXK2 (D452 iH2). We next introduced R. oryzae LdhA and named the resulting strains D452 iH1L and D452 iH2L. These strains were then cultured in flasks with various amounts of doxycycline to observe lactic acid production from strains exhibiting a wide range of glucose uptake rates (Figure S5). We found that the yields of lactic acid and ethanol were clearly linked to the sugar uptake rates (Figure 3): slower glucose-consuming strains tended to produce more lactic acid, while faster glucose-consuming strains produced more ethanol.

Consumption of glucose/xylose mixtures by engineered yeast usually proceeds in two stages: rapid consumption of glucose followed by slow xylose assimilation. However, reduced glycolytic flux allows simultaneous uptake of both glucose and xylose. [23] This, coupled with the information shown here, namely that xylose is a superior carbon source for lactic acid production and that slow glucose consumption yields more lactic acid than fast glucose consumption, led us to hypothesize that simultaneous co-fermentation of glucose and xylose would favor lactic acid production. Therefore, the R. oryzae LdhA gene was introduced into the glucose/xylose co-fermenting engineered yeast strain SR8#22. [23] Then, the SR8L strain and the SR8#22L strain were compared in fermentations of glucose/xylose mixtures.

DISCUSSION

The Snf3/Rgt2 sensors control glucose perception in yeast [20] and initiate a signaling cascade which enables precise expression of hexose transporters in response to extracellular glucose concentrations. [32] As expected, deletion of these sensors negatively impacts the fermentation of glucose (Figure 2, Figure S1). Yeast cells without functional glucose sensors tend to produce less ethanol, accumulate more biomass, and require longer periods of time to complete a fermentation. Although we found that deletion of the glucose sensors benefits the production of lactic acid in both galactose and glucose cultures, we were able to match the increased yields in strains with active sensors by reducing the glucose assimilation rate. Nonetheless, the inherently increased biomass yields indicate that the glucose sensor deletions may be a viable strategy to increase yields of growth-associated products in engineered yeasts.

Interestingly, we also observed that deletion of Snf3 and Rgt2 negatively impacts fermentation of xylose (Figure S2). Recent observations indicate that xylose interacts with at least one of these sensors and can induce signal transmission, as deletion of either glucose sensor leads to decreased HXT1 expression during xylose fermentations. [19] As such, yeast strains lacking the glucose sensors may have altered transporter expression profiles which are sub-optimal for xylose uptake. This possibility is compatible with our observation that deletion of the glucose sensors had little effect on cellobiose fermentation (Figure S3). As cellobiose fermentation depends on transgenic expression of the cellobiose transporter CDT-2, altering native transporter expression leaves cellobiose uptake unaffected.
Consumption of glucose/xylose mixtures by engineered yeast usually results in two stages: rapid consumption of glucose followed by slow xylose assimilation. However, reduced glycolytic flux allows simultaneous uptake of both glucose and xylose. [23] This, coupled with the findings shown here - that xylose is a superior carbon source for lactic acid production and that slow glucose consumption yields more lactic acid than fast glucose consumption - led us to hypothesize that a slow glucose-consuming, glucose/xylose co-fermenting strain could convert mixed sugars into lactic acid at high yield. Therefore, the R. oryzae LdhA gene was introduced into the glucose/xylose co-fermenting engineered yeast strain SR8#22. [23] Then, the SR8L strain and the SR8#22L strain were compared in fermentations of glucose/xylose mixtures.

DISCUSSION

The Snf3/Rgt2 sensors control glucose perception in yeast [20] and initiate a signaling cascade which enables precise expression of hexose transporters in response to extracellular glucose concentrations. [32] As expected, deletion of these sensors negatively impacts the fermentation of glucose (Figure 2, Figure S1). Yeast cells without functional glucose sensors tend to produce less ethanol, accumulate more biomass, and require longer periods of time to complete a fermentation. Although we found that deletion of the glucose sensors benefits the production of lactic acid in both galactose and glucose cultures, we were able to match the increased yields of strains with active sensors by reducing the glucose assimilation rate. Nonetheless, the inherently increased biomass yields indicate that the glucose sensor deletions may be a viable strategy to increase yields of growth-associated products in engineered yeasts.

Interestingly, we also observed that deletion of Snf3 and Rgt2 negatively impacts fermentation of xylose (Figure S2). Recent observations indicate that xylose interacts with at least one of these sensors and can induce signal transmission, as deletion of either glucose sensor leads to decreased HXT1 expression during xylose fermentations. [19] As such, yeast strains lacking the glucose sensors may have altered transporter expression profiles which are sub-optimal for xylose uptake. This possibility is compatible with our observation that deletion of the glucose sensors had little effect on cellobiose fermentation (Figure S3). As cellobiose fermentation depends on transgenic expression of the cellobiose transporter CDT-2, altering native transporter expression leaves cellobiose uptake unaffected.

Using the D452 iH1L and D452 iH2L strains with direct control over hexokinase expression, we identified a connection between glycolytic flux and the yields of lactic acid and ethanol from glucose. Flux alone, however, cannot account for everything: there must be some other physiological mechanism or factors that lead to the enhanced lactic acid yields from engineered yeast strains cultured on xylose. For instance, glucose induces degradation of the Jen1 transporter, [9] which along with Ady2 is known to modulate lactic acid production in yeast. [33] Cells cultured on xylose display increased expression of JEN1 and ADY2 compared to cells cultured on glucose, and deletion of these two transporters reduces lactic acid yields from strains cultured on xylose. [8,27] A low rate of sugar metabolism coupled with a strong capacity for exporting lactic acid may explain the nearly theoretical-maximum yields of lactic acid observed during the mid-exponential phase of xylose fermentations. As such, coupling overexpression of JEN1 and ADY2 with a reduced glycolytic flux may enable high lactic acid yields from glucose without deleting endogenous PDCs.

Notably, the Km of R. oryzae LdhA on pyruvate is approximately 0.5 mM, while the major S. cerevisiae PDC isoforms Pdc1, Pdc5, and Pdc6 exhibit Km values of 4.7, 9.9, and 8.2 mM, respectively. [4,34] Similarly, S. cerevisiae pyruvate dehydrogenase exhibits a much higher affinity for pyruvate than PDC. These varied kinetic parameters have been proposed as a determining factor of the flux distribution at the pyruvate branch point in yeast. [35] A slow metabolic rate keeps the intracellular pyruvate pool small, which favors the high-affinity enzymes among those using pyruvate as a substrate. [35] It is thus possible that some currently unknown intracellular glucose signaling mechanisms modify the kinetics of the PDC enzymes and impact the flux distribution at the pyruvate branch point.
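The branch-point argument can be made concrete with Michaelis-Menten rate laws. The sketch below uses the Km values quoted above and assumes, purely for illustration, equal Vmax for LdhA and each PDC isoform; Vmax values are not given here, and real values would shift the split.

```python
# Minimal sketch: competition for pyruvate between R. oryzae LdhA
# (Km ~ 0.5 mM) and the S. cerevisiae PDC isoforms (Km = 4.7, 9.9,
# 8.2 mM), assuming equal Vmax for every enzyme; this assumption is
# made only for illustration.
KM_LDHA = 0.5
KM_PDCS = [4.7, 9.9, 8.2]  # Pdc1, Pdc5, Pdc6

def mm_rate(s_mM: float, km: float, vmax: float = 1.0) -> float:
    """Michaelis-Menten rate v = Vmax * S / (Km + S)."""
    return vmax * s_mM / (km + s_mM)

for pyruvate in (0.2, 1.0, 5.0, 20.0):  # mM, small to large pools
    v_ldh = mm_rate(pyruvate, KM_LDHA)
    v_pdc = sum(mm_rate(pyruvate, km) for km in KM_PDCS)
    frac = v_ldh / (v_ldh + v_pdc)
    print(f"[pyruvate] = {pyruvate:5.1f} mM -> {frac:.0%} of flux to lactate")

# At low pyruvate (slow glycolytic flux) LdhA's high affinity dominates;
# at high pyruvate the PDC isoforms capture most of the flux.
```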
13C metabolic flux analysis (13C-MFA) of S. cerevisiae has shown that fermentative conditions result in about 75% of pyruvate flux moving toward acetaldehyde. [36] In contrast, under a reduced Crabtree effect condition, only about 1% of pyruvate flux is directed toward acetaldehyde. [37] While in fermentative conditions only about 6% of pyruvate enters the mitochondria, a reduced Crabtree effect condition leads to approximately 75% of pyruvate entering the mitochondria. [36,37] The high yields of lactic acid from xylose thus strongly agree with prior reports that xylose metabolism elicits a respiratory response in engineered yeast. [38]

Expression analysis of the fast glucose-consuming strain SR8 and the slow glucose-consuming strain SR8#22 reveals no significant differences in expression of PDC1, PDC5, PDC6, or the transcription factor encoded by PDC2 (Figure S7). Additionally, no difference was observed in expression of PGK1, whose promoter drives expression of LdhA in our study. Taken together, these observations indicate that transcriptional regulation alone cannot explain our results and that some post-translational regulation is key to explaining the control that the glucose assimilation rate exerts over flux distribution at the pyruvate branch point.

Deletion of the PDC genes to enhance production of non-ethanol products can be undesirable, as the resulting PDC-negative strain will be C2 auxotrophic and require supplementation of either acetate or ethanol, in addition to exhibiting a reduced growth rate. An alternative method has been proposed to prevent ethanol production without the C2 auxotrophy through deletion of the alcohol dehydrogenase isoforms. [39,40] Expression of a lactate dehydrogenase in a strain with deletions in the alcohol dehydrogenases ADH1, ADH2, ADH3, ADH4, ADH5, and SFA1, the glycerol-3-phosphate dehydrogenases GPD1 and GPD2, and the pyruvate decarboxylase PDC1 enabled production of lactic acid from glucose without ethanol accumulation or any requirement for supplementation of C2 compounds. However, the resulting strain completed fermentations extremely slowly: over 40 h were required for complete consumption of just 2 g L−1 glucose. In contrast, our highest-yielding strain SR8L was able to convert 20.9 g L−1 xylose into 10.6 g L−1 lactic acid in under 40 h.

In our experiments with co-fermentation of glucose and xylose, we employed medium containing approximately equivalent amounts of the sugars at 20 g L−1 each. Optimizing these concentrations and the ratio of glucose to xylose can aid in maximizing the yield and productivity of lactic acid in a laboratory setting. For industrial applications, lignocellulosic hydrolysates typically contain approximately 60%-70% glucose and 30%-40% xylose. [41] Nonetheless, the results presented here serve as a general proof-of-concept, and it is reasonable to expect that the trends we observed will hold regardless of the medium sugar concentrations and ratio. In addition to the increased lactic acid yields shown here, reducing the glucose phosphorylation rate has been shown to enable simultaneous consumption of glucose/xylose and glucose/galactose mixtures. [23] After expressing the LdhA gene in the glucose/xylose co-consuming strain SR8#22, we observed simultaneous bioconversion of glucose and xylose into lactic acid and ethanol. Enabling equal co-consumption of both glucose and xylose avoids a two-phase fermentation in which glucose is rapidly converted to ethanol, followed by a slower conversion of xylose into predominantly lactic acid. Such a two-phase fermentation is incompatible with techniques such as continuous fermentation, as glucose is continuously fed into the reaction chamber and will constantly be preferred over the accumulating xylose.

Although the results here represent a step in the right direction, additional challenges remain before S. cerevisiae is a viable production host for commercial biorenewable lactic acid. The experiments performed here employed calcium carbonate as a neutralizing agent, which is too costly for an economically sustainable biorefinery. Further engineering to enhance S. cerevisiae tolerance to low pH conditions or to inhibitors found in common feedstock hydrolysates will be essential to any viable commercial effort. It is also possible that S. cerevisiae may never be an optimal production host for biorenewable lactic acid. Identification and engineering of non-conventional yeasts such as Issatchenkia orientalis have revealed the remarkable natural resistance to inhibitors and low pH inherent to certain species. [42] Despite the vast differences between baker's yeast and many non-conventional yeasts, some exist which are metabolically very similar to the highly studied S. cerevisiae. Future researchers may aim to strike a balance between identifying new yeasts with optimal natural characteristics and the ability to transfer over the vast amounts of knowledge gained from years of research into S. cerevisiae. Alternatively, the mechanisms underlying the increased resistance of these non-conventional yeasts may be identified and engineered into S. cerevisiae. For example, the
I. orientalis GPI-anchored protein encoded by IoGAS1 confers low pH tolerance when expressed in engineered S. cerevisiae strains. [13,43] Future researchers aiming to create lactic acid and other pyruvate-derived molecules in S. cerevisiae strains expressing native PDC enzymes may find some inspiration in the physiological changes present during xylose metabolism. Using xylose as a carbon source has been shown to increase yields of a variety of products, such as isobutanol, [44,45] 2,3-butanediol, [46,47] poly-3-D-hydroxybutyrate, [48,49] squalene, and amorphadiene. [50] A deeper understanding of the exact phenomena underlying enhanced product yields from xylose may allow adapting this knowledge to other carbon sources, such as sucrose, glucose, and fructose, which are cheaper and more abundant than xylose. While our findings demonstrate that flux, but not perception, contributes to the enhanced yields of lactic acid, additional research will need to be performed to investigate whether these conclusions remain valid for other products derived from pyruvate.
2023-02-02T06:16:19.577Z
2023-02-01T00:00:00.000
{ "year": 2023, "sha1": "fbc9263d80512c8a1c70316371693ad2ab0bbb3c", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/biot.202200535", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "013a2c85fe509871b382d78692720380341f7d10", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
231900487
pes2o/s2orc
v3-fos-license
Jail-based competency treatment comes of age: Multi-site outcomes and challenges to the implementation of an evidence-based forensic continuum

Abstract

The jail-based competency treatment (JBCT) model has become an established forensic practice across the country. From the perspective of implementation science and the three core elements of the Promoting Action on Research Implementation in Health Service (PARiHS) framework, the JBCT model is a remarkable example of how context (an unrelenting and overwhelmingly strong demand for forensic beds) has driven multiple state governments to facilitate implementation of a methodology in the absence of empirical evidence supporting its efficacy. This 7-year study of outcomes from four JBCT program sites provides this much-needed evidence by showing that JBCT restored 56% of 1553 male and 336 female patients over an average of 48.7 days. At the same time, the study highlights how variations in JBCT models, methods, and preadmission stabilization time present challenges to planned and effective implementation of evidence-based practice at the statewide system level. By identifying differential responsiveness to JBCT treatment by diagnosis and other factors, the study suggests preliminary implementation ideas for what types of patients are well served by the JBCT model as part of a continuum of restoration options that includes inpatient, outpatient and diversion. Significant findings showed that JBCT patients were restored at a higher rate and in a shorter time if they were female, < 20 years old (highest restoration rate; those < 60 years old also showed significantly better rates), free of co-occurring intellectual and cognitive deficits, and malingering. Of the major diagnoses, schizoaffective disorder required a significantly longer length of JBCT treatment for restoration, and showed lower restoration rates, than schizophrenia and bipolar disorder, although this was moderated by a significant interaction with abuse of amphetamines.

| INTRODUCTION

The jail-based competency treatment (JBCT) model has revolutionized the field of restoration of competency and is now widely accepted and used in multiple states. This recent and rapid development is remarkable from the perspective of implementation science because the JBCT model continues to expand despite its serious lack of evidence-based support.
Using the three core elements of the Promoting Action on Research Implementation in Health Service (PARiHS) framework (Kitson, Harvey, & McCormack, 1998), the implementation of JBCT is an extraordinary demonstration of how context (an unrelenting and overwhelmingly strong demand for forensic "beds") has driven state governments to facilitate implementation of a methodology that lacked any strong empirical evidence. In many ways, the implementation of JBCT is a real-world example of what not to do in terms of the ideals of implementation science. The research evidence should be strong before implementation is justified. It is not. There should be careful planning and "deliberate and purposive actions" to implement a new treatment (Proctor et al., 2011). Yet implementation of JBCT has been unsystematic and disorganized at best. The criteria for evaluating its effectiveness should be agreed upon before implementation. The only prior "agreement" regarding JBCT, however, has been a shared pursuit of a solution to the national "competency crisis." This article will first describe the context which has spawned the JBCT model and driven its haphazard growth. After a summary of the limited empirical evidence to date, the article will present a large empirical study that provides some initial evidence to support JBCT. In presenting the methodology and results of the study, the article will elucidate the multiplicity of factors and real-world barriers that undermine the implementation ideals of sound research, careful planning and evaluation. It concludes with a discussion of how implementation science may better guide the future implementation of evidence-based research in JBCT practice.

| The context for the growth of JBCT

Gowensmith (2019) coined the term "competency services crisis" to describe the unprecedented escalation in the demand for competency restoration and related forensic services in recent years in the United States. A recent report by the National Association of State Mental Health Program Directors (2017) showed a 25% increase in the number of patients who were incompetent to stand trial (IST) receiving competency restoration services between 1999 and 2005, and a 37% increase between 2005 and 2014. This rising demand for competency restoration services, combined with a severe shortage of state forensic hospital beds, has created waiting lists and led many states and counties to try less intensive and/or alternative methods of competency restoration, such as outpatient restoration, restoration provided to ISTs in the general population of jails, pretrial diversion services for mental health and substance abuse, use of the Sequential Intercept Model (Callahan & Pinals, 2020a) for diversions, and JBCT units. As observed by Callahan and Pinals (2020b, p. 691), the nation's IST system "is in crisis" and the solution is "best facilitated by support for empirical research on the individual- and system-level factors that contribute to the waitlists and system paralysis." This study adds empirical support for the JBCT model but emphasizes that JBCT should be just one choice in a continuum of restoration service options. Ideally, a continuum approach enables the IST system to match the individual's restoration needs to the type and intensity of restoration services needed. A recent publication by Ash et al. (2020) presents a real-world example of a successful continuum in metropolitan Atlanta.
Depending on their needs, IST patients in that program can be referred to six options: outpatient restoration at a local public psychiatric hospital; individual competency tutoring while housed in the general jail population; diversion out of corrections for mental health services; "specialized day treatment" in a designated 16-bed JBCT unit; a special program for women; and inpatient hospitalization. Similarly, three recent publications have discussed JBCT as one option within an array of options. In the first, Wik (2018) surveyed and distinguished two types of JBCT: "full scale" JBCT programs typically dedicate a unit/pod/area within a jail for a day treatment-like program of individual and group-based therapeutic and competency-focused activities, while usually serving as a housing unit for IST patients; whereas "time-limited" JBCT services are typically limited to competency tutoring/supports provided to individuals while they are awaiting admission to the state hospital (called "stop-gap" services by Gowensmith, Murrie, and Packer, 2014). In the second article, Heilbrun et al. (2019) reviewed and compared outcomes for restoration programs in multiple settings, including 10 state/forensic hospitals, three prison psychiatric units, eight community-based settings, eight jail-based settings, and three others. Acknowledging the potential of jail-based and community-based alternatives to traditional hospital-based restoration, the Heilbrun group proposed a "system-level decision tree" for determining the best restoration service option for IST patients. In the third article, Danzer, Wheeler, Alexander, and Wasser (2019) conducted the first objective review and discussion of the differential benefits of delivering restoration of competency treatment in three main types of settings: traditional hospital, jail-based and outpatient (e.g., Gowensmith, Frost, Speelman, & Therson, 2016; Johnson & Candilis, 2015; Mikolajewski, Manguno-Mire, Coffman, Deland, & Thompson, 2017). The Danzer team systematically considered the advantages and disadvantages of each setting with regard to three primary outcome measures: (1) rates of restoration, (2) lengths of stay necessary to achieve restoration, and (3) lengths of stay necessary to determine non-restorability. As shown in these articles, restoration services should be viewed as a continuum in which JBCT is designed to augment, not replace, traditional hospital-based forensic treatment. In the past decade, the JBCT model has proliferated, and JBCT programs are now active in multiple states. Most notably, "jail-based competency programs have become the rule statewide" in the State of Arizona, where five county-based JBCT programs provide nearly all restorations, while referrals to the state hospital for ISTs are few (Bloom & Kirkorsky, 2019, p. 1). Similarly, the California Department of State Hospitals (DSH) has embraced the model and developed a statewide system that offers a dozen county-based JBCT units with a total capacity of over 425 beds. Historically, the haphazard proliferation of JBCT programs - arising independently in multiple jurisdictions and without any planned evaluative research designs - created an impossible situation for effective implementation science. It begins with the fundamental challenge of defining the JBCT model itself, which Wik (2018) broadly categorized as either "full scale" or "limited." To the best of our knowledge, there are full-scale JBCT programs in eight states, and limited JBCT programs in five more.
Full-scale JBCT programs (coded as "F") are structured programs that provide intensive daily treatment with both individual and multiple group-based rehabilitative activities, while limited programs (coded as "L") are constrained to one-to-one counseling in the jail on a less-than-daily frequency. In rough chronological order of implementation, these JBCT programs include Virginia (F: 1997-2002), among others.

| Advantages and disadvantages to implementation

The growth of the JBCT model has been fueled by multiple perceived advantages, including accelerated access to more timely restoration and mental health services, reduction in prolonged incarceration for individuals waiting for limited hospital beds, reduction in the cost of restoration compared with hospitalization, removal of incentives for malingering, and improved proximity to local attorneys and family support (Ash et al., 2020; Danzer et al., 2019; Jennings & Bell, 2012; Kirkorsky, Gable, & Warburton, 2019; California Legislative Analyst's Office, 2012; Rice & Jennings, 2014; Wik, 2018). Presumably, the greatest advantage of JBCT would be that it yields superior outcomes in terms of rates of restoration and length of time to restore compared with hospitalization and other outpatient restoration programs. In their "attempted meta-analysis" of 40 years of restoration studies in a wide array of treatment settings, Pirelli and Zapf (2020) found a base rate of 81% restoration and a median length of stay of 147 days overall, and 175 days in studies that measured only a single group undergoing restoration treatment. Similarly, Zapf and Roesch (2011) reported that 75% of patients are restored in less than 182 days, while Gowensmith et al. (2016) reported a lower restoration rate of 70% for 13 outpatient programs and an average length of treatment of 149 days. By contrast, as shown in Table 1, the available JBCT outcome data show similar rates of restoration, but in substantially shorter lengths of treatment. Of the nine JBCT studies that reported restoration rates, six programs achieved restoration rates of 79-90% within an average of 77-120 days. Most notably, the three JBCT studies that were specifically designed to restore patients within a targeted term of 60-70 days achieved much lower restoration rates of 55%-60%, but within a much better average of just 45-57 days. One study by Ash et al. (2020) reported a restoration rate of just 40%, but also achieved 31% more diversions and an average of 98 days to achieve restoration. This broad comparison suggests that JBCT programs can achieve similar rates of restoration as traditional hospital-based treatment (and outpatient restoration) in significantly less time. But the rates of restoration are decidedly worse when JBCT programs are intentionally designed to be time-limited (e.g., a target of 70 days). For example, the 83% rate of restoration of the Virginia JBCT program (Jennings & Bell, 2012), which had no time limits for restoration, was much higher than the 55-56% of the California JBCT studies, which did have a target of 70 days (Rice & Jennings, 2014; and the present study). In short, JBCT offers a less expensive option - in terms of fewer days and lower per-diem costs - but there is a trade-off in successful restorability if the program is time-limited. The JBCT model also has its disadvantages and critics.
The most fundamental disadvantage is the austere, restrictive, and decidedly untherapeutic environment and culture of jails (Bloom & Kirkorsky, 2019; Douglas, 2019; Felthous & Bloom, 2018; Kapoor, 2011). A related concern is that jail-based programs may not have adequate mental health staffing and/or limited availability of therapeutic modalities and acute psychiatric support (Felthous & Bloom, 2018; Wik, 2018). Although large jails may have greater mental health resources than small ones, such as acute, semi-acute or designated mental health units, there is still the question of the appropriate allocation of staff resources to the patients in the JBCT. Other criticisms have focused on the ethical, legal and clinical problems of involuntary medication (IM) in correctional settings (Bloom & Kirkorsky, 2019; Danzer et al., 2019; Douglas, 2019; Felthous & Bloom, 2018; Kirkorsky et al., 2019). Moreover, if a JBCT program is prohibited from administering IM, the advantage of accelerated access to restoration services can disappear, because the attempt to restore with lower-intensity JBCT services can delay the initiation of hospital restoration, where forced medication is available (Ash et al., 2020). Another criticism is that the JBCT model is explicitly focused on reducing symptoms and barriers to competency that can be addressed in a time-limited period. As such, the treatment received is not designed to address all the individual's psychiatric needs (Wik, 2018). Other critics point to the potential conflicts of interest that arise from the lack of separation between evaluators and treaters in determining when a patient is restored to competency (Callahan & Pinals, 2020b; Kapoor, 2011) and when treatment concerns must be balanced with the control, security and authority structure of the correctional system (Bonner & Vandecreek, 2006). Finally, and specific to this article, critics point to the continuing lack of empirical support for the JBCT model (Danzer et al., 2019; Kirkorsky et al., 2019; Wik, 2018). Despite the proliferation of JBCT programs, the variability in its implementation across multiple states and counties has been a challenge to building a necessary evidence base, in two respects. First, the impetus to implement JBCT has been driven by state agencies which, in seeking practical solutions to the IST crisis, have foregone the step of designing formal evaluative research before implementation. Consequently, there are only a handful of peer-reviewed publications that provide details about JBCT program outcomes, and some unpublished presentations and reports (see Table 1). Second, the extreme variation among JBCT programs is a barrier to comparing the effectiveness of different JBCT programs against traditional inpatient restoration. Even if the definition is restricted to "full scale" jail-based units that house IST patients together, there are differences in size/capacity, eligibility criteria/population served, the mix of staffing, capacity for IM, program components, separation of evaluators and treaters, and other parameters. The empirical outcomes from this multi-site, 7-year study of 1889 IST patients seek to address this lack of evidence-based implementation by presenting large-scale aggregate outcomes for a particular JBCT model. This exploratory study has the advantage of allowing comparison of outcomes among four separate JBCT units that applied the same methodology.
All four JBCT programs used separate housing units for ISTs in which competency services are delivered, applied the same staffing model, applied the same admission criteria and process, and used the same curriculum and components of restoration treatment (see Method section). The origin of the JBCT model used in this study is traced to an innovative program that created a temporary 35-bed acute inpatient forensic unit in a regional jail in Virginia in 1997 (Jennings & Bell, 2012). Although the jail-based unit was intended only to augment capacity at the state hospital during the renovation of its secure forensic facility, the program proved that IST patients could be humanely stabilized, treated and restored to competency in a jail setting. In 5 years of operation, this jail-based unit achieved an 86% rate of restoration in an average of 77 days for about 484 IST patients. Seven years later, in response to a call to address the shortage of state hospital beds and waiting lists in California, this model was replicated as a 20-bed pilot program in the San Bernardino County jail in 2011 (Rice & Jennings, 2014). The goals were to initiate psychiatric care and restorative treatment sooner, while reducing the prolonged time that IST individuals would otherwise wait in jail for transfer to the state hospital for restoration. The pilot also sought to maximize resources by distinguishing those IST patients who could be restored in a short-term program, while conserving state hospital beds for those who required longer-term intensive treatment. This jail-based restoration of competency ("ROC") pilot program succeeded in meeting its objectives, earning a best practices award for its humane treatment from the California Council on Mentally Ill Offenders (COMIO) and saving an average of $70,000 per restoration compared with the state hospital (California Legislative Analyst's Office, 2012). Published outcomes for the first 30 months showed that 55% of patients were restored in an average of 57 days (Rice & Jennings, 2014). Given these promising results, the California DSH supported the continued expansion of the model as new JBCT programs opened in other county jails. Currently, there are about a dozen JBCT programs in California with a combined capacity of over 425 beds.

The current study was exploratory and had no a priori hypotheses to be tested because the data were collected and analyzed retrospectively from an existing database of variables and outcomes. The data analytic plan was to look at outcomes based on variables that are commonly studied in competency restoration research, such as gender, race/ethnicity, age, diagnostic category, medication compliance and presence of intellectual disabilities and substance abuse. The goal was to discern how JBCT results may be consistent or inconsistent with other studies regarding these variables. Furthermore, in terms of implementation science, it was hoped that empirical results might guide the implementation of evidence-based JBCT practice by suggesting categories of individuals that respond well (or poorly) to JBCT as compared with traditional hospital restoration. We also examined whether differences in how JBCT was implemented made any difference.
| Sample

Data were collected from monthly utilization reports to the California DSH, which provided demographic information (i.e., age, race and gender); dates of admission, restoration and transfer (used for calculating lengths of stay); medication compliance; and diagnostic information about the subjects. DSH gave its approval to analyze and present the data for research and quality improvement purposes. Data were aggregated and analyzed by two of the authors who are independent of the operation of the four JBCT program sites and could objectively evaluate the data. By combining the standardized spreadsheets from the four programs into a single spreadsheet, the researchers averted potential errors from manual entry or transfer of data. In the few instances in which the automated length of stay calculation was less than zero days or uncalculated, the researchers requested verification of dates from the JBCT program. For these reasons, the obtained data were believed accurate and reliable. Of the sample, 29.2% were African-American, 3.6% Asian, and 1.8% Other.
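The variables drawn from the utilization reports map naturally onto a simple per-patient record. A minimal sketch follows; the field names are illustrative, not the DSH spreadsheet's actual columns.

```python
# Minimal sketch of a per-patient record mirroring the variables the
# study drew from the monthly utilization reports. Field names are
# illustrative placeholders, not the actual DSH spreadsheet columns.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class JbctRecord:
    site: str                       # e.g., "SB-OutCounty" (hypothetical label)
    age: int
    gender: str
    race: str
    diagnosis: str                  # primary diagnosis
    substance_abuse: Optional[str]  # co-occurring substance abuse, if any
    med_compliance: Optional[str]   # not-prescribed/full/intermittent/refused
    admitted: date                  # admission to the JBCT program
    opined: date                    # date opined restored or not restorable
    restored: bool

    @property
    def treatment_days(self) -> int:
        """Days from admission to the competency opinion."""
        return (self.opined - self.admitted).days
```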
| Procedure/treatment

The IST patients were housed separately in traditional male or female residential pods within the respective county jails, with dayroom space and/or auxiliary space for group programming. The treatment teams for each JBCT unit were similar to a traditional forensic psychiatric hospital unit, including a forensic psychiatrist, forensic psychologist, psychiatric nurse, social worker, recreational therapist, licensed psychiatric technician and clerk to coordinate scheduling, court dates, transports and reports. The treatment teams were exclusive to the JBCTs and separate from other jail mental health personnel. The "direct care staff" were security officers who were specially trained in mental health and positive behavioral supports. A designated deputy participated in team meetings and was the only security officer privy to clinical information. Treatment began with a multidisciplinary assessment of the person's psychological functioning, suicide and behavioral risk, current level of trial competency and likelihood of malingering. A battery of psychological tests was used to evaluate cognitive abilities, social and psychological functioning, psychiatric symptoms and potential malingering. As needed, the psychologist used other tests for specific targeted areas of deficit. Fourteen specific competency deficits were assessed and identified using the Revised Competency to Stand Trial Assessment instrument (R-CAI) or the Competence-related Abilities Rating Scale (CARS; Hazelwood & Rice, 2011). Assessment continued through the course of the admission to measure response to competency treatment, monitor progress and identify new problems to target for restoration. Based on the assessments, the treatment plan was individualized consistent with the person's level of functioning and continued to be revised to reflect progress in treatment. It was common for the team to discuss the treatment plan informally on a daily basis and formally discuss treatment weekly. The competency treatment curriculum was "standardized" by giving each patient the same 36-page workbook titled Trial Competency Education: Patient Workbook and Study Guide, and the 21-page workbook titled Understanding My Legal Case. Since the workbook chapters correspond to the 14 competency barriers assessed by the R-CAI or CARS, the team could flexibly focus on the competency barriers that were specific to the individual (rather than a generic one-size-fits-all approach to competency education). The JBCT programs focused on individual strengths and targeted abilities that are related to competency, including remediation of deficits and alleviation of acute symptoms. The primary objective for most IST patients was to resolve the psychosis, when present, to enable the patient to regain general thinking abilities. This, in turn, would facilitate the patient's capacity to understand the legal/court process and co-operate with legal counsel in mounting a defense. If competency could not be restored, the team would compile evidence to credibly opine that the patient was not restorable. The treatment team combined the proactive use of psychiatric medications, motivation to participate in rehabilitative activities, and multimodal cognitive, social and physical activities that addressed competency in a holistic fashion. Individuals in the program typically met one-on-one with a treatment professional at least twice daily about issues related to regaining their mental health and/or competency. They were also engaged in 3.5-5.5 hours of group-based psychosocial rehabilitative activities each weekday depending on the individual's current capacities (experience showed that the lower-functioning patients could not tolerate more than 3-4 hours of focused work per day). Although allowed by the California Penal Code, the JBCT programs did not deliver IM (except for the San Diego program, which served less than 5% of the total subjects). Instead, the JBCT treatment teams encouraged voluntary assent to medication by building rapport, using persuasion and offering simple incentives, such as Ramen noodles, snacks, toiletries, and access to movies/TV shows. Independent opinions of restoration of individuals were made by a psychologist who was not part of the team and did not have a therapeutic relationship with the individual.

| Measures

There were two dependent variables for measuring outcomes: length of treatment to restore (LOTR) is defined as the number of days from admission to the JBCT program to the date that the individual is opined to have been restored to competency (or not); and rate of successful restoration is the percentage calculated by dividing the number of individuals in a given category who were restored by the total number of persons in that category (both restored and not restored). Aggregation of these individual-level measures allowed for inferences of system-level implementation outcomes.

| Primary independent variables

Analyses were conducted to determine significant differences in the two outcome measures based on the following independent variables: program site; gender; diagnostic categories; and co-occurring substance abuse, intellectual/cognitive disorders and medication compliance.

Outcomes by treatment site

The Tukey post hoc analysis showed significantly shorter LOTR for the out-of-county program than for all three of the in-county program sites (p < 0.001). χ² analysis also showed a significant difference in rate of restoration [χ²(6, N = 1889) = 13.4, p < 0.037, Cramer's V = 0.060]. The rates of restoration for the SB-OutCounty (58.4%) and San Diego In-County (60.9%) sites were significantly higher than those of the SB-InCounty (53%) and Riverside In-County (51.6%) sites.
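Both outcome measures, and a site-level χ² comparison like the one above, are straightforward to compute from such records. A minimal sketch using scipy follows; the cell counts are illustrative placeholders, not the study's contingency table.

```python
# Minimal sketch: rate of restoration per site and a chi-square test of
# independence across sites. The counts below are illustrative
# placeholders chosen to echo the reported percentages; they are not
# the study's actual cell counts.
from scipy.stats import chi2_contingency

# rows = sites, columns = [restored, not restored]
table = {
    "SB-OutCounty": [350, 249],
    "SB-InCounty":  [300, 266],
    "Riverside":    [250, 235],
    "San Diego":    [56, 36],
}

for site, (restored, not_restored) in table.items():
    rate = restored / (restored + not_restored)
    print(f"{site}: {rate:.1%} restored")

chi2, p, dof, _ = chi2_contingency(list(table.values()))
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}")
```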
As the SB-OutCounty program was the only one serving out-of-county patients, analyses were conducted to determine if its shorter LOTR and higher restoration rates were attributable to administrative procedural differences. To be specific, individuals living within their local home county could be admitted directly to the three in-county sites; by contrast, the out-of-county patients serviced by the SB-OutCounty program had to wait for authorization by their local county courts and wait for available transportation to the treatment site. It is possible that the extended time needed to process and transport individuals to the one out-of-county treatment site would allow more time for patients to detoxify from substances, perhaps receive some initial psychiatric treatment from the mental health provider in the originating jail, or gain some degree of spontaneous stabilization and recovery from acute psychiatric symptoms. Three cohort groups served by the four JBCT programs were identified to test this hypothesis: (1) those served within their home counties (San Diego, San Bernardino, and Riverside); (2) those referred from Los Angeles County (which offered transports twice each week to SB-OutCounty); and (3) those from all other California counties (which were limited to a single transport each week to SB-OutCounty; Table 3).

Outcomes by gender and diagnoses

The proportions of patients who were restored to competency were 65.2% for females and 54.3% for males. A χ² test of independence showed that the difference in rate of restoration was significant for all diagnoses.

Outcomes by broad diagnostic categories

To facilitate a more meaningful analysis of differences between diagnoses, all variations of schizophrenia were combined into one group, all variations of intellectual and cognitive disorders into another, and all types of primary substance abuse into another, while diagnoses with an N < 10 were excluded. This yielded 10 "broad diagnostic categories," as shown in Table 4. A one-way ANOVA was conducted to test whether there were significant differences in LOTR across the 10 broad diagnostic groupings. A significant difference was found.

Outcomes by co-occurring substance abuse

In order of frequency of substance abuse, results showed the following: stimulant-amphetamine type (31.1%), cannabis (29.4%), alcohol (26%), stimulant-cocaine type (6.9%), unknown/unspecified (4.5%), opioid (3.8%) and all other types (< 2%). Overall, results found that 58.4% (1098/1880) of the patients had a primary or secondary diagnosis of substance abuse, with the highest percentage (32.8%) abusing amphetamine alone or in combination with alcohol and/or other substances (see Table). A χ² test of independence found that the relation between type of substance abuse and restorability was significant. In short, those abusing amphetamine appear to respond well to JBCT treatment. By contrast, those abusing alcohol did not - except when they were also abusing amphetamine. Those abusing alcohol alone, or alcohol in combination with substances other than amphetamine, showed the worst rates of restoration (at 56% and 56.9%, respectively), while those abusing alcohol with amphetamine, or alcohol with amphetamine and other substances, showed a higher rate of restoration (65.6% and 67.2%, respectively).
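The alcohol-by-amphetamine pattern just described is, in effect, an interaction on the odds of restoration, which could be tested directly at the patient level. A minimal sketch using the statsmodels formula API follows; the data frame and its rows are toy stand-ins, not the study's dataset.

```python
# Minimal sketch: testing an alcohol x amphetamine interaction on the
# odds of restoration with logistic regression. The 0/1 indicator rows
# below are toy data for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "restored":    [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1],
    "alcohol":     [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
    "amphetamine": [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
})

model = smf.logit("restored ~ alcohol * amphetamine", data=df).fit(disp=False)
print(model.params)
# A positive alcohol:amphetamine coefficient would mirror the pattern in
# the text: alcohol abusers were restored at higher rates when
# amphetamine abuse co-occurred.
```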
Further analyses looked at the interactive effect of amphetamine abuse on the three largest diagnostic groupings: schizophrenia, schizoaffective disorder and bipolar disorder (see Table 6). A 2 × 3 factorial ANOVA of the LOTR by the three diagnostic groups and co-occurring amphetamine abuse was conducted. A significant main effect for diagnostic group was found, qualified by a significant interaction with amphetamine abuse.

TABLE 6 Amphetamine abuse by the three largest diagnostic groups (length of treatment to restore). Note: Amphetamine use categories (marked *) had significantly higher rates of restoration and shorter LOTR than categories with no amphetamine use (marked **). Abbreviations: dx, diagnosis; LOTR, length of treatment to restore; SA, substance abuse; uncalc, uncalculated.

It is also notable that the rates of co-occurring amphetamine abuse were virtually identical for the three diagnostic groups at 37%, 39%, and 40% (or 34%, 36%, and 34% if counting those with either unknown or no substance abuse).

Diagnostic categories

Given the significant main effect showing that schizoaffective disorder had the poorest outcomes for LOTR compared with schizophrenia and bipolar disorder, a pattern analysis was conducted to see if there were differences in the pace of restoration over time for these three diagnostic groups. As shown in Figure 1, the total numbers of patients opined as restored and not restored for each of the three diagnostic groups were plotted across time within 5-day time cohorts. Taking the first graph as an example, it shows that 19 patients with schizophrenia were restored in the first 10 days of admission, while 14 were considered as not restorable. In the next 5-day period (days 11-15), 33 were restored and 12 were considered as not restorable, and in the next 5-day period (days 16-20), 56 were restored and 10 considered as not restorable. As shown in Figure 1, there were distinct patterns in the pace of responsivity to treatment. Trend lines were added to aid interpretation, which is explained in the Discussion section.

Outcomes by intellectual and cognitive disorders

Analysis showed that 7% of all patients had a diagnosis of an intellectual or cognitive disorder. A χ² test of independence showed a significant relationship between intellectual and cognitive deficits and restorability. Mean LOTR also varied across the cognitive categories, including learning disorders (M = 63.9 days); Tukey post hoc tests showed that the difference between intellectual disabilities and no cognitive deficits was closest to significance at p < 0.089.

Outcomes by age cohort

Analyses were conducted to determine any differences in outcomes based on the age of the individuals. Subjects were assigned to one of 17 age cohorts (of 3 years each) based on age at admission (e.g., ages 20-22.9). A χ² test of independence showed that the relation between age group and restorability was significant, with the poorest outcomes concentrated in the oldest cohorts (53-56, 56-59, 59-62, 62-65, and > 65 years). A one-way ANOVA showed no significant differences in LOTR by age cohort.

Outcomes by medication compliance

Medication compliance data were available for 736 of the 1889 patients. Analyses were conducted to see the impact of medication compliance on rates of restoration and LOTR. The proportions of patients who were restored to competency were 69.5% for those not prescribed medications, 62.2% for those fully adherent to their medications, 47.9% for those with intermittent adherence, and 29.9% for those who refused recommended medications. ANOVA of LOTR also showed a significant difference, albeit with a small effect size [F(3, 735) = 4.61, p < 0.003, ηp² = 0.019].
Tukey post hoc analyses indicated that those with no medications had significantly shorter LOTR than those who were fully adherent (p < 0.005) and those who were intermittently compliant (p < 0.019), and approached significance relative to those refusing medications (p < 0.056). When the "no medication" group was removed from the ANOVA, no significant differences were found [F(2, 646) = 0.72, p < 0.49, ηp² = 0.002].

| DISCUSSION

The jail-based restoration of competency model has become established as a viable and humane model for the restoration of individuals who are adjudicated IST. But it has lacked the empirical evidence that should be a prerequisite to its widespread implementation in multiple states. Based on the history of JBCT in California alone, one can see that all eight types of "implementation outcomes" posited by Proctor et al. (2011) are present: adoption, acceptance, appropriateness, feasibility, cost-effectiveness, penetration, sustainability and fidelity. For better or worse, JBCT has been "adopted" and "accepted" in state forensic policy and practice as clinically "appropriate"; has been shown to be operationally "feasible" and "cost-effective"; and has thoroughly "penetrated" the field. The model has also proven to have "sustainability", defined by Proctor et al. (2011, p. 70) as "the extent to which a newly implemented treatment is maintained or institutionalized within a service setting's ongoing, stable operations" and by Rabin, Brownson, Haire-Joshu, Kreuter, and Weaver (2008) as the integration of a given program within an organization's culture through policies and practices. Indeed, the California DSH has established formal policies, standards and guidelines for establishing the required components of jail-based restoration programs and it supports continuing implementation of the model at many locations. "Fidelity," the most often-measured implementation outcome, is the most problematic. It is defined as "the degree to which an intervention was implemented as it was prescribed in the original protocol" in terms of adherence, dose or amount of program delivered, and quality of program delivery (Proctor et al., 2011, pp. 69-70). As the first JBCT model of its kind in California, the model evaluated in this study constitutes the "original protocol," and adherence to the Department's subsequent JBCT guidelines could potentially function as the measure of fidelity to that protocol going forward. As described in the Conclusion, the issue of fidelity is both the fundamental barrier to implementation of evidence-based JBCT and the future road to its more effective implementation. This large multi-site study seeks to put "the horse in front of the cart" by providing the type of evidence-based support for the JBCT model that should be sought before its continued implementation in practice. The JBCT model is best applied as one option in a continuum that could include outpatient restoration, pretrial diversion, "off-ramping," jail-based restoration units, and hospital-level treatment. In an era in which forensic hospital beds are limited and waiting lists can be long, this continuum approach can optimize the use of resources and better match appropriate treatment to individual level of need. Based on our empirical findings showing differential effectiveness of JBCT treatment for different diagnostic categories, it is possible to make some preliminary recommendations about
screening and referring categories of IST patients to different levels of intensity, specifically, to either JBCT or hospital treatment (see summary Table 7). This step would facilitate more effective implementation of JBCT as an option.

Cognitive and intellectual disorders. The research literature is overwhelming in showing that individuals with intellectual and developmental disabilities (IDD) and neurocognitive disorders like dementia have poorer rates of restorability and longer lengths of treatment to restore competency. The results of this study are entirely consistent with this literature, suggesting that such individuals are poor candidates for a short-term restoration model like JBCT and are better served by direct referral to the state hospital or a forensic facility that specializes in IDD. However, it is also possible that these diagnostic groups may respond poorly to restoration efforts regardless of setting and intensity of services and that providing more intense hospital-level services is actually an ineffective use of resources.

Age. The results showed that young adults aged < 20 years had the highest rate of restoration (at 74%) in an average of 46.6 days. This high rate of restoration is consistent with three studies of juvenile restoration, which found a restoration rate of 71-76% in an average of 90-120 and 217 days (McGaha, Otto, McClaren, & Petrila, 2001; Warren et al., 2010, 2019). The notable difference is that this JBCT study achieved the same rate of restoration in a fraction of the time. Three factors help to explain this outcome. First, for most of these young adults, this is their first episode of acute psychosis, which tends to respond well to psychiatric treatment. Second, this age group has very low rates of amphetamine abuse, which suggests that they have not abused this drug for a period long enough to suffer permanent changes in brain chemistry. Third, treatment teams observe that the youngest patients are understandably frightened by what may be their first arrest or incarceration, which motivates them to be cooperative and to actively engage in treatment. At the opposite end of the age scale, those aged 62-65 and over 65 years had the lowest rate of restoration, at just 36%. This poor response to treatment may reflect the reduced energy and motivation of older people suffering from entrenched and intractable symptoms and behaviors established over a lifetime of chronic mental illness. It is fully consistent with the research literature, which is decisive in finding that older individuals show poorer restoration outcomes (Danzer et al., 2019; Gay, Vitacco, & Ragatz, 2017; Morris & DeYoung, 2012; Morris & Parker, 2008; Mossman, 2007; Valerio & Becker, 2016; Warren, Chuahan, Kois, Dibble, & Knighton, 2013). The results suggest that seniors may be poor candidates for JBCT and better directed to hospital-level care.

Medication refusal. The research literature shows that compliance with psychiatric medications is associated with better restoration outcomes (e.g., Galin, Wallerstein, & Miller, 2016; McMahon, Marioni, Lilly, & Lape, 2014; Warren et al., 2013). Not surprisingly, this study found that those refusing medications had a very low rate of restoration (only 30%), as compared with those with intermittent compliance (48%), full adherence (62%) and those who were not prescribed medications (70%).
Except for the San Diego site, the JBCT sites could not use IM, and refusal to take medications was one of the leading clinical reasons for inability to restore competency and the decision to transfer the patient to the state hospital. Of note, the San Diego JBCT had the highest rate of restoration, at 61%, and the shortest LOTR of the three in-county programs by nearly 1 week. Given that the San Diego data are limited to just 1 year of operation and only 92 patients, however, it is not possible to make a firm conclusion that IM is the primary reason for this higher restoration rate. Future research will be illuminating because the San Bernardino JBCT program has since added the capacity for IM, and it will be possible to compare JBCT performance with and without the IM option. Even if IM is available, however, the application of physical force to administer medications should be extremely rare. Use of persuasion, encouragement and incentives to comply with medications should always be the first and foremost choice. At the same time, the option of suggesting IM increases the team's flexibility in applying greater persuasion.

Primary diagnosis of amphetamine abuse. The results show that patients with a primary diagnosis of amphetamine abuse (or amphetamine abuse with alcohol abuse) tend to be restored very rapidly (in < 38 days) and at a high rate exceeding 90%. Presuming that recovery occurs more rapidly as the individual withdraws from the effects of active substances, these two categories appear well served in the controlled environment of the jail, where they can have respite time from drug abuse to regain stability (the exception would be acute and severe dependency, which requires careful medical attention and detoxification for withdrawal risks, but this treatment would probably have been completed prior to referral to the JBCT program). The high rate of amphetamine abuse also deserves discussion. One third of the subjects had a diagnosis of stimulants-amphetamine type (not including cocaine), either alone or in combination with alcohol and/or other drugs. This was the highest rate of all substance abuse, followed by cannabis (29%), alcohol (22%) and stimulants-cocaine type (7%). This extraordinarily high rate may be unique to this sample, which is predominantly drawn from southern California. Or it could be reflective of a national pattern that is not fully recognized. Presently the opioid abuse epidemic commands national attention, but an analysis of the rates of opioid abuse in this sample showed an average of only 4%, with a high of 5% in 2016.

Other substance abuse. Individuals with primary or co-occurring diagnoses of amphetamine, cannabis, opioid, or polysubstance abuse (N = 273) were restored at rates of 63-80%, while those with primary or co-occurring diagnoses of alcohol or cocaine abuse were restored at 56%, suggesting that patients who abuse alcohol and cocaine may present with more chronic conditions that require more time for restoration and recovery. The competency restoration literature on substance abuse appears minimal, and no studies of amphetamine abuse were found. Two studies suggest that co-occurring substance abuse leads to longer stays and lower rates of restoration (Morris & DeYoung, 2012; Warren et al., 2013), while one study found that alcohol use at the time of the offense was associated with more timely restoration (Nicholson, Barnard, Robbins, & Hankins, 1994).

Gender.
Studies of restoration by gender have found that females are more likely to be restored (Morris & Parker, 2008; Rice & Jennings, 2014; Warren et al., 2013) or have shown mixed or inconclusive results (Fogel, Schiffman, Mumley, Tillbrook, & Grisso, 2013; Schwalbe & Medalia, 2007). This study found that women did significantly better in JBCT in both the rate of restoration and LOTR. Our first guess for this strong result was a gender difference in the frequency of particular diagnoses that respond better or worse to JBCT treatment. In this study, however, women responded better than men in every diagnostic category except schizoaffective disorder (with a lower rate of restoration) and primary amphetamine abuse (with a longer LOTR). Since the gender difference in diagnostic frequency of schizoaffective disorder was < 1% and the disorder showed the poorest response to treatment for all patients, our data suggest that factors other than diagnosis explain why women achieved better outcomes than men. The difference may be attributable to a difference in the quality of social support in the female treatment milieu compared with the males. Even though the treatment curriculum and mix of group-based and individual treatment were the same for both genders, it was observed that the women typically expressed much higher levels of interpersonal communication, altruistic help and emotional support for each other, which likely improved motivation and outcomes. This observed difference in gender milieu suggests the value of programming initiatives that can enhance the interpersonal community support in male JBCT units.

Less frequent diagnoses that responded well to JBCT. There were four primary mental illness diagnoses that occurred at least 15 times but less than 40 times. The first, malingering, showed a restoration rate of 96% and an average LOTR of 50.7 days. The JBCT model may be ideal for individuals suspected of malingering because it eliminates any advantage of seeking the perceived comforts of hospital-level care as a way out of the jail setting, and it avoids the wasted use of an inpatient hospital bed. It is notable that all 25 cases of malingering were male. Restoration rates were > 90% and LOTR was close to 40 days for both major depression and depression, while the restoration rate was 87% and LOTR was < 30 days for stress reaction. These results suggest that these three diagnostic groups may be well served in the JBCT setting and in a relatively short time-frame. Lastly, given the profoundly fixed psychotic delusions characteristic of delusional disorder, it would be expected that short-term treatment in the JBCT would be unlikely to have a lasting impact. Indeed, results showed a 50% rate of restoration, but also a relatively short average LOTR at 42 days. It appears that JBCT may be effective with delusional disorder to the degree that clinical staff can restore overall competency despite (or "independent" of) the patient's major fixed delusion and, if not, can recommend transfer to the hospital.

Most frequent major diagnoses. Differences in response to JBCT were found for the three most common diagnoses: schizophrenia and bipolar and schizoaffective disorders. First, based on the pattern analyses conducted, it appears that bipolar disorder may be well served in the short-term jail-based setting.
Bipolar disorder showed extremely high rates of restoration in the first 25 days, followed by a steady decline in effectiveness from days 26 to 70, and then a sharp decline in effectiveness thereafter (see Figure 1). The high effectiveness of initial treatment of bipolar disorder is probably due to the prompt restoration of lithium to patients who had stopped medication, which often yields a rapid recovery. On the other hand, if the patient with bipolar disorder has not responded in the first 70 days, it appears that further time in the JBCT is decreasingly likely to achieve restoration. In addition, there is a significant interaction effect that shows that those with bipolar disorder and co-occurring amphetamine abuse respond poorly to treatment in the JBCT setting. There could be something about amphetamine that is more appealing to patients with bipolar disorder and/or interferes with the usual effectiveness of standard medications for bipolar disorder. Second, like bipolar disorder, patients with schizophrenia diagnoses showed extremely high rates of restoration in the first 25 days, followed by moderate effectiveness from days 26 to 50, and then a very sharp decline in restorability thereafter (see Figure 1). The high effectiveness of initial treatment is probably due to the prompt restoration of anti-psychotic medications to patients who have stopped medication, which often yields a rapid recovery. Alternatively, this pattern may be related to the positive short-term impacts of detoxification from amphetamine abuse, as suggested by the interaction effect showing that those with schizophrenia who also abuse amphetamine have a significantly shorter length of treatment than those who do not. Third, results show that JBCT was less effective in restoring patients with schizoaffective disorder, and they required the longest treatment time to be restored. Given its complex and variable mix of thought disorder and affective disruption, schizoaffective disorder may be the most difficult to accurately diagnose and generally needs more time to respond to treatment. Like bipolar disorder with co-occurring amphetamine abuse, patients with schizoaffective disorder who abuse amphetamine do significantly worse than those who do not. Perhaps it is the shared affective disturbance of bipolar and schizoaffective disorders that makes amphetamine more attractive to these patients and/or causes greater interference with recovery. Overall, it appears that JBCT treatment for schizoaffective disorder is quite poor in the short run, but slightly more effective after 55 days (with a spike of success at 56-60 days). Ultimately, however, the rate of restoration for schizoaffective disorder falls lower as the length of stay grows longer.

| Implications for implementation of evidence-based practice

The differential patterns of response across the three main diagnostic groups raise important questions about facilitating implementation of evidence-based JBCT practice. The overall average length of time to restore was 48.7 days, which is a third of the median of 147 days across restoration programs, broadly defined (Pirelli & Zapf, 2020). Clearly, more than half of the sample (56%) was restored in a relatively short time. The challenge is distinguishing which diagnoses and factors promote restoration using JBCT.
For example, the pattern analysis showed that many of those with bipolar disorder and schizophrenia responded quickly to JBCT restoration efforts in the first 25 days, while the remainder responded at a slower rate over the next 25 days, and only a few responded well beyond that time. In short, many "fast-responding" participants responded well to the JBCT, even many with the most severe psychotic disorders. At the same time, many participants (44%) did not respond to JBCT at all and needed transfer to the state hospital for additional restoration services.

Perhaps most noteworthy is this study's finding of significantly better outcomes for the out-of-county group. It was theorized that the added time waiting for admission to the JBCT allowed more time to detoxify from substances, possibly to receive some initial psychiatric medications in the originating jail, and/or to gain some degree of spontaneous recovery from acute symptoms. Given the primary goals behind implementation of JBCT nationally (reducing both the demand for traditional state hospital IST restoration beds and the excessive length of time that mentally ill individuals spend in jail awaiting those beds), it would be paradoxical, to say the least, to recommend a return to increased waiting time as a way of diverting ("off-ramping") unnecessary hospital admissions. Nonetheless, future implementation research needs to measure the time from arrest to admission to JBCT as a key moderating factor in improving implementation.

The issue of "fast responders" also emerged as an unexpected interaction effect with amphetamine use. Patients with schizophrenia responded more quickly to JBCT if they were abusing amphetamine, while those with bipolar disorder and schizoaffective disorder responded more poorly when abusing amphetamine. It is possible that some of the fast-responding group included people diagnosed with a functional psychotic disorder who were actually experiencing amphetamine-induced psychosis. It is also possible that the removal of amphetamine in the jail setting has a more rapid "alleviating" effect on the psychotic symptoms of schizophrenia. Given this new knowledge of high rates of amphetamine abuse in the sample, future implementation research will need to carefully differentiate persons with functional psychotic disorders abusing amphetamine from those experiencing amphetamine-induced psychosis.

Limitations and future research

In conclusion, this research contributes to the needed empirical support for the JBCT model itself, but also has major implications for the implementation of a continuum of restoration options at the system level. In particular, the results suggest which diagnoses may be well or poorly suited to JBCT's less intensive, short-term focus. But the study also has its limitations and raises new and complex challenges for the implementation of evidence-based forensic practices.

Design. The design was an uncontrolled, descriptive study of the JBCT program, which is weaker than other potential implementation study designs, for example a cluster-randomized or quasi-experimental design. Furthermore, the study was exploratory in nature and analyzed data retrospectively from an existing database, so it lacked any clearly defined hypotheses for testing. The lack of hypotheses increases the risk of chance findings.

Conflict of interest. As shown in Table 1, JBCT programs have been variously operated by state agencies, counties, universities and private for-profit companies, and most often in partnerships of these entities.
Critics of the model suggest that private companies may have a financial conflict of interest in delivering the service (Douglas, 2019). Ostensibly, private providers would be more biased than states, counties, or universities in their desire to show that JBCT is effective at restoration (to justify its continued use and make a profit). In this case, however, the state controlled all referrals to JBCT, and there was no incentive either to "rush" patients to restoration or to keep them longer than necessary. By drawing the anonymous data from spreadsheets and having the analysis performed by two "independent" psychologists with no direct involvement in operating any of the four program sites, there were no biases or expectations about the results obtained. Other than the natural desire of any provider to show overall good outcomes, the two independent authors had no vested interest in any of the four programs, no preconceived hypotheses to confirm, and no financial benefit to gain from conducting the research. The author who was directly responsible for the JBCT programs had no role in the data analysis but contributed to efforts to understand and clarify the results obtained.

Fidelity of the model. Measuring fidelity requires an evaluation of adherence to an "original protocol." As noted in the historical overview, however, there is no single protocol for jail-based restoration of competency programs. Even if JBCT is restricted to "full scale" jail-based units that house IST patients together, programs differ in terms of size/capacity, eligibility criteria, staffing, use of IM, program components, separation of evaluators and treaters, and other parameters. Given the absence of an empirical evidence base for a specifically defined JBCT prior to its implementation, this study, by attempting to control much of the above variation through the application of the same specified set of JBCT methods and protocols at four different "full scale" program sites, could be said to set out the "original protocol."

The most important implementation lesson about JBCT gained from this study is the need to control for system-level variation regarding the time preceding admission to JBCT treatment. The authors incorrectly presumed the sites were equivalent (except for one small site that could use IM and did not treat females) and were frankly surprised to find that the one program site serving out-of-county patients had significantly better restoration rates and shorter LOTR. Further examination revealed that an administrative procedural difference delayed the time at which out-of-county patients were admitted to the JBCT to begin restoration. This "pre-treatment time" spent in jail before admission to the JBCT unit appears to be a critical period during which individuals may detoxify from substances and, depending on the originating jail, receive some psychiatric medications and treatment to begin stabilizing their conditions, or even begin some degree of "spontaneous recovery." In order to make fair comparisons of jail- and hospital-based restoration, future research will need to strictly define and control for the influence of these pre-treatment "administrative" differences (i.e., by tracking the three dates of arrest, court-ordered referral, and admission), which may be the area of greatest difference among JBCT models across the country.
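Tracking the three dates recommended above is straightforward in practice. The sketch below is a hypothetical illustration (the field names and values are assumptions, not drawn from the study's database) of deriving the pre-treatment interval and length of treatment from the dates of arrest, court-ordered referral, and JBCT admission.

```python
# Hypothetical sketch: deriving the "pre-treatment time" intervals discussed
# above from the three tracked dates (arrest, court-ordered referral, admission).
from datetime import date

def treatment_intervals(arrest: date, referral: date,
                        admission: date, discharge: date) -> dict:
    """Return the intervals (in days) that implementation research would
    need to control for when comparing restoration programs."""
    return {
        "arrest_to_referral_days": (referral - arrest).days,
        "referral_to_admission_days": (admission - referral).days,  # waiting time
        "pre_treatment_days": (admission - arrest).days,
        "length_of_treatment_days": (discharge - admission).days,
    }

# Example with invented dates.
print(treatment_intervals(date(2020, 1, 3), date(2020, 2, 10),
                          date(2020, 3, 1), date(2020, 4, 15)))
```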
In conclusion, this study reflects the tremendous complexity of factors at play in conducting research in real-world forensic settings for both treatment outcomes and implementation outcomes. In their categorization of outcomes for implementation research, Proctor et al. (2011, p. 65) observed that some implementation studies "infer implementation success by measuring clinical outcomes at the client or patient level" (as in this JBCT study), "while other studies measure the actual targets of the implementation," such as quantifiable success in relieving system-wide census pressures (which has been the context driving the expanded use of the JBCT model in California). This research study needed to focus on treatment outcomes first because of the unmet need to establish an evidence base for this "new" intervention. This step is needed to justify the growing use of JBCT and, as cautioned by Felthous (2020), to ensure that diligent discussion and planning precede any premature and ill-advised application of JBCT to the restoration of insanity acquittees.

It has been argued that the greatest challenge to the implementation of evidence-based JBCT practice is the tremendous variation in design across programs and interventions. At the same time, there is a great deal of variation across hospital-based programs nationally on many of the same variables that critics express concern about in JBCT. One could argue that the whole field of competency restoration treatment lacks clear evidence-based interventions. To quote from the "attempted meta-analysis" by Pirelli and Zapf (2020, p. 134), "virtually no published data reflect specific intervention efforts that lead to competence restoration". They also observed that "competency restoration procedures were overwhelmingly nonspecific across studies and not reported in more than half of them". It is hoped that this study contributes to the quest for effective implementation of an evidence-based continuum of restoration options that includes JBCT.

ENDNOTES
1. One reviewer queried the value of additional "implementation outcome" measures such as cost-effectiveness and "ability to relieve system-wide census pressures." The former was twice measured by an independent California government agency, in 2012 and 2017, showing huge savings over traditional state hospital restoration (California Legislative Analyst's Office, 2012, 2017). The latter measure is more complex. Although implementation of JBCT has expanded dramatically as one systemic solution, the overall demand for IST services in California (i.e., the PARiHS context driving implementation) has itself increased dramatically, and so census pressure appears unrelieved. In fact, the benefits of facilitating better access to restoration services by using JBCT may be (positively) contributing to the rising demand for IST services.
2. Off-ramping is the practice of assessing IST individuals in the jail setting, who have been waiting for admission to restoration services, to determine if they still need restoration services or can be discharged or diverted as appropriate.
Parsonage-Turner syndrome in a patient with bilateral shoulder pain: A case report

Objective: Parsonage-Turner syndrome is a peripheral neuropathy characterized by acute onset shoulder pain, myalgia, and sensory disturbances. The present report discusses a rare case of Parsonage-Turner syndrome and highlights the importance of accurate history recording and thorough physical examination for the diagnosis of the disease in rural areas. Patient: A 28-year-old woman presented to our clinic with acute bilateral shoulder pain and difficulty moving her right arm. A diagnosis of Parsonage-Turner syndrome was suspected based on the progression of symptoms, severity of pain, and lack of musculoskeletal inflammation. The diagnosis was confirmed by neurological specialists, and the patient was treated with methylprednisolone, after which her symptoms gradually improved. Discussion: The differential diagnosis of shoulder pain is complicated by the wide variety of conditions sharing similar symptoms. Accurate history recording and thorough physical examination are required to differentiate among conditions involving the central nerves, peripheral nerves, and nerve plexuses. Conclusion: Although the symptoms of Parsonage-Turner syndrome vary based on disease progression and the location of impairment, proper diagnosis of acute shoulder pain without central neurological symptoms can be achieved in rural areas via thorough examination.

Introduction

Parsonage-Turner syndrome is a peripheral neuropathy characterized by acute onset shoulder pain, myalgia, and sensory disturbances. Also known as acute brachial neuropathy, Parsonage-Turner syndrome is often accompanied by impairments of the suprascapular and axillary nerves 1) . Men are more likely to be affected than women (male:female ratio range, 9:1 to 11.5:1), and initial symptoms are typically unilateral, although some evidence indicates that initial symptoms may arise bilaterally in rare cases 2,3) . The estimated prevalence of Parsonage-Turner syndrome is approximately 1.64 per 100,000. However, the differential diagnosis of shoulder pain is broad, indicating that these estimates may be lower than the true prevalence of the condition 4) . Diagnosis is further complicated by the heterogeneity of symptoms among patients, which vary according to the nerves injured and the speed at which the disease progresses 4) . Therefore, a comprehensive approach involving accurate history taking, physical examination, and specific tests (e.g., electromyography and brachial plexus magnetic resonance imaging [MRI]) is required to ensure proper diagnosis, while delay of diagnosis and treatment may result in lasting functional damage 5) . In the present report, we discuss the case of a young woman with bilateral shoulder pain who received a final diagnosis of Parsonage-Turner syndrome, focusing on the perspective required for diagnosis of the disease in rural areas.

Patient

A 28-year-old woman presented to our remote island clinic in Okinawa with acute bilateral shoulder pain and difficulty moving her right arm. The patient reported the pain as dull and continuous, without any history of preceding infection or trauma. She reported being unable to perform housework or care for her child because of pain. Examination revealed a temperature of 36.2°C, blood pressure of 123/60 mm Hg, pulse of 75 beats per minute, and respiratory rate of 14 breaths per minute. The patient was alert and fully oriented, with no apparent disturbances in consciousness.
On inspection, she had right deltoid muscular atrophy. A manual muscle test revealed weakness of right shoulder extension, flexion, external rotation, internal rotation, and abduction without muscle tenderness (Figure 1). Sensory loss was noted in the lateral portion of the right shoulder. The remainder of the examination was normal. Neck X-ray and blood tests were negative for osteoarthritis and any abnormality indicative of autoimmune disease (Table 1). The patient was referred to a hospital specializing in orthopedics and neurosurgery, where no abnormalities of the brain, neck, chest or shoulders were observed on MR images. A diagnosis of Parsonage-Turner syndrome was suspected based on the progression of symptoms, severity of pain, and lack of musculoskeletal inflammation. The patient was prescribed pregabalin for pain control and referred to a neurologist at another hospital. Cervical MRI (short T1 inversion recovery [STIR] images) revealed high-intensity areas and swelling in the right suprascapular and right axillary nerves, as well as areas of high intensity in the right supraspinatus, infraspinatus, and teres minor muscles (Figure 2). Similar changes were observed for the left suprascapular nerve, supraspinatus muscle, and infraspinatus muscle, suggesting bilateral brachial plexus neuritis (Figure 2). Electromyography revealed denervation of the right supraspinatus and left supraspinatus muscles. The patient was diagnosed with bilateral brachial plexus neuritis (Parsonage-Turner syndrome) and treated with methylprednisolone, after which her symptoms gradually improved.

Discussion

The onset of Parsonage-Turner syndrome is typically unilateral, and the clinical course may be either acute or chronic 1) . However, this patient presented with rare bilateral symptoms 2) , which can further complicate the differential diagnosis of the condition. Previous reports have indicated that inflammation associated with Parsonage-Turner syndrome can result in muscle weakness and atrophy 6,7) . Therefore, the patient's muscular atrophy may have been due to neurogenic damage, based on the inflammation of the right brachial plexus and muscular denervation observed on MRI and electromyography. Typically, neurogenic muscular atrophy develops distally, while myogenic atrophy develops proximally 8) . However, the patient in the present case experienced atrophy mainly around the shoulder, further complicating the diagnosis of her pain and weakness. In addition, the patient's symptoms can be associated with a wide variety of conditions. Accurate diagnosis of the patient's shoulder pain was made through visits to multiple specialists in orthopedics, neurosurgery, and neurology. Although shoulder pain is mainly caused by musculoskeletal diseases 9) , similar pain may also arise from dermatologic, cardiovascular, and diaphragmatic diseases 10) . In the present case, the patient's pain was caused by inflammation of the brachial plexus, which is comprised of peripheral nerves from the cervical and thoracic spinal cord. Neurogenic pain in the shoulders originates from the spinal cord, spinal nerve roots, and peripheral nerves. Although diseases of the spinal cord can usually be differentiated from one another via assessment of tendon reflexes 11) , diseases involving the spinal nerve roots and peripheral nerves require additional assessment of sensory loss and motor disturbances 12) . In the present case, the patient exhibited muscular weakness when performing all motions of the right shoulder.
Our findings indicated that the distribution of affected nerves involved several spinal cord roots, suggesting damage to the brachial plexus.

Conclusion

In the present report, we discussed the case of a young woman diagnosed with Parsonage-Turner syndrome after presenting with acute bilateral shoulder pain and right shoulder weakness. Our findings suggest that Parsonage-Turner syndrome should be included in the differential diagnosis of acute shoulder pain. Such diagnosis requires accurate history taking and thorough physical examination based on an understanding of the pathophysiology of various neurogenic diseases, especially in rural areas that may not have access to advanced imaging equipment.

Conflict of Interest: The authors declare that they have no competing interests.
Analysis of the Integrity of Prospective Teachers

Integrity is an essential character requirement for becoming a Civics teacher, and a candidate's integrity can be gauged through his or her work, which helps identify reliable prospective teachers. Every prospective Civics teacher for junior and senior high school must complete a final project while attending lectures. The final project must be the prospective teacher's own work and must pass a plagiarism assessment within a certain percentage (maximum 30%). Through a qualitative study of 79 prospective Civics teachers for junior and senior high schools in the Quantitative Research course, this study addresses the problem of plagiarism assessment as an indicator of prospective teachers' integrity. Researchers used qualitative analysis to analyze all the data obtained. The results showed that 54 prospective teachers (68.4%) produced work with more than 30% similarity, while 25 (31.6%) had less than 25% similarity. All prospective teachers responded positively to using the plagiarism assessment as a medium for detecting integrity. Through such tests, they are motivated to study more carefully and earnestly so as to become prospective Civics teachers who meet the integrity requirements.

INTRODUCTION

Integrity is a significant part of the existence of a teacher in school [1], [2], [3], and even more so for teachers who organize Civics learning activities. Civics subjects are, in principle, responsible for managing learning activities that impart the knowledge, attitudes, and behaviors that promote Pancasila values [4], [5]. With this principle, it is not surprising that people in Indonesia generally affirm that Civics teachers have a responsibility to maintain the morale of their students. Civics teachers must ensure that each student acquires knowledge, attitudes, and skills derived from the values of Pancasila [6]. Civics teachers are responsible for ensuring that every student is committed to the values of Pancasila as the nation's philosophy, across the cognitive, affective, and psychomotor domains.

To ensure that Civics teachers can carry out their roles appropriately in schools, they are accustomed, from the time they attend lectures at university, to strengthening Pancasila values, norms, and morals. Prospective Civics teachers who will teach Civics in junior and senior high schools must habituate themselves to good behavior based on Pancasila values [1], [7]. Here, integrity as a prospective teacher occupies an important role: prospective Civics teachers must demonstrate integrity from the moment they begin attending lectures.

Honesty, independence, creativity, and responsibility are parts of the integrity that should appear in the daily lives of prospective Civics teachers at university [8]. Every lecture activity, assignment, and exam contains these values, which are indicators of integrity, and these learning activities require prospective Civics teachers to complete them responsibly, independently, honestly, and creatively.
Each course implementation demands that students, as prospective Civics teachers, demonstrate values that reflect their integrity [9]. Based on the author's observations of several courses in the Civics Study Program, every student who attends lectures engages in three mandatory activities: lectures, assignments, and exams [10]. Lectures are face-to-face activities between lecturers and students to discuss lecture material. An assignment is an effort to deepen the lecture material through student activities in completing specific tasks from the lecturer [11]. An examination is a vehicle for determining the extent of student mastery of all lecture concepts. All of these activities require integrity [12]. Completing assignments is a medium for students to demonstrate their integrity while attending a lecture; through these tasks, students carry out all activities correctly according to the existing provisions.

Given the important role of coursework in tertiary institutions, the main problem is whether students complete coursework correctly. That is, have students independently completed lecture assignments according to their own thinking? Did the student fulfill the integrity requirements by completing the task without copying and pasting? To examine this, the researcher conducted a plagiarism assessment on the course's final project. Why focus on the final project? When conducting lectures, lecturers use case studies and project-based learning as the primary learning approaches, and the final project design incorporates both. The final assignment carries an assessment weight of 50% of the overall course grade.

Based on these problems, the purpose of this study was to describe the results of the plagiarism assessment on the final assignment of prospective Civics teachers. Based on the trial results, the researcher also describes student responses to the application of the plagiarism assessment as a requirement for determining the integrity of prospective teachers, as well as essential follow-up actions to fulfill the integrity of future Civics teachers.

THEORETICAL FRAMEWORK

Unfortunately, not all students are currently able to complete lecture assignments with high integrity [13], [14]. Several course lecturers admitted that some students are careless in completing tasks, often resorting to copy-and-paste actions [15], [16]. This is due to how easily one can gain free access to sources of information: anyone can find sources and references freely through the internet [17]. This convenience can be a positive factor in increasing student creativity in completing lecture assignments [18]. However, the situation is different when students use this easy access to complete projects instantly, without proper thinking. As a result, the task is completed, but the student gains nothing from completing it.
Completing coursework without using one's own thinking is a form of integrity violation [9], [19]. If students do this continuously, it will weaken their capacity to understand the content of the lecture material [3]. Students' inadequate mastery of a subject's concepts ultimately produces Civics teachers who are not equipped to conduct learning in junior and senior high schools [20]. Course supervisors need to take serious action to resolve these problems [21]. Lecturers must seek academic ways to prevent student behavior that violates integrity, positioning students as engaged subjects in class through educational activities grounded in integrity requirements.

For this reason, one academic measure to prevent violations of integrity is a plagiarism assessment of the final project. Used in this way, the plagiarism assessment is a preventive measure against student actions that violate integrity. If it succeeds, efforts to produce prospective Civics teachers with integrity can succeed as well.

METHOD

In this article, the researcher uses a case study type of qualitative research. The research took place in the Quantitative Research course, attended by six classes from two universities: five classes from Universitas Mataram NTB and one class from Universitas Pendidikan Ganesha Bali, with 79 students in total. During the course, the course supervisor implemented a course assignment system with a final project. The final project is a manifestation of learning that uses a case study approach and project-based learning, and it carries a weight of 50%, which strongly influences students' graduation from the course. This weight prompted the course lecturers to require a plagiarism assessment for students' final assignments; the plagiarism assessment indicates integrity for students who meet the requirements. All students became research subjects and completed the final project [22], [23].

The researcher used the Turnitin test to determine the similarity percentage in the students' final projects [24] and set the maximum similarity threshold for the final task at 30%. To obtain additional data, namely student responses and follow-up actions after the results of the similarity test were known, the researcher sent a questionnaire instrument via online Google Forms [25]. To ensure the accuracy of the questionnaire data, the researcher conducted online interviews via Zoom meetings. Finally, the researcher applied qualitative analysis to all the data obtained; the stages of the qualitative analysis include data collection, data presentation, and conclusion drawing [26], [27].
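As a concrete illustration of the screening rule just described, the short sketch below applies the 30% threshold to a list of similarity percentages and tallies 10-point bins of the kind later summarised in Figure 1. Turnitin is a proprietary service; the sketch assumes the similarity scores have already been exported, and all names and values are illustrative only.

```python
# Illustrative sketch: applying the 30% similarity threshold to exported
# Turnitin similarity percentages and tallying 10-point bins.
from collections import Counter

MAX_SIMILARITY = 30  # study program's tolerance limit for final projects

def screen_similarity(scores):
    """scores: similarity percentages (0-100), one per final project."""
    meets = [s for s in scores if s <= MAX_SIMILARITY]
    exceeds = [s for s in scores if s > MAX_SIMILARITY]
    # bin 0 = 0-10%, bin 1 = 11-20%, bin 3 = 31-40%, and so on.
    bins = Counter(max(0, (s - 1) // 10) for s in scores)
    return meets, exceeds, bins

meets, exceeds, bins = screen_similarity([8, 24, 29, 33, 37, 41, 55])
print(f"meets requirement: {len(meets)}, exceeds threshold: {len(exceeds)}")
for b in sorted(bins):
    lo = b * 10 + 1 if b else 0
    print(f"{lo}-{(b + 1) * 10}%: {bins[b]} project(s)")
```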
RESULTS AND DISCUSSION

Based on all the data findings in this descriptive study, the researcher describes the results for the existing problems. This study has two main issues: the similarity test results and student responses to those results. The similarity test is an indication of plagiarism assessment. All data and analysis results are as follows.

Plagiarism assessment in a course

The lecturer of the Quantitative Research course conducted a similarity test on the final project. The results of the similarity test are an indication of the students' integrity in their participation in the class. Before implementing the similarity test, the lecturer distributed a list of questions to students to determine their knowledge of the use of plagiarism assessment. The distributed questions covered six indicators: (1) sources of knowledge about plagiarism assessments, (2) experience with plagiarism assessments, (3) responses to plagiarism assessments, (4) benefits of using plagiarism assessments, (5) follow-up after plagiarism assessments, and (6) recommendations after plagiarism assessments.

Table 1: Students' responses to the use of plagiarism assessment in the final project.
- All coursework finals use a plagiarism assessment: 25 (32%)
- Only the thesis uses a plagiarism assessment: 79 (100%)
- The study program does not need to use a plagiarism assessment: 4

Table 1 above shows that all respondents (100%) claimed to have benefited from the plagiarism assessment. Through this activity, they feel confident about, and independent in, the final task they have completed. This indicates that self-confidence and independence are essential factors that encourage a person to believe in his or her abilities, so that self-integrity can grow and develop optimally. This is in line with the respondents' acknowledgment that the plagiarism assessment has a positive value for students. Through a plagiarism assessment, students find indications that strengthen positive values for their work on the final project of a course. In addition, most students (87%) were encouraged to independently complete the revision of the final project based on the results of the plagiarism assessment.

Another interesting part of the question distribution data is that students use various ways to obtain information about the importance of plagiarism assessment: 29% of students received such information from books, 39% from YouTube, and 32% from the internet. With the increasing availability of and need for information technology, students have proven able to use multiple sources, rather than a single source, to obtain information on the importance of plagiarism assessment.

The distribution of questions also revealed students' expectations for the use of plagiarism assessment. Thirty-two per cent of students expect the plagiarism assessment to apply to all courses, with all course lecturers using plagiarism assessments to gauge student integrity through completion of the final project. All students also suggested that the plagiarism assessment apply to thesis evaluation. However, 4% of students do not agree with the use of plagiarism assessment, either for coursework or for the thesis, on the grounds that they do not trust themselves to complete a course assignment independently.

The results of the plagiarism assessment in the final project course

The Civics Study Program has set the maximum similarity level for a student's final project at 30%.
The absolute tolerance limit for students' final assignments is 30%; a final project whose similarity percentage exceeds this provision does not meet the requirements. Researchers conducted a similarity test on the final project of the Quantitative Research course using Turnitin software. The results in Figure 1 show that, at the least, two students (2.3%) had a similarity level of 0-10%, and, at the most, 30 students (34.9%) had a similarity level of 31-40%. Overall, 32 students (41%) met the maximum-limit requirement of a final project free of plagiarism, while the remaining 47 (59%) were not free from plagiarism attempts. These data certainly warrant serious attention from course lecturers and from students as prospective Civics teachers, who should show high integrity by freeing themselves from plagiarism.

Figure 1: Similarity test results using Turnitin in the final project of the Quantitative Research course.

Why do students plagiarise when completing a final course project?

Candidates for an undergraduate degree in the Civics Study Program have a high responsibility to maintain integrity. This is in line with the aim of the study program: to produce Civics graduates who can uphold self-integrity based on the values of Pancasila as the Indonesian philosophy of life [28]. All courses play an essential role as vehicles for strengthening the self-integrity of prospective teachers, and the Quantitative Research course is one course with a deep commitment to serving as a medium for developing integrity. All students must demonstrate their integrity through the completion of the final project [29].

Independently completing a course's final assignment indicates a student's integrity in that course [8]. Students who complete the final task on their own will obtain a small similarity percentage in the plagiarism assessment. Conversely, if a student completes the final project by copy-pasting someone else's work, the plagiarism assessment will show a large similarity percentage. As the data in Figure 1 show, most students (62.8%) had a similarity level exceeding the maximum prerequisite of 30%, indicating that these students plagiarised in completing the final project. The researchers explored this finding further by interviewing these students. The results show that 30% of students took these actions under compulsion, owing to time constraints in completing the final project, which prompted them to take the shortcut of copying and pasting other people's work. Another 60% of students admitted limitations in literacy: they do not yet have an active habit of reading references and therefore have difficulty developing narratives when completing their final project. The remaining 10% of students had no specific reason for plagiarising; they admitted only that they did so after seeing others plagiarise earlier, and because lecturers ignored those actions, the students repeated the same behavior on other final assignments.

In the academic field, plagiarism is not permitted [30], [31]. This behavior can contaminate the self-integrity of prospective Civics teachers [32], [27], and students realize that plagiarism is wrong. Based on the results of this plagiarism assessment, students admitted feeling pressure to do better in the future; by working independently, they will try to complete the final project in the next course according to their own abilities.

CONCLUSION

Integrity for prospective Civics teachers is a must, implying that they must be able to demonstrate actions that indicate integrity. In this research, their integrity was shown in the final project results against the maximum similarity requirement of 30%. The high number of respondents with a similarity above 30% indicates that students still need to work independently, so that whenever they complete a final project they avoid excessive citation of references. Students need to practice citing references and formulating appropriate sentences and paragraphs continuously so that no indication of plagiarism arises.

Students, as prospective Civics teachers, realize that their habit of fully quoting other people's work can be regarded as plagiarism. This habit persists because of weak reading habits, limited time to complete assignments, and other non-specific reasons. Based on these findings, respondents submitted progressive proposals as recommendations from the research results. They propose at least two important things: if possible, all supervisory lecturers should use a plagiarism assessment to determine the level of plagiarism in a final project; if this is not possible, all respondents suggest that the Civics Study Program use a plagiarism assessment to test the plagiarism level of a thesis. If these two things can go hand in hand, then at least efforts to improve the integrity of prospective Civics teachers can be realized.
Developmental dyslexia in selected South African schools: Parent perspectives on management services such as multisensory teaching methods and accommodations in South Africa

Parents of CWD therefore did not receive enough support in the management of their child's dyslexia. Future research should be conducted regarding South African teachers' knowledge and perspectives regarding dyslexia and its management. It was recommended that professionals trained in the management of dyslexia educate and advocate for CWD and their families.

Introduction

Dyslexia is a specific learning disorder (SLD) that globally puts approximately 700 million people at risk of life-long difficulties such as illiteracy and social exclusion (Cramer 2014:2). It is therefore imperative that dyslexia is managed effectively to allow learners who are impacted by this condition to achieve their full potential. The efficacy of any treatment approach ultimately rests on the support of knowledgeable and informed parents (Acar, Chen & Xie 2021:2). Global research conducted at schools in England, the USA, Northern Ireland and India reports that the roles of parents within the school environment are uncertain and that their views are not considered (Banga & Ghosh 2017:959; Beck et al. 2017:152; Cook 2017:51; Hegstad 2017:8; Ross 2019:141, 153). The parents in these studies experience a variety of emotions such as anger, frustration and guilt, much like those in South Africa (Du Plessis 2012:36-37). However, South Africa has a dearth of novel research into the management of children with dyslexia (CWD), which is exacerbated by the apparent lack of parental support. Moreover, given the unique educational, linguistic and socio-economic context locally, more specific investigations need to be launched into the perspectives and needs of parents and caregivers of CWD in South Africa.

Dyslexia, including its subtypes, is associated with morphological and functional abnormalities in the brain areas associated with reading (Ozernov-Palchik & Gaab 2016:156). The several classified subtypes of dyslexia include dysnemkinesia (poor motor memory development for written symbols), dysphonesia (an inability to achieve grapheme-phoneme correspondence) and dyseidesia, which manifests as a poor ability to recognise whole words as visual pictures and link them with auditory sounds (Coltheart & Friedmann 2018:8-26). These characteristics are identified in a person with dyslexia through the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5), in which dyslexia is now classified as an SLD (American Psychiatric Association [APA] 2013). Regardless of these demarcated specifics, controversy still exists regarding the interpretation and application of these diagnostic criteria to the most effective management protocols for dyslexia, as seen in the numerous studies conducted to understand the cause of dyslexia, its epidemiology and the effective management of learning disabilities (Adubasim & Nganji 2017:2).

International perspectives of dyslexia management

The effect of an all-inclusive diagnostic umbrella term such as SLD may complicate the management of the condition, as not all SLDs would benefit from the very distinct treatment options for dyslexia alone (Cainelli & Bisiacchi 2019:559). Research conducted in Brazil, the United States of America (USA), India, China, the United Kingdom (UK) and Ireland specifically indicates the plethora of views on treatment.
In Brazil, transcranial stimulation is used to improve the reading capability of individuals with dyslexia (Rios et al. 2018:6). In the USA, individuals are generally able to recognise difficulties with literacy, but misconceptions about a specific diagnosis such as dyslexia persist (Castillo & Gilger 2018:212). According to research studies conducted in India, language processing skills are the most important part of management (Rao et al. 2017:165), yet little is mentioned about the different types of decoding and encoding. In Gharuan, a village situated in India, management includes educational frameworks, an individualised education plan (IEP) and early management (Shaywitz, Morris & Shaywitz 2008:1417-1418). Management of dyslexia in China is delayed compared with other languages such as English because of the complex Chinese writing systems (Cai et al. 2020:204). In the United Kingdom, the most effective management for dyslexia comprises phonics-based and multisensory approaches including auditory, visual and kinaesthetic aspects (Tidy & Huins 2015:3). In Ireland, it is widely suggested that schools receive further guidance and support in using resourcing models to meet the needs of CWD, as schools recently had to change the educational teaching support provided for CWD because the previous support did not meet the learners' needs (Tiernan & Casserly 2018:51-52). Finally, a recent Finnish study proposed a model of association learning through which basic reading and spelling can be instructed. This game-based instruction (GraphoLearn technology) has been designed for application in various alphabetic writing systems and can therefore benefit both opaque and transparent orthographies (Lyytinen et al. 2021). As such, this instructional approach may provide some answers for South Africa, where the language of learning and teaching (LoLT) is mainly the opaque English, yet many of the official languages of the country (e.g. Afrikaans and Sepedi) are transparent. However, irrespective of the common occurrence and nature of dyslexia, both internationally and on African soil, the disorder and its accompanying challenges of low resources and socio-economic factors have not received adequate attention in most developing countries (Makgatho et al. 2022).

Schools require support not only in using resource models but also in providing adequate emotional support to their learners (Rao et al. 2017:163). Dyslexia should preferably be addressed via a multiprofessional approach, including speech-language therapists (SLTs), educational psychologists and clinical psychologists, who may aid by providing emotional support to CWD and their caregivers (Cainelli & Bisiacchi 2019:559). According to Rao et al. (2017:163), a lack of emotional support is considered a risk factor for increased internalising, anxious and depressive behaviour amongst CWD.

Teachers and scholastic environments are critical elements in the identification and management of CWD, as it is their responsibility to teach learners how to read (Lindstrom 2019:198). Multiple research studies confirm that teachers have a gap in their knowledge base regarding effective reading instruction, including limited knowledge of phonemic awareness, phonics instruction and morphemes (Hategan et al. 2018:117; McMahan, Oslund & Odegard 2019:22). This dearth of knowledge may be exacerbated where the learners have dyslexia.
Evidence-based intervention approaches are just as important as teacher knowledge because they provide an adequate framework for teaching reading and writing (International Dyslexia Association 2019:1). Children with dyslexia have a phonological deficit, and specific emphasis should thus be placed on phonological awareness skills in the management approach (International Dyslexia Association 2019:3). Effective techniques proposed by the International Dyslexia Association are collectively known as structured literacy (SL) instruction (Fallon & Katz 2020:336-337). Structured literacy is a general term for evidence-based intervention approaches which utilise all components of spoken language to teach reading, writing and spelling (International Dyslexia Association 2019:6). The language areas targeted in this approach may include phonology, orthography, syllables, morphology, syntax and semantics (Fallon & Katz 2020:337). A generalised approach to dyslexia management may be effective for many learners, but it is not necessarily sufficient for all, as the core of individual problems may not be clear, and the development of each CWD must be facilitated in a unique manner (Cainelli & Bisiacchi 2019:559; Rao et al. 2017:163).

In summary, there is global controversy regarding effective management approaches to an SLD such as dyslexia, pertaining to the knowledge of professionals and teachers of CWD as well as to the research describing evidence-based intervention approaches. It is therefore reasonable to assume that differing and confusing perspectives may also be prevalent amongst parents or caregivers of CWD.

Parental perspectives of the management of dyslexia in the international context

Research has been conducted at schools in England, the USA and Northern Ireland to determine the experiences of parents of CWD within the school environment. The findings of these studies show that parents' roles are uncertain, their views are not considered, they have difficulty accessing professionals and resources and they do not fully understand their rights (Beck et al. 2017:152; Cook 2017:52; Ross 2019:141, 153). Some studies reported that parents had difficulty accepting the diagnosis of dyslexia in their child and were hesitant to agree to adaptations in the academic curriculum (Fernández-Alcántara et al. 2017:538). Parents not only experience difficulty in the school environment but also in their more immediate environment, especially in the form of increased levels of stress. Ongoing stress can affect the parent-child relationship in several ways, such as insecure attachment (Carotenuto et al. 2017:5), low self-esteem (Delany 2017:100) and low family cohesion (Carotenuto et al. 2017:5). Many parents, especially mothers of CWD, experience more stress and depressive behaviours than other parents (Carotenuto et al. 2017:5; Multhauf, Buschmann & Soellner 2016:1204). Educating parents of CWD may minimise parental stress associated with the acceptance and management of dyslexia.

Overall, it is evident that parents of CWD experience and display a variety of negative emotions and behaviours, including stress (Carotenuto et al. 2017:5), depression (Multhauf et al. 2016:1204), emotional strain (Banga & Ghosh 2017:961), isolation (Cook 2017:54), disempowerment (Ross 2019:153) and frustration (Cook 2017:57). Common behaviours reported in parents of CWD are internalising and externalising behaviours such as aggression, disagreement and indecision (Multhauf et al.
2016:1204; Watt 2020:123). It is therefore evident that parents of CWD require a greater extent of support in all spheres of life. South Africa faces additional challenges in the management of dyslexia because of the multilingual environment and lack of resources. It is consequently reasonable to assume that this context may also add to the psychosocial and general impact on the perspectives of local parents of CWD, but further research is needed.

Dyslexia management in the South African context

Learners in South Africa have access to schooling, but they experience inconsistencies in the quality of education received (Mkhwanazi 2018:79). Studies conducted locally demonstrated that many school-aged learners display poor literacy skills (Fourie, Sedibe & Muller 2018:85; Wilsenach 2015:7). Moreover, several South African CWD remain unidentified because of limited resources and screening and assessment tools (Clark, Naidoo & Lilenstein 2019:2, 9). In addition, learners are often required to perform academically in reading and writing while receiving education in a language other than their first language (Moonsamy & Kathard 2015:69-70). The Department of Basic Education in South Africa makes provision for different accommodations for CWD, such as additional reading time, allowing recordings in the classroom and using audiobooks (IEB 2017). However, the liable individuals, such as teachers and government officials, do not always provide the necessary granted accommodations (Dreyer 2015:96). Schools in South Africa have trouble with the early identification of learning barriers (Mkhwanazi 2018:76). The Education White Paper No. 6 (Department of Education 2001) stated that an inclusive education environment should be provided for learners with special needs, implying that these learners should, where possible, attend mainstream schools (Walton et al. 2009:105). Unfortunately, a recent study conducted in the North-West province reported that public school environments still do not consider the needs of learners with disabilities such as dyslexia, despite the Education White Paper policy (Leseyane et al. 2018:6). Because of the lack of resources and inadequate management in South Africa, parents of CWD may have negative perspectives of the management of dyslexia.

Parent perspectives on the management of dyslexia in a South African context

For the many reasons elucidated in the previous sections, parent perspectives are not always considered, or even investigated, when it comes to CWD internationally. In one of the few reported local studies, Du Plessis (2012:36-37) established some years ago that parents of learners with learning disabilities in the Western Cape experience common emotions such as frustration, confusion, anger, guilt and helplessness. Likewise, Dreyer (2015:96) later proposed that these emotions may be heightened if parents do not feel that they receive the necessary support from the community and education system. Parents of CWD have a variety of experiences, but all parents experience an elevated level of stress (Bull 2003:341-347; Dreyer 2015:96). A more recent study conducted in KwaZulu-Natal found that parents have insufficient knowledge of dyslexia and experience a lack of resources within the community (Mkhwanazi 2018:49, 59-60). The lack of resources includes insufficient support in mainstream schools, so that attendance at special schools is required (Mkhwanazi 2018:59-60).
It is evident from the few research studies available in the South African context that parents often have negative emotional experiences relating to the management of their CWD in South African schools. Parents play an important role in the management of dyslexia in their CWD's lives, as they spend the most time with their children in everyday contexts. Furthermore, the skills taught in the clinical setting need to be carried out at home to improve the generalisation of those skills to the child's daily living activities. South Africa faces many other challenges because of the lack of resources as well as the diversity in socio-economic status, languages and culture, and thus more specific research is required within this field to enhance the efficacy of dyslexia management in South African schools. The present study aimed to understand these specific factors by adding to the existing corpus of knowledge, within the entire South African context, on parental perspectives and experiences relating to the management of their CWD in schools locally.

Aims and objectives

The aim of this study was to determine parental perspectives of the management of their CWD in South African schools. The subsequent objectives were to explore the emotional experiences of South African parents with a CWD regarding the child's SLD and the social stigma surrounding the disorder. We also explored the resources of schools in South Africa to support a CWD and aimed to determine South African parents' knowledge and beliefs relating to their CWD, as well as their level of knowledge regarding the management of dyslexia in schools. Additionally, we set out to determine South African parents' experiences and difficulties regarding the management of dyslexia in South African schools. Finally, we wanted to determine South African parents' understanding of their roles and their involvement in their CWD's education, as well as their experiences regarding the teachers' and school staff's ability to manage dyslexia in the South African school environment.

Study design

The study design was an embedded design, as it includes both qualitative and quantitative data (Leedy & Ormrod 2016:313). The quantitative approach was the central approach, whereas the qualitative approach served as the complementary approach. The qualitative aspect of this study design aimed to obtain detailed data to understand parental perspectives of the management of their CWD in South African schools. The quantitative aspect was useful in studying this phenomenon by determining the consensus of parental perspectives of the management of their CWD in South African schools. The quantitative research followed a descriptive design, which included survey research to obtain data about self-reported beliefs or opinions (Neuman 2014:192). All the data were collected from the participants at one point in time, making this a cross-sectional study.

Setting

Data collection for this study was conducted in the form of a research survey, which was sent to South African parents of a CWD via an online questionnaire.

Study population and sampling strategy

The study population included parents or primary caregivers of school-aged CWD (6-18 years of age) who attended schools in South Africa during the academic year of 2021. The child must have been formally diagnosed with dyslexia. A total of 104 participants commenced the survey, of whom 93 completed it.
The 11 incomplete responses were omitted from the data analysis; only the 93 complete responses were analysed, and the results of these are included in this article. Stratified sampling was implemented, in which the study population was divided into two strata. The strata consisted of parents of CWD whose children had been diagnosed within the last 4 years (Group A) and parents of CWD diagnosed longer than 4 years ago (Group B). Dividing the strata allowed the researchers to learn about the different experiences parents may have, depending on how long ago their CWD was diagnosed. Group A comprised 64% of the participants who completed the survey and Group B comprised 35%. See Table 1 for a description of the demographic information of the participants.

Data collection

The data were collected by means of an online survey. The survey was created and distributed through the online survey software Qualtrics. Access to Qualtrics was granted through the Department of Speech-Language Pathology and Audiology, University of Pretoria. The survey was adapted from several existing questionnaires and interview schedules, namely the Teacher Awareness of Dyslexia questionnaire (Thompson et al. 2015), the Parent Perspectives of the Effects of Public, Private and Home School Learning Environments on Students with Dyslexia questionnaire (Haws 2017), the Experiences of Parents with Children Who Have Been Diagnosed with Dyslexia in Secondary Schools of Msunduzi Municipality, Pietermaritzburg interview guide (Mkhwanazi 2018) and the Experiences of Parents of Children with Reading Difficulties interview schedule (Du Plessis 2012). The survey was adapted to be better suited to studying the phenomenon of parental perspectives of the management of CWD in South African schools. The questions were designed to obtain both nominal and ordinal types of data. All the identified potential participants for this study were contacted via the social media platform Facebook. A link to the survey was sent to the participants by posting the link on the selected Facebook groups (Red Apple Dyslexia Association, Die Rooi Appel and Stark-Griffin Dyslexia Academy). Content validity was considered to determine how well the instrument represents the components of the variable being measured and therefore contributed to the validity of the study (Brink, Van der Walt & Van Rensburg 2018:160). A characteristic of reliability used in this study was stability, which was ensured by asking the same question twice, with each question phrased differently, to improve the reliability of the results.

Data analysis

The target population was divided into two strata, namely Group A and Group B. These two strata were compared during the data analysis process together with the analysis of all the data. Both descriptive statistics and inferential statistics were used. In descriptive statistics, frequency distributions (nominal data), measures of central tendency (mean), simple descriptive statistics (percentages) and measures of variability (standard deviation) were used to draw conclusions about the data. In inferential statistics, the chi-square test, t-test and p-value were used. The chi-square test is commonly used to test nominal and ordinal data, which was appropriate for this research study (Garth 2008:62). The t-test was used to compare the means of Group A and Group B. The p-value was used to identify the statistical significance of an observed difference.
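As a minimal illustration of the group comparisons described here, the sketch below re-creates a chi-square test on a contingency table and an independent-samples t-test in Python with scipy. The counts and scores are invented for illustration and are not the study's data; the study itself performed these tests with Microsoft Excel functions, as noted below.

```python
# Minimal sketch of the Group A vs. Group B comparisons described above.
# All counts and scores below are invented for illustration.
from scipy.stats import chi2_contingency, ttest_ind

# Contingency table: rows = Group A / Group B, columns = agree / unsure / disagree.
table = [[34, 14, 12],   # Group A (hypothetical counts)
         [17,  3, 13]]   # Group B (hypothetical counts)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")

# Independent-samples t-test comparing hypothetical knowledge scores.
group_a = [14, 15, 12, 16, 13, 15, 14]
group_b = [15, 13, 16, 14, 15, 16, 13]
t_stat, p_val = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")
```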
The chi-square tests and t-tests were performed, and p-values computed, using a variety of functions in Microsoft Excel. The data analysis process was conducted with the assistance of a statistician. Ethical considerations Ethical clearance was granted by the departmental research and ethics committee at the University of Pretoria. The participants were neither put at risk nor harmed during this research study. An informed consent form, inviting participants to take part, detailing what was expected of them and ensuring that they were aware that their participation was strictly voluntary, was attached to the survey. The informed consent form had to be read and agreed to before the survey could be commenced. The participants were allowed to opt out of completing the online survey at any time without any negative consequences. The researchers had to consider the privacy settings on social media to determine whether the information was public or private (Social Media Research Group 2016:16-18). The researchers protected the privacy and confidentiality of all the participants and stored all the data collected in a safe place. Knowledge about dyslexia The knowledge of parents with CWD was assessed using true/false responses to statements. The two strata were compared with each other, namely the group whose CWD was diagnosed within the last 4 years (Group A) and the group whose CWD was diagnosed more than 4 years ago (Group B). The participants were divided into these categories because this provided information on the differing knowledge base, perspectives and experiences of parents, depending on how long ago their child was diagnosed. As indicated in Table 2, a difference was noted between the two groups' knowledge in statements 1, 3, 6 and 8. In statements 1, 3 and 8, parents in Group B displayed greater knowledge of the specific statement than those in Group A. In statement 6, Group A displayed better knowledge of the statement that CWD have trouble remembering letter symbols for sounds and forming memories for words. Both groups had poor knowledge of statements 2 to 5 and 7. The 11 statements not mentioned in Table 2 reflected appropriate parental knowledge, and no differences between the two groups were noted. There was only a 1% difference between the two groups' total scores across the 19 statements. These figures indicated that there was no meaningful difference between the knowledge of parents whose CWD was diagnosed more recently and those whose CWD was diagnosed longer ago. Parental experiences of the management of dyslexia in South African schools Warning signs prior to and influence of formal diagnosis It was evident that CWD in South African schools present with warning signs before being formally diagnosed with dyslexia. Warning signs were more clearly delineated by the parents in Group A, perhaps because of more recent memories. Nonetheless, most of the parents with CWD (62%) stated that the teacher did not recognise the warning signs before their CWD was diagnosed. Yet when asked whether the diagnosis of dyslexia helped their teacher to understand the disorder and to better support their CWD in the classroom, 57% of the parents in Group A and 51% of parents in Group B agreed or strongly agreed, a statistically significant difference between the two groups (p = 0.02).
Teacher knowledge and attitudes of dyslexia and effort in managing dyslexia in the classroom Parents were asked whether their teacher had a positive or negative view of their CWD. Eighty-three per cent of parents felt that their CWD's teacher had a positive view of their child; only 17% responded negatively. Regarding whether teacher knowledge of dyslexia was adequate, a mere 6% of parents in Group A and 3% in Group B strongly agreed that there was no need for concern in this regard. When considering both groups, an average of 71% of parents who had a CWD in a South African school reported at least one person in the school who understood their CWD's special needs. Conversely, 19% of the parents in Group A and a near-similar 15% in Group B disagreed with this statement. Most parents with CWD (47%) agreed that their teacher put in extra effort for their CWD, could support their child's learning, fostered motivation and hope for their CWD and treated them fairly. In terms of the South African teacher being able to support the child's learning, a highly significant difference between Groups A and B was observed (p = 0.001). Unfortunately, 25% of parents disagreed that their teacher fostered motivation and hope for their CWD, and 21% disagreed that their teacher treated their CWD fairly. In both Group A (49%) and Group B (58%), most parents (54% combined) felt that their child's school did not have the resources to provide intensive treatment for dyslexia, including multisensory instruction and accommodations. A fair proportion (24%) of parents in Group A and 9% of parents in Group B were unsure whether their CWD's school had the resources to provide intensive treatment for dyslexia. There was a significant difference (p < 0.005) between Groups A and B, especially in the number of parents who answered 'unsure' in this regard. Consistent with this, 67% of children of parents in Group A and 58% of children of parents in Group B did not receive multisensory teaching methods in their South African schools. Accommodating children with dyslexia in the classroom Forty-one per cent of the parents in Group A felt that their child's teacher had adequate knowledge and skills to provide appropriate accommodations for their child at school. With a p-value of 0.02, there was moderate evidence of a difference between Groups A and B, suggesting a small improvement over time, as 42% of the parents in Group B felt the same. Many parents in both groups (54%) felt that the South African school their child attended provided sufficient accommodations for their child. However, a concerning 32% of the parents in Group A did not feel that sufficient accommodations were provided. Thirty-four per cent of CWD received extra time as an accommodation provided by the school, which was the most common accommodation. This was followed by 30% having a reader, 27% who received writing and spelling help, 4% who were unsure about the accommodations, 3% who were seated in front of the class and only 2% who had one-on-one help. Parental involvement in the management of dyslexia Most parents (81%) perceived themselves to be involved in the management of their child's dyslexia in the school environment.
According to the survey responses, most parents also reported interacting with the school or teacher(s) (96%) and frequently meeting with their child's teacher (64%) to assist their children in the academic environment. Fewer than half of the parents (47%) reported having had only positive experiences when interacting with the school or teacher(s), while 53% of parents were unsure or disagreed. The majority of parents (97%) felt that they were aware of the role they played in their children's education; however, of these, 19% reported that their opinions and views regarding their child's education were not considered in the academic environment. Furthermore, 23% of parents were unsure whether their opinions and views were considered. Therefore, of the 97% of parents who were aware of the role they played in their children's education, only 57% reported that their opinions and views were considered in their child's school context. Most parents (77%) did, however, report that they were well informed on what dyslexia is and on effective management techniques in the school context. Seventy-one per cent of parents reported that they had a good relationship with their child's teacher, but only 47% reported that their child's teacher(s) kept them informed about their child's progress in the classroom. Furthermore, most parents (64%) reported that they experienced difficulty accessing professionals and resources for their CWD. The data obtained were then divided into Groups A and B. The mean and standard deviation for both groups are depicted separately in Table 3 (responses were scored on a five-point scale ranging from strongly disagree [1] to strongly agree [5]). Additionally, the p-value comparing the responses from the two groups is also included in Table 3. A p-value of ≤ 0.05 indicates that a statistically significant difference exists between the responses of the two groups, and a p-value of > 0.05 indicates that no statistical difference was found between the responses of the two groups, that is, their responses were similar. Emotional perceptions Most parents (72%) reported that they wished they had had their child evaluated for dyslexia at an earlier stage. Most parents (60%) also reported that their emotional well-being improved after their child was diagnosed with dyslexia. Most parents felt that they were able to support their CWD academically (67%); in contrast, however, a vast majority reported having difficulties helping their CWD with homework (73%). A minority of parents (33%) felt that they received sufficient support from their child's school and teacher(s) regarding their child's education and the management of their child's dyslexia, while 67% indicated that they were unsure or disagreed with that statement. Figure 3 depicts the perceived ability of parents in both groups to cope with having a CWD. Emotional perceptions regarding their child's specific learning disorder It is clear from the results obtained that SLD, specifically dyslexia, is a multifaceted disorder that affects CWD and their parents in a variety of ways. One important aspect to consider was their emotional perceptions and experiences regarding dyslexia and the management thereof. The results suggested that more parents from Group A wished that they had had their children evaluated for dyslexia at an earlier stage compared with the parents from Group B.
Pertaining to their emotional well-being, most parents from both groups agreed that their emotional well-being improved after their child was diagnosed with dyslexia. This could possibly be because knowing why their children were struggling enabled them to provide appropriate intervention and support to help them succeed in the future. It was evident that more parents in Group B perceived themselves to be able to support their child academically. In contrast, however, Group B also reported having more difficulty helping their CWD with homework compared with Group A. It is evident that parents of CWD require support in all spheres of life, especially in the academic context. Results from this study indicated that most parents from both groups felt that they did not receive sufficient support from their child's school and teacher(s) regarding their child's education and the management of their child's dyslexia. Furthermore, more parents from Group A reported not receiving sufficient support compared with Group B. This finding may suggest that the longer a CWD has been diagnosed, the more the parents are able to obtain the necessary support. However, as most parents reported not having sufficient support in the academic context, it is of great concern that South African schools and teachers are unable to provide the necessary support for managing dyslexia. As Dreyer (2015:96) stated, negative parental emotions may intensify if parents do not receive the necessary support from the community and education system. When looking at parental emotions when their child was diagnosed with dyslexia in South Africa, many emotions corresponded with previous research. Emotions such as anger, frustration, guilt and helplessness are all common amongst parents with CWD, both in the international and local context (Banga & Ghosh 2017:959; Beck et al. 2017:152; Cook 2017:51; Du Plessis 2012:36-37; Hegstad 2017:8; Ross 2019:141, 153). This study had similar findings in that the parents mostly felt emotions such as frustration, guilt and helplessness, with the addition of stress and anxiety. It was found that parents in Group B were better able to cope with the demands placed on them by having a CWD. This correlates with previous research conducted by Dreyer (2015:96), as Group B reported feeling more supported and thus experienced fewer negative emotional perspectives compared with those in Group A. Resources of schools in South Africa The results of this study indicated that South African schools lack resources for the treatment of dyslexia. More parents in Group A were uncertain about the resources currently available than those in Group B. This aligns with both previous international and local research (Clark et al. 2019:2, 9; Rao et al. 2017:163). Research in the South African context shows that children in South Africa are being diagnosed late or remain unidentified because of limited resources (Clark et al. 2019:2, 9; Mkhwanazi 2018:76). Similar findings were noted in this research study, as the majority of CWD in Group A (73%) and Group B (60%) were diagnosed with dyslexia after the age of 8 years. It was alarming to note that in approximately half of the cases (48%) the parents were the first to notice their child's dyslexia, and in only 28% of cases a teacher was able to identify that the CWD had dyslexia.
In the UK, the most effective management for dyslexia comprises multisensory approaches incorporating auditory, visual and kinaesthetic aspects (Tidy & Huins 2015:3). It is evident from the results that CWD who attend South African schools do not receive multisensory management approaches. In the South African context, multisensory teaching is especially needed because of the linguistically diverse environment, and the lack thereof is indicative of inadequate management. Similarly, the current study also found that both groups experienced similar difficulties in accessing professionals and resources for their CWD. It was noted that 34% of the CWD in this study had to change schools, with a lack of resources (44%) and learning difficulties (34%) being the overarching reasons for the change. A similar finding was reported by Mkhwanazi (2018:59-60), who also stated that insufficient support in mainstream schools resulted in attendance of special schools being required. These results further support the finding that South African schools do not have sufficient resources to provide effective services for CWD and their families. This lack is a cause of great concern, as it suggests that South African schools and teachers are not equipped to provide sufficient support to CWD and their parents. This finding is similar to those of studies conducted in the international context, which also state that parents have difficulty accessing professionals and resources (Beck et al. 2017:152; Cook 2017:52; Ross 2019:141, 153). As dyslexia is a life-long learning disability, the necessary support during the school years is essential for future success; it is therefore troublesome that South African schools do not have the appropriate resources to facilitate these CWD's potential success. Parents' level of knowledge regarding their children with dyslexia and the management of dyslexia in schools The results obtained reflected that, although there were some differences in the responses to the true and false questions pertaining to the parents' knowledge regarding their CWD and the management of dyslexia in schools, the parental knowledge of both groups was generally appropriate. However, there is still some discrepancy between what parents believe dyslexia is, the difficulties that may be experienced by CWD, and how to appropriately assess and diagnose dyslexia. The overall finding that parental knowledge is relatively acceptable contradicts the findings of Mkhwanazi (2018), which stated that parents, specifically from the KwaZulu-Natal region, had insufficient knowledge of dyslexia. Perceptions regarding the South African parents' role and involvement in their children with dyslexia's education Although most parents in both groups reported being involved in the management of their child's dyslexia in the school environment, a difference was found between the groups. Group B seemed to be more involved in the management of their child's dyslexia compared with Group A, possibly because their children had been diagnosed with dyslexia for a longer period. The parents are therefore more aware of how to be involved in the management of their CWD, given their longer experience with the disorder. Most parents in both groups felt that they interacted with the school or teacher(s) to assist their child in the academic environment; however, Group A was slightly more in agreement with this statement.
Similarly, most parents in both groups reported often meeting with their child's teacher(s) concerning their child's reading difficulties; in contrast to the aforementioned, however, more parents from Group B reported meeting with their child's teacher(s) compared with Group A. Most parents in both groups agreed that they were aware of the role they played in their child's education and that their opinions and views were considered in the school context. It was noted that more parents from Group B than from Group A reported being aware of their role in their child's education, which can understandably be ascribed to increased exposure to the disorder. Similarly, more parents from Group B agreed that their opinions and views of their CWD's education were considered in the school context compared with the parents of Group A. These findings contradict previous research studies conducted in the international context, which state that parents' roles are uncertain and that their views are not considered (Beck et al. 2017:152; Cook 2017:52; Ross 2019:141, 153). It was evident that the parents from Group B, compared with those from Group A, believed that they were well informed on what dyslexia is and on effective management techniques in the school context. In conclusion, parents from Group B seemed to better understand their role and were more involved in their CWD's education compared with Group A. South African parents' experiences regarding the management of dyslexia in South African schools The Department of Basic Education in South Africa makes provisions for different accommodations for CWD (IEB 2017). Most parents agreed that appropriate and sufficient accommodations were provided. However, similar to the findings of Dreyer (2015:96), 30% of parents stated that they had not been granted accommodations, implying that there is still cause for concern regarding teachers and government officials providing the granted accommodations. Largely, one can conclude that a substantial proportion of parents did not feel that dyslexia is managed effectively in South Africa. Although most parents felt that their CWD's teacher had a positive view of their CWD, they were concerned about their CWD's teacher's knowledge of dyslexia. A smaller portion of the parents felt that there was no one in the school who understood their CWD. It was clear from the results obtained that many parents (in both Groups A and B) had negative experiences whilst interacting with the school or teacher(s). It was, however, found that Group B had more negative experiences than Group A, where the CWD had been diagnosed more recently. A possible discrepancy between the school's and teachers' knowledge of dyslexia in previous versus recent years could be another reason for this finding. More parents from both groups reported having a good relationship with their CWD's teacher(s) than reported that their child's teacher(s) kept them informed of their CWD's progress in the classroom. This is seemingly contradictory, as one would assume that having a good relationship with the child's teacher(s) would result in the teacher keeping the parents informed regarding the child's progress. It was noted that more parents from Group B appeared to have a good relationship with their CWD's teacher(s) compared with Group A, regardless of their negative experiences earlier on. Nonetheless, the Group B parents also seemed to have better relationships with their CWD's school and teacher(s) currently.
These findings are in line with a recent study conducted in the North-West province, which reported that South African public school environments do not consider the needs of learners with disabilities despite the Education White Paper policy (Leseyane et al. 2018:6). Yet most parents felt that there was at least one person who understood their CWD and that some teachers were putting in extra effort, supporting their child's learning, fostering hope and motivation and treating their child fairly. Strengths and limitations There are strengths and limitations that should be considered when interpreting the research findings. The data were obtained from participants in all nine provinces, which is representative of the South African context. A variety of school contexts (i.e. private, public and LSEN) were also included in this study. However, only English- and Afrikaans-speaking individuals participated in this research study. This means that the research findings cannot be generalised to speakers of the other official languages of South Africa, which is important as South Africa is a multilingual and multicultural context. Another limitation was that the study population was small for survey research, and the findings could thus have been skewed. Some of the participants also did not complete the full survey, possibly indicating that participants were uncomfortable with, or did not understand, some of the questions. Incomplete survey responses could also result in skewed data. Implications or recommendations As dyslexia is a life-long learning difficulty that places CWD at risk of a variety of difficulties such as illiteracy and/or social exclusion, it is of great concern that parents of CWD reported not having enough support. Recommendations for clinical practice are thus that South African professionals trained in managing dyslexia (such as SLTs) utilise the present results to improve their own management of dyslexia in South African schools. This may be achieved by advocating for CWD as well as providing teachers and schools with the necessary knowledge and support to better address these issues in the classroom and school context through specific training pre- and post-tertiary education. Future research should be conducted to determine South African teachers' perspectives and knowledge regarding dyslexia, including their ability to identify warning signs and manage dyslexia in the school context. Further studies should also be conducted to determine whether parents from more culturally and linguistically diverse backgrounds have experiences similar to those of the parents in this study. These future research investigations may subsequently contribute to the knowledge base surrounding the current management of CWD in South Africa. Conclusion The overall aim of this study was to determine parental perspectives of the management of their CWD in South African schools. The parents who participated were appropriately informed on what dyslexia is as well as on its effective management. The majority had sufficient knowledge of their role in their CWD's education. However, it is clear that parents of CWD feel that they do not receive sufficient support from their CWD's school and teacher(s) in the education of their CWD and the management of their dyslexia. There is a discrepancy between the type and quantity of accommodations provided to CWD, and multisensory teaching methods are largely lacking in the South African school system.
It was also evident that a lack of resources and of access to appropriate services for their CWD was perceived as problematic. As such, the vast majority of parents with CWD in South Africa do not perceive themselves as receiving enough support within the South African school environment to effectively address their CWD's specific needs.
2022-09-04T15:19:35.915Z
2022-08-30T00:00:00.000
{ "year": 2022, "sha1": "ea504fc4356adbfcfaca03a4808c52ddc94e9c8e", "oa_license": "CCBY", "oa_url": "https://sajce.co.za/index.php/sajce/article/download/1136/2225", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d2895983316a07c851d6a99cde556b35036e222d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [] }
247156692
pes2o/s2orc
v3-fos-license
Ecology and diversity of culturable fungal species associated with soybean seedling diseases in the Midwestern United States Abstract Aims To isolate and characterize fungi associated with diseased soybean seedlings in Midwestern soybean production fields and to determine the influence of environmental and edaphic factors on their incidence. Methods and Results Seedlings were collected from fields with seedling disease history in 2012 and 2013 for fungal isolation. Environmental and edaphic data associated with each field were collected. A total of 3036 fungal isolates were obtained and assigned to 76 species. The most abundant genera recovered were Fusarium (73%) and Trichoderma (11.2%). Other genera included Mortierella, Clonostachys, Rhizoctonia, Alternaria, Mucor, Phoma, Macrophomina and Phomopsis. Most recovered species are known soybean pathogens; however, non-pathogenic organisms were also isolated. Crop history, soil density, water source, precipitation and temperature were the main factors influencing the abundance of fungal species. Conclusion Key fungal species associated with soybean seedling diseases occurring in several US production regions were characterized. This work also identified major environmental and edaphic factors affecting the abundance and occurrence of these species. Significance and Impact of the Study The identification and characterization of the main pathogens associated with seedling diseases across major soybean-producing areas could help manage those pathogens, and devise more effective and sustainable practices to reduce the damage they cause.
KEYWORDS: environmental factors, Fusarium spp., Glycine max, seedling diseases, soilborne pathogens, Trichoderma spp.
INTRODUCTION Soybean (Glycine max [L.] Merr.) is an economically important crop worldwide and is considered to be essential for global food security (Hartman et al., 2011). Soybean production is dominated by Brazil, the United States and Argentina, which were responsible for 81% of total global production in the 2019/2020 growing season (USDA 2020). The United States is the world's second largest soybean producer, with the majority of production concentrated in the Midwestern United States (USDA 2020). The impact of seedling diseases on soybean productivity is a major challenge to achieving maximum crop yield potential wherever soybean is grown. In fact, seedling diseases ranked third among diseases that consistently reduced soybean yields in the United States over the last 20 years (Bandara et al., 2020; Wrather et al., 2010). It is difficult to predict when seedling diseases will cause economic losses, due to environmental factors, variability in the pathogenicity of pathogen populations and pathogen interactions with the soil microbiome. Additionally, multiple pathogens often present in the same field require different management approaches, making seedling diseases difficult to manage and emphasizing the importance of accurate pathogen identification (Hartman et al., 2015). Seedling diseases in soybeans are caused by a complex of pathogen species. The most commonly reported species include Fusarium spp., Rhizoctonia solani, Phytophthora spp. and Pythium spp. Soybean roots can be colonized by different fungal endophytes, including pathogenic and non-pathogenic organisms (Fernandes et al., 2015; Impullitti & Malvick, 2013; Pimentel et al., 2006). Furthermore, multiple pathogenic species associated with seedling diseases can occur in the same field, which can hinder disease management (Díaz Rojas et al., 2017a).
Several past studies aimed to identify pathogens associated with seedling diseases in soybean (Killebrew et al., 1993; Rizvi & Yang, 1996; Rojas et al., 2017a). In many instances, these studies focused on a limited geography or on a specific set of pathogens. A study conducted in parallel to the research described in this paper concentrated on oomycete species (Rojas et al., 2017a, 2017b), whereas this study focused on fungal species. Rojas et al. (2017a, 2017b) identified oomycete species associated with soybean seedling diseases and documented the diversity and ecology of these communities. They identified a total of 84 oomycete species, 43 of which were confirmed to be pathogenic to soybean. The identified species belonged predominantly to the genus Pythium (94.85%), and the remaining species included Phytophthora, Phytopythium, Aphanomyces and Pythiogeton. A total of 13 oomycete species characterized by Rojas et al. (2017a) had not been previously reported as root pathogens of soybean. The abundance, diversity and pathogenicity of these organisms can be influenced by edaphic and environmental factors (Rojas et al., 2017b; Srour et al., 2017; Yang & Feng, 2001). Soil temperature at planting, precipitation and soil type can influence pathogen development and exacerbate disease symptoms (Broders et al., 2007). Cultural practices that affect the composition of the soil microbial community can affect populations of soilborne pathogens and consequently the incidence of seedling diseases. Adequate planting depth, early planting, cropping history, cultivar selection, and the adoption of cover crops and tillage, which can reduce the presence of primary inoculum in the vicinity of seedling roots, are practices that have been reported to potentially affect the incidence and severity of seedling diseases (Broders et al., 2007; Pankhurst et al., 1995). Undesirable shifts in populations of soil microbes may result from edaphic modifications in adopted production systems that provide pathogen populations with competitive advantages, thus reducing the native disease-suppressive capacity of the soil (Hartman et al., 2018; Srour et al., 2020; van Elsas et al., 2002). Therefore, the incidence and severity of diseases caused by soilborne pathogens are exacerbated by conditions favourable to pathogen development. Cool temperatures, compaction and wet soils can favour Pythium spp. and Fusarium spp., increasing the severity of root rot and/or seedling damping-off on soybean, whereas similar conditions with higher temperatures (>15°C) can favour Phytophthora spp. and Rhizoctonia solani, causing root and stem rot (Winsor, 2020). Identifying environmental conditions that affect the incidence of specific pathogens in symptomatic soybean seedlings could improve the current understanding of the aetiology of these diseases.
This knowledge might have significant implications for the development and optimization of management strategies targeting seedling diseases. For instance, Rojas et al. (2017b) determined that latitude, longitude, precipitation, clay content and soil electrical conductivity were the most impactful factors affecting oomycete community composition in soybean fields with a history of seedling diseases. The identification and characterization of the different fungi associated with seedling diseases in major soybean-producing areas can provide valuable resources for research focused on seedling disease management, including the evaluation of fungicide resistance and the development of effective and sustainable seed treatments, helping breeding programmes set priorities when targeting resistance to fungal pathogens, and testing and evaluating different management practices and their impact on seedling diseases. In this study, we characterized the culturable fungal community associated with soybean seedlings across eight large soybean-producing states in the Midwestern United States. The objectives of the study were to: (i) identify the fungal species associated with soybean seedlings across major soybean-producing states in the United States; and (ii) determine the influence of several environmental and edaphic factors on the occurrence and abundance of these fungal species. Sample collection and fungal isolation A survey was conducted across eight US states (Arkansas, Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota and Nebraska) during 2012 and 2013 (Figure 1). Between five and eight fields were sampled per participating state, with a total of 49 fields sampled in 2012 and 47 fields sampled in 2013. Fields with a history of seedling disease or plant stand issues were selected. Collaborators from each state collected 25 soybean seedlings from each of the fields in that state following a standard sampling procedure described by Rojas et al. (2017a). Twenty-five seedlings with above-ground symptoms were collected from a W-shaped transect across each field. In fields where not enough symptomatic seedlings were found, seedlings were randomly sampled. Due to crop rotation practices, the fields sampled in 2012 were different from the fields sampled in 2013. The growth stage at which seedlings were sampled varied from VE to V4. Seedlings were transported in coolers with ice and processed within 24 h after collection by the collaborators in each state following a standard protocol, as follows: seedlings were prepared for isolation by washing their roots under running tap water until all visible soil was removed. Seedling roots were then disinfected by soaking in a 1% NaOCl solution for 30 s, followed by a thorough rinse in distilled water for 1 min. Seedlings were then dried with a sterile paper towel to remove excess water. Root sections (0.5-1 cm) were cut from diseased tissue, including the edge of the disease lesions, using a sterile scalpel. The root sections were placed onto water agar media plates amended with streptomycin (30 mg/L). Ko and Hora medium (Ko & Hora, 1971) was used in 2013, in addition to water agar, to increase the recovery of R. solani. The plates were incubated in the dark at 20-22°C for 7 days and were checked daily for hyphal growth. Growing hyphal tips were transferred to new potato dextrose agar (PDA) amended with ampicillin (50 mg/L) and tetracycline (50 mg/L).
Pure colonies were labelled and stored as plates at 4°C until transferred to new PDA plates for molecular identification. The objective of this research was to focus on fungal isolates, while the oomycete isolates were identified and characterized by Rojas et al. (2017a, 2017b). Isolate identification and fungal storage Fungal isolates were identified using PCR and subsequent sequencing of the internal transcribed spacer (ITS) of nuclear ribosomal RNA, using the primer pair ITS1 and ITS4 (White et al., 1990). Speciation within the Fusarium genus was based on the translation elongation factor (EF1α) gene using a nested PCR with the primers EF1 and EF2 (O'Donnell et al., 1998) and Alfie1 and Alfie2 (Yergeau et al., 2005). To confirm the identity of ambiguous isolates, the intergenic spacer (IGS) region of the ribosomal RNA was also sequenced using the primers LR12R and invSR1R (Vilgalys et al., 1994). As per Alshahni et al. (2009), a total of 70 μl of alkaline lysis buffer (ALB; 20 mM Tris HCl pH 8.0, 5 mM EDTA, 400 mM NaCl, 0.3% SDS, 200 μg/ml proteinase K) was added to a 2-ml Eppendorf tube. A pin head-sized piece of mycelium was collected from 7- to 10-day-old cultures using a sterile toothpick. Mycelia were added to the ALB buffer, incubated at 55°C for 2 h and then at 95°C for 10-15 min, placed on ice for 3 min and centrifuged for 5 min at 10,600 g. PCR mixes consisted of 1 U Taq DNA Polymerase (GenScript), 1× reaction buffer (containing 2 mM Mg2+), 0.2 mM dNTP mix, 0.25 μM of each primer, 0.6 μg/ml of bovine serum albumin, 0.5-2 μl of fungal DNA and sterile distilled water to a final 50 μl reaction volume. The amplification program for the reactions targeting the ITS region consisted of 95°C for 2 min initial denaturation; 35 cycles of 95°C for 30 s denaturation, 50°C for 1 min annealing and 72°C for 1 min elongation; and 72°C for 10 min final extension. The amplification program for the reactions targeting the EF1α region consisted of 94°C for 3 min initial denaturation; 35 cycles of 94°C for 30 s denaturation, 52°C for 30 s annealing and 72°C for 1 min elongation; and 72°C for 10 min final extension. The amplification program for the reactions targeting the IGS region consisted of 94°C for 3 min initial denaturation; 35 cycles of 94°C for 30 s denaturation, 55°C for 1 min annealing and 72°C for 2 min 30 s elongation; and 72°C for 10 min final extension. Amplicons were purified by adding 5 μl of a mixture of 3 U of exonuclease I (Thermo Scientific) and 0.5 U of shrimp alkaline phosphatase (Affymetrix), with 30 min incubation at 37°C followed by 85°C for 15 min to deactivate the enzymes. Amplicons were sequenced by Sanger sequencing at SIUC. The Four Peaks software (www.nucleobytes.com) was used to visualize the sequences and to trim noise from the 3′ and 5′ ends. Sequences were deposited in GenBank under accession numbers MK593627-MK595448 and MN451718-MN452860 for ITS, MN553711-MN555299 for EF, and MN555300-MN555327 for IGS. To identify the fungal isolates, the ITS sequences were primarily queried against the NCBI fungal database using the BLASTn search algorithm (Altschul et al., 1990). The isolates were assigned to distinct species using blastn with an E-value cutoff of <10−4 and a minimum of 97% sequence similarity. To identify species within the Fusarium genus, EF sequences were queried against the latest versions of two curated databases, Fusarium MLST (http://www.cbs.knaw.nl/Fusarium) and Fusarium-ID (http://isolate.fusariumdb.org).
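As a rough illustration of this species-assignment step, the R sketch below applies the reported thresholds (E-value < 10−4, ≥97% identity) to tabular blastn output; the file name and the assumption that blastn was run with the standard tabular format (-outfmt 6) are ours, not details given in the text.

```r
# Standard column order of blastn tabular output (-outfmt 6); assumed here.
cols <- c("qseqid", "sseqid", "pident", "length", "mismatch", "gapopen",
          "qstart", "qend", "sstart", "send", "evalue", "bitscore")
hits <- read.table("its_vs_nt.blastn.tsv", col.names = cols, sep = "\t",
                   stringsAsFactors = FALSE)

# Apply the thresholds reported in the text: E-value < 1e-4 and >= 97% identity.
hits <- subset(hits, evalue < 1e-4 & pident >= 97)

# Keep the single best hit (highest bitscore) per isolate as its species call.
best <- do.call(rbind, lapply(split(hits, hits$qseqid),
                              function(h) h[which.max(h$bitscore), ]))
head(best[, c("qseqid", "sseqid", "pident", "evalue")])
```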
All identified fungal isolates were used to build a fungal collection for long-term storage using a filter paper method described by Fong et al. (2000). The fungal isolates were grown for 5-7 days over sterile Grade 3 Whatman filter paper pieces placed on PDA. Full-strength PDA was used, and the plates were incubated at 25°C in the dark until sporulation. The filter paper pieces covered with spores were air dried for 8 h in a laminar flow hood, placed into sterile labelled glassine envelopes and stored at −20°C. Environmental and edaphic variables Edaphic and environmental parameters prevailing in the sampled fields were collected as described by Rojas et al. (2017b). Information about environmental and edaphic (soil) factors associated with each sampled field was obtained using GIS coordinates to retrieve data from publicly available databases. The variables of interest were precipitation (millimetres), temperature (°C), previous crop, slope (°), available water capacity (cm water/cm soil), cation exchange capacity (milliequivalents/100 g of soil at pH 7.0), clay content (%), sand content (%), silt content (%), soil organic matter (%), soil bulk density at 1/3 bar (g/cm3), water content at 1/3 bar (volumetric percentage of the whole soil), surface texture, soil pH and water source (irrigated or rain-fed). Data pertaining to the soil physical and chemical characteristics were retrieved from the Natural Resources Conservation Service soil database (https://www.nrcs.usda.gov/). The yearly temperature and precipitation data were retrieved from the PRISM Climate Group (http://www.prism.oregonstate.edu/). Information about topology was obtained from the United States Geological Survey (https://www.usgs.gov/), while data related to land usage were retrieved from the USDA National Agricultural Statistics Service (https://nassgeodata.gmu.edu/CropScape/). Statistical analysis A fungal species table was created based on the molecular identification of the isolates recovered from the collected soybean seedlings. Species with an abundance and frequency <10% were excluded from further analysis. The diversity within each field (alpha diversity) was estimated using the Shannon-Wiener index, Simpson index, Pielou's evenness and richness through the vegan package (Oksanen, 2015) in R. To study the fungal diversity between communities (beta diversity), the Bray-Curtis dissimilarity index (Bray & Curtis, 1957) was calculated based on the species abundance. The resulting dissimilarity matrices were used to perform a Permanova analysis for the categorical variables 'state', 'year', 'water source', 'previous crop' and 'surface texture'. The Permanova analysis was followed by a pairwise Adonis test (Martinez, 2019) to test the statistical significance of all pairs of samples with regard to 'previous crop', 'water source' and 'surface texture', with 9999 permutations. To investigate the effect of the main environmental factors on beta diversity, a canonical correspondence analysis (CCA) (Økland & Eilertsen, 1994) was performed using the species composition data over the study sites as a function of different environmental variables. A Kendall's correlation test was performed to determine the effect of each environmental and edaphic parameter on each fungal species (Kendall & Gibbons, 1990). Fields were kept separate and were not grouped by state, given the variation in environmental and edaphic characteristics across these geographic locations, even within a state.
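A minimal sketch of this workflow in R, using the vegan package named above, is shown below; the object names ('counts', a fields-by-species abundance matrix, and 'meta', a data frame of field metadata with the listed categorical columns) are assumptions for illustration, not the study's actual objects.

```r
library(vegan)

# Alpha diversity per field (rows of 'counts' are fields, columns are species)
H        <- diversity(counts, index = "shannon")  # Shannon-Wiener index
D        <- diversity(counts, index = "simpson")  # Simpson index
richness <- specnumber(counts)                    # species richness
evenness <- H / log(richness)                     # Pielou's evenness

# Beta diversity: Bray-Curtis dissimilarity between fields
bc <- vegdist(counts, method = "bray")

# Permanova on the categorical variables, with 9999 permutations as reported
adonis2(bc ~ state + year + water_source + previous_crop + surface_texture,
        data = meta, permutations = 9999)
```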
A total of 12 environmental and edaphic factors were tested using Kendall's correlation against the most abundant fungal species. Species diversity per field across states The diversity within the surveyed fields (alpha diversity) was assessed by calculating the Shannon-Wiener index, the Simpson index and Pielou's evenness (Pielou, 1966). The average number of species per field across different states ranged from 5.8 to 13.8 (Figure 4; Table S1). Indiana in 2013 and Michigan in 2012 had the highest species richness, with an average count of 13.8 and 12.6 observed species per field, respectively. Arkansas in 2012 and Kansas in 2012 had the lowest richness, with 5.8 and 6.3 average numbers of species, respectively (Figure 4; Table S1). The average Shannon-Wiener index (H′) per state and year ranged from 1.51 to 2.13, which reflects low to moderate diversity (Figure 4; Table S1). Intermediate evenness values were evidenced by the relatively higher abundance of two species in comparison to the others, such as F. sporotrichioides and F. oxysporum in Illinois in 2013 and R. solani and F. solani in Arkansas in 2013 (Figure 3). Influence of abiotic factors on the community structure Yearly average precipitation, soil bulk density at 1/3 bar, cation exchange capacity, soil pH and organic matter appeared to be the major abiotic factors associated with the fungal community structure (p = 0.05) (Figure 5). In total, 32 fungal species profiles were obtained from 96 field samples collected in 2012 and 2013, which were used to determine correlations with 12 environmental variables (File S1). A heatmap of the correlations between the environmental variables and filtered fungal species associated with diseased seedlings is depicted in Figure 6. Among the investigated environmental variables, yearly average precipitation and yearly average temperature showed the strongest correlations with fungal species. Most isolated fungal species, apart from Fusarium spp., were negatively correlated with yearly average precipitation (τ = −0.1 to −0.3) (Figure 6). Conversely, the incidence of most fungal species correlated positively with average temperature (τ = 0.1 to 0.3). Among the edaphic factors, sand content showed a positive correlation with Trichoderma species (τ = 0.2) and Fusarium oxysporum (τ = 0.3), and a negative correlation with F. proliferatum (τ = −0.2) and F. acuminatum (τ = −0.2). Water content was negatively correlated with T. spirale, T. harzianum and T. hamatum (τ = −0.2). Soil bulk density was positively correlated with F. oxysporum (τ = 0.3), whereas F. proliferatum and F. acuminatum were negatively correlated with soil bulk density (τ = −0.3). In general, we noted opposite trends for F. oxysporum in comparison to the other Fusarium species. The correlations of available water capacity, cation exchange capacity, clay content, organic matter, soil pH, silt content and slope with individual species were not significant (Figure 6). The distribution of the abundance of the top isolated fungal species across the temperature and precipitation gradients is displayed in Figure 7. There were significant differences between states in the incidence of fungal species isolated from soybean seedlings (p = 0.0001; Table 1). The effect of the 'year' factor on the incidence and abundance of fungal isolates was also significant (p = 0.0001; Table 1).
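For illustration, the sketch below shows how the CCA and Kendall's correlation analyses described above could be run in R with vegan and base functions; 'counts' and the environmental data frame 'env' (with the abbreviated variable names used in the figure captions) are assumed objects, and the species column name is illustrative.

```r
library(vegan)

# Canonical correspondence analysis: species composition constrained by
# the environmental variables found to be influential (Figure 5).
ord <- cca(counts ~ PYR + TAVG + DB + CEC + PH + OM, data = env)
anova(ord, by = "terms", permutations = 999)  # permutation test per constraint

# Kendall's tau between one species' relative abundance and yearly
# precipitation, as in the Figure 6 heatmap.
rel_abund <- counts[, "Fusarium_oxysporum"] / rowSums(counts)
cor.test(rel_abund, env$PYR, method = "kendall")
```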
'Water source', 'previous crop' and 'surface texture' also influenced the community structure of fungal isolates associated with diseases based on the Permanova tests (p < 0.05; Table 1). Different fungal communities were noted following soybean, corn and grassland/pasture, suggesting an effect of previous crops on the fungal community.
FIGURE 5 Canonical correspondence analysis (CCA) scaling type 2 plot of the fungal community structure isolated from diseased soybean seedlings in the Midwest USA. Environmental variables that significantly influence the community structure are plotted as vectors based on correlations with species composition. CEC: cation exchange capacity (milliequivalents/100 g of soil at pH 7.0), DB: soil bulk density at 1/3 bar (g/cm3), OM: soil organic matter (%), PH: soil pH, PYR: yearly precipitation (mm)
Soil type and texture (amount of clay, sand and/or silt) also seemed to have influenced the community. Fungal communities in silty clay and silty loam soils (sand content <40%) were significantly different from those in loam soils (sand content >40%), as shown in Table 1. Moreover, water source exhibited a significant effect on fungal diversity (p < 0.05), with rain-fed soybean plots harbouring fungal communities distinct from those identified in irrigated fields (Table 1). DISCUSSION Seedling diseases and root rot pathogens of soybean significantly reduce yields in the major US soybean-producing states.
FIGURE 6 Heatmap showing correlations between environmental and soil edaphic factors and the most abundant fungal species isolated from diseased seedlings. Colours represent Kendall's correlation coefficients (τ) (Kendall & Gibbons, 1990) between relative abundances of the top fungal species and environmental parameters. Asterisks (*) indicate the significance level for Kendall's rank correlation (p < 0.05*, 0.01**, 0.001***). AWC: available water capacity (cm water/cm soil), CEC: cation exchange capacity (milliequivalents/100 g of soil at pH 7.0), CLAY: clay content (%), DB: soil bulk density at 1/3 bar (g/cm3), OM: soil organic matter (%), PH: soil pH, PYR: yearly precipitation (mm), SAND: sand content (%), SILT: silt content (%), SLOPE: slope (°), TAVG: yearly average temperature (°C), WC: water content at 1/3 bar (volumetric percentage of the whole soil)
Their diagnosis and management can be challenging, and it is often difficult to predict when seedling diseases will be severe in a specific location and year, given the complexity of factors affecting their incidence and severity. Characterization of predominant pathogens associated with seedling diseases across major soybean-producing areas could improve management efforts, ultimately leading to more effective and sustainable practices to mitigate the impacts of seedling disease. In this study, we identified 76 fungal species associated with soybean seedlings collected from fields where seedling diseases have been problematic. Although no pathogenicity tests were conducted in this study to determine the pathogenicity of the collected isolates on soybean, several of the isolates belonged to species previously documented to be soybean pathogens. Regardless of location, the majority of fungal isolates recovered in this study were of the order Hypocreales, with Fusarium (71%) being the most abundant genus. Of the 17 Fusarium species isolated, F. oxysporum, F. solani, F. equiseti, F. acuminatum, F. graminearum, F. sporotrichioides and F. proliferatum were the most abundantly recovered.
Similarly, a 3-year survey conducted in Iowa identified 15 Fusarium spp. associated with soybean roots, with F. oxysporum, F. acuminatum, F. graminearum and F. solani as the most frequent and widespread species (Díaz …). Several of the Fusarium species identified in the present study are reported to be pathogenic to soybean. For instance, F. solani, F. oxysporum, F. proliferatum, F. graminearum and F. sporotrichioides are known causal agents of soybean root rot (Abdelmagid et al., 2020; Broders et al., 2007; Chang et al., 2015; Farias & Griffin, 1989; Killebrew et al., 1993; Pioli et al., 2004; Rizvi & Yang, 1996). Fusarium redolens has been reported to cause root rot in Minnesota soybean fields (Bienapfl et al., 2010). Fusarium fujikuroi has also been reported to cause pre- and post-emergence damping-off on soybean (Chang et al., 2020; Pedrozo et al., 2015). Fusarium thapsinum and F. equiseti have also been reported to be seedborne pathogens of soybean (Pedrozo & Little, 2014). Phomopsis longicolla and Alternaria alternata, which were also recovered in this study, are known seedborne pathogens of soybean (Kunwar et al., 1986; Li et al., 2010). It is to be noted that, in this study, the seeds were not tested prior to planting to ensure that they were pathogen-free. Therefore, it is possible that some of the pathogens that we isolated originated from contaminated seeds. Other well-known soybean pathogens isolated in this study were R. solani, the causal agent of Rhizoctonia damping-off and root rot of soybean (Ajayi-Oyetunde & Bradley, 2017), and M. phaseolina, the causal agent of charcoal rot (Romero Luna et al., 2017). In this study, T. harzianum, T. spirale, T. koningiopsis, T. virens and T. hamatum were isolated at relatively high frequency from diseased roots. These Trichoderma spp. have been reported in the literature to mycoparasitize and antagonize plant pathogens such as Fusarium spp., R. solani, A. alternata and M. phaseolina (Harman, 2000; Howell, 2003; Mukherjee et al., 2012; Verma et al., 2007). In a separate study evaluating the diversity of endophytic fungi isolated from soybean roots, Trichoderma was the second most abundant genus (16.9%) after Fusarium (39.7%) (Yang et al., 2018), which is consistent with our findings. Conversely, other studies did not report Trichoderma spp. as part of the soybean fungal community (Fernandes et al., 2015; Dean et al., 2016; Pimentel et al., 2006) or reported a substantially lower frequency (<4%) of isolation of Trichoderma spp. from soybean roots (Impullitti & Malvick, 2013).
FIGURE 7 Distribution of the abundance of the top isolated fungal species across the temperature and precipitation gradients measured at the beginning of the growing season (April to July) and as a year average
TABLE 1 Permanova analysis of categorical variables influencing fungal community structure (beta diversity) associated with soybean seedlings across different states, based on Bray-Curtis distances
In this study, the higher abundance of Trichoderma spp. recovered in 2012 (17.6%) versus 2013 (6.9%) might have been due to the lower precipitation and warmer temperatures at the beginning of the growing season in 2012 compared to 2013, which might have favoured Trichoderma spp. over other fungal species. In addition, differences in the culture media used in the two years possibly affected the recovery rate of isolates. The high frequency of recovery of Trichoderma spp. may be attributed to the saprophytic habits of those species.
It should be noted that all Trichoderma species isolated in this study are considered to be endophytes (Contreras-Cornejo et al., 2016; Druzhinina et al., 2011). They have been reported to associate with the roots of host plants and to perform critical ecological functions, including disease suppression, improving nutrient solubilization and uptake, stimulation of plant growth and health, and reduction of abiotic stresses (Bucio et al., 2015; Harman, 2006; Shi et al., 2012; Yedidia et al., 1999). Nevertheless, the beneficial attributes of these associations to soybean plants may also depend on other factors, such as the abundance of these beneficial species in the soil and other abiotic and biotic factors that affect their activity (Burpee, 1990). It is to be noted here that several Trichoderma isolates from this study have been tested in related work and were shown to demonstrate strong antagonistic activity against F. virguliforme. In fact, some of the tested isolates significantly reduced sudden death syndrome (SDS) foliar symptoms and root rot on soybean caused by F. virguliforme in both greenhouse and microplot experiments. Clonostachys rosea, constituting 2% of the fungal isolates recovered in this study, has been reported to be a biological control agent with activity against important phytopathogens, including Sclerotinia sclerotiorum, F. graminearum and R. solani (Gimeno et al., 2019; Wu et al., 2018; Karlsson et al., 2015; Salamone et al., 2018). However, C. rosea has also been reported to be a potential pathogen, capable of causing root rot, interveinal chlorosis and marginal necrosis on soybean seedlings (Bienapfl et al., 2012). Our results also suggest that the incidence and severity of soybean seedling diseases and the composition of the pathogen populations that cause them are dependent on abiotic factors and location. This is in accordance with other studies indicating that microbial community patterns in the soil are primarily related to spatial, biotic and abiotic factors (Srour et al., 2017; Yergeau et al., 2010). Similarly, studies have shown that several biotic factors, such as the susceptibility of the plant host and the interaction with other microbes present in the soil, are key factors that shape the composition of the fungal community. The previous crop used in a cropping system impacts the composition of the microbial community in a given location by favouring the reproduction of either pathogenic and/or mutualistic organisms that are closely associated with that plant host (Benitez et al., 2017; Edwards et al., 2015). Our results suggest that the Fusarium community structure was strongly influenced by the 'previous crop' factor. Fusarium graminearum, F. oxysporum, F. proliferatum, F. acuminatum and F. equiseti were prevalent when corn was the previous crop, whereas F. solani had higher abundance under continuous soybean. Interestingly, T. harzianum, T. koningiopsis, C. rosea and R. solani were also more abundant under continuous soybean, whereas T. hamatum, T. spirale, T. virens and M. elongata were more abundant in soybean after corn. Although not tested in this study, the pathogen populations and the overall plant host-associated microbial community can be affected by other crucial factors, such as cover cropping, host genotype, plant growth stage, seed treatment, fertigation and tillage practices (Acharya et al., 2019; Longley et al., 2020).
In this study, the abundance of specific pathogens associated with seedling diseases was significantly affected by environmental and soil edaphic factors, previous crop, year and field location. The complexity of fungal communities in soil ecosystems is related to compositional changes caused by differences in edaphic and environmental variables (Dean et al., 2016; Jonhman et al., 1995). Rojas et al. (2017b) studied the oomycete community structure associated with soybean seedling diseases in a similar study. Their results also indicate seasonal temperature, seasonal precipitation, clay content, latitude and longitude as factors explaining the variability observed in the community composition. Environmental factors and conditions are especially influential on seedling diseases, which are more likely to occur under cool, wet conditions at the early stages of plant development. The average yearly temperature across the various locations ranged from a minimum of 4°C to a maximum of 19°C. Our results highlighted a positive correlation between the abundance of several fungal species and the average yearly temperature, including R. solani, which is known to be favoured by warmer temperatures (>15°C) (Dorrance et al., 2003). Due to shifts in global climate patterns influenced by climate change, temperature and precipitation are expected to increase. In the United States, many regions are already experiencing dramatic changes in weather patterns, which can alter the traditional range of some pathogens and thus affect the disease pressure they cause on crops (Delgado-Baquerizo et al., 2020; Velásquez et al., 2018). In fact, increasing temperatures may favour pathogens such as M. phaseolina and C. sojina, which may consequently be capable of surviving winters in more northern growing regions. Conversely, increasing temperatures might reduce the abundance and impact of other pathogens, such as Sclerotinia sclerotiorum, in the more northerly surveyed areas (Velásquez et al., 2018). In this work, key fungal species associated with soybean seedling diseases occurring in several US production regions were characterized. This work also identified major environmental and edaphic factors that affected the abundance and occurrence of these species. Not surprisingly, most of the recovered species were pathogens known to cause root rot, seedling decay and damping-off of soybean. However, non-pathogenic organisms, including putative biocontrol agents, were also frequently isolated from the roots. We have shown that crop rotation history, soil density, water source and environmental variables, such as precipitation and temperature, within an agricultural ecosystem directly influence the richness and abundance of fungal species colonizing plant roots. The information provided in this study can be used to improve management strategies for soybean seedling pathogens, for example by guiding seed treatment packages and fungicide products to target the most abundant and prevalent known pathogenic species within soybean fields. The potential biocontrol agents identified in this study might also be efficacious against soybean pathogens. The activity of these specific isolates should therefore be explored in future research.
Additionally, the fungal incidence and distribution data generated in this study may serve as a benchmark for future research to monitor changes in the composition of the predominant species profiles in these locales due to management practices or changes in environmental conditions.
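Associations like the reported abundance-temperature relationship for R. solani are typically quantified with a rank correlation. Below is a minimal, hypothetical sketch of such a computation; the column names and numbers are invented for illustration and are not the study's data:

```python
# Illustrative sketch (not from the study): quantifying an abundance-temperature
# association like the one reported for R. solani. All values are made up.
import pandas as pd
from scipy.stats import spearmanr

surveys = pd.DataFrame({
    "location":          ["A", "B", "C", "D", "E", "F"],
    "avg_yearly_temp_C": [4.0, 7.5, 10.0, 13.0, 16.0, 19.0],
    "rsolani_abundance": [0.01, 0.02, 0.05, 0.09, 0.14, 0.18],  # hypothetical isolate frequencies
})

# Spearman's rho is a rank correlation, robust to non-linear but monotone trends.
rho, p = spearmanr(surveys["avg_yearly_temp_C"], surveys["rsolani_abundance"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```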
2022-03-01T06:23:11.610Z
2022-02-28T00:00:00.000
{ "year": 2022, "sha1": "66110b8f05c5afd707e29934ef10722b716565c7", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Wiley", "pdf_hash": "8ad2659b282e3c016b5b04464f37b9c6a308165a", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
233961327
pes2o/s2orc
v3-fos-license
Isolation and identification of exosomes from feline plasma, urine and adipose-derived mesenchymal stem cells Background Exosomes, which consist of proteins, lipids, and nucleic acids enclosed by a phospholipid bilayer membrane, are one type of small extracellular vesicle that can mediate cell-cell communication. In recent years, exosomes have gained considerable scientific interest due to their broad application prospects in the diagnosis and treatment of human and animal diseases. In this study, we describe for the first time a feasible method designed to isolate and characterize exosomes from feline plasma, urine and adipose-derived mesenchymal stem cells. Results Exosomes from feline plasma, urine and adipose-derived mesenchymal stem cells were successfully isolated by differential centrifugation. Quantification and sizing of exosomes were assessed by transmission electron microscopy, flow nano analysis and western blotting. The detected particles showed the size (30–100 nm) and morphology typical of exosomes, as well as the presence of the marker proteins (TSG101, CD9, CD63, and CD81) known as exosomal markers. Conclusions The results suggest that differential centrifugation is a feasible method for the isolation of exosomes from different types of feline samples. Moreover, these exosomes can be used for further diagnostic and therapeutic applications in veterinary pre-clinical and clinical studies. Supplementary Information The online version contains supplementary material available at 10.1186/s12917-021-02960-4. Exosomes isolated from biological fluids such as plasma and urine hold diagnostic potential [11]. Plasma-derived exosomes (Plasma-exo), which are simple to collect and pose no risk to health, are considered diagnostic markers in several fields such as oncology [12], hematology [13], cardiovascular disease [14] and ischemic disease [15]. The study of their contents (proteins or nucleic acids) is also helpful for the treatment of diseases. Urine-derived exosomes (Urine-exo) are secreted by various cells in the urinary system and released into the urine. Changes in urinary exosome-derived miRNAs and proteins can be used as biomarkers in kidney diseases for monitoring disease progression and judging prognosis, and also have important value in disease treatment [16,17]. Mesenchymal stem cells (MSCs) have emerged as a promising therapeutic strategy for several diseases. There is accumulating evidence suggesting that their therapeutic effects are largely mediated by paracrine factors including cytokines, growth factors, and exosomes [18,19]. Numerous studies have revealed that MSC-derived exosomes (MSC-exo) might represent a novel cell-free therapy with compelling advantages over MSCs, such as lower immunogenicity and no tumorigenicity [20][21][22][23]. In recent years, natural or artificially engineered exosomes have shown good prospects as new carriers for drug delivery in the clinic. Establishing a fast, simple and stable separation method is therefore particularly important for the research and application of exosomes [24,25]. Isolation and identification methods for exosomes derived from different tissues have been established for dog [26], horse [27] and cattle [28] samples, but exosomes from feline samples have rarely been reported. The objective of this study was to develop an efficient and robust method for isolating MSC-exo, Plasma-exo and Urine-exo from feline samples (Fig. 1).
This study provides comprehensive techniques, including transmission electron microscopy, flow nano analysis and western blotting, to identify and characterize exosomes, allowing them to be quantified and sized as well as characterized through their specific morphology and distinct protein expression.

Identification of adipose-derived mesenchymal stem cells (AD-MSCs) Differentiation of AD-MSCs After induction with adipogenic medium for 14 days, AD-MSCs gradually changed from fibroblast-like cells to flattened cells, and lipid droplets of various sizes appeared in the cytoplasm. Staining with Oil Red O was positive, with the multiple lipid droplets in differentiated cells stained red. After incubation with osteogenic medium for 5 days, MSCs exhibited obvious morphological alterations. Calcium nodules appeared on the 10th day of induced differentiation, tightly packed colonies forming nodule-like structures were observed, and calcium deposition in these cells was demonstrated by alizarin red staining (Fig. 2A).

Flow cytometry analysis of AD-MSCs AD-MSCs highly expressed the mesenchymal stem cell surface markers CD44, CD90 and CD105, whereas the haematopoietic stem cell surface marker CD34, the leukocyte common antigen CD45 and the major histocompatibility complex class II molecule HLA-DR were expressed at low levels (Fig. 2B). That is, the isolated and cultured cells conformed to the characteristics and identification criteria of mesenchymal stem cells.

Transmission electron microscopy (TEM) TEM confirmed that exosomes derived from the 3 different sources showed the cup-shaped spherical morphology of exosomal vesicles that are concave in the middle (Fig. 3). The vesicles observed ranged in size from 30 to 100 nm.

Fig. 1 Procedures of the methods used for the isolation of exosomes from feline samples by differential ultracentrifugation. A The schema of the isolation procedure for feline adipose-derived mesenchymal stem cell-derived exosomes, B the schema of the isolation procedure for feline plasma-derived exosomes, and C the schema of the isolation procedure for feline urine-derived exosomes. Fig. 2 CD44, CD90, and CD105 were highly expressed on feline AD-MSCs, whereas the hematopoietic stem cell marker CD34, the leukocyte common antigen CD45, and the major histocompatibility complex molecule HLA-DR were rarely expressed.

Western blotting Our analysis revealed detection of four exosomal marker proteins (TSG101, CD9, CD63, and CD81): all samples isolated by our technique were positive for TSG101, CD9, CD63, and CD81, indicating the presence of exosomal marker proteins (Fig. 5 and Additional file 1).

Discussion Exosomes are released by virtually every cell type in the body into biological fluids in vivo and into cell culture conditioned media in vitro [29,30]. Exosomes have been shown to be key mediators of cell-to-cell communication, delivering a distinct cargo of lipids, proteins and nucleic acids that reflects their cell of origin [31,32]. As a new class of biomarker, exosomes have been widely used in the diagnosis and treatment of human diseases, but there is little research in the corresponding fields of companion-animal medicine. Research interest in exosomes is continuously increasing; however, the lack of standard methods for isolation and quantification limits the reliability and reproducibility of exosome use [33,34].
This study provided a method based on differential centrifugation for exosome isolation from 3 different biofluids from feline samples, laying a foundation for the application of exosomes in disease diagnosis and treatment of pet cats in the future. The differential centrifugation method exploits the differences in size and density between exosomes and other components of the sample: through a series of spins with different centrifugal forces and durations, non-exosomal material is gradually removed by precipitation, and the exosomes are finally pelleted by ultracentrifugation and re-suspended [35,36]. Ultracentrifugation is the most widely used method for exosome isolation and was once called the "gold standard" for exosome preparation [37,38]. Due to its simple operation and stable separation performance, more than half of exosome-related studies use this method to extract exosomes [39]. In this study, the ultrastructure, particle size and surface markers of exosomes were identified by transmission electron microscopy, flow nano analyzer and western blot. The results showed that the three exosome preparations consisted of round or elliptical vesicles surrounded by membranous structures, similar in shape to those previously described in mammals. Of the three exosome samples, the particle size of Urine-exo measured by the flow nano analyzer was the largest and that of Plasma-exo the smallest, but all fell within the range of 30-100 nm. Compared with the plasma and urine samples, the number of exosomes recovered from the MSC cell culture medium was significantly lower. This may be because the 50 ml volume of cell culture medium is too small, and a larger volume of medium is needed to obtain a higher yield of exosomes. Tetraspanins (including the CD81, CD63 and CD9 proteins) are common specific markers for extracellular vesicles such as exosomes and were suggested by the International Society of Extracellular Vesicles (ISEV) for the identification of exosomes [27]. Tumor Susceptibility Gene 101 (TSG101), a cytosolic protein involved in the multivesicular body formation step of exosome biogenesis, is considered another important exosome marker [40]. Our western blotting results showed that all marker proteins were detected as positive in exosomes from the 3 different biofluids. However, all protein signals of MSC-exo were weaker than those of the plasma and urine samples, probably because the number of exosomes in them is lower.

Fig. 3 Transmission electron microscopy of exosomes from feline samples. A representative TEM image of isolated exosomes from feline adipose-derived mesenchymal stem cell culture medium (A), from feline plasma (B), and from feline urine (C). Exosomes isolated by differential ultracentrifugation were cup-shaped and 30 to 100 nm in size. Fig. 4 The size and concentration of feline sample-derived exosomes measured by flow nano analyzer. A Nano track analysis size distribution of exosomes isolated from feline adipose-derived mesenchymal stem cell culture medium, feline plasma, and feline urine. B Diameter of isolated particles (exosomes). The mean diameters of exosomes from feline adipose-derived mesenchymal stem cell culture medium, plasma, and urine were 74.76 nm, 66.62 nm, and 72.88 nm, respectively. C Counts of particles (exosomes). The concentrations of exosomes from feline adipose-derived mesenchymal stem cell culture medium, plasma, and urine were 2.62 × 10^10/ml, 6.42 × 10^10/ml, and 8.49 × 10^11/ml, respectively.
Therefore, combining the above results, we demonstrate that the exosome isolation methods we established are feasible and effective, allowing the nanoparticles to be analysed in downstream applications.

Conclusions Overall, our results demonstrate the feasibility of easily isolating exosomes from the supernatants of feline adipose-derived mesenchymal stem cells, as well as from feline plasma and urine. This method for isolating exosomes from feline samples can be used for further diagnostic and therapeutic applications in veterinary pre-clinical and clinical studies.

Isolation, culture and identification of adipose-derived mesenchymal stem cells Abdominal subcutaneous adipose tissues were collected aseptically at the Affiliated Animal Hospital, Department of Veterinary Medicine of Foshan University. The tissues were cut into blocks about 1 mm² in size and digested with 1 mg/mL collagenase type I at 37°C for 2-3 h. The digest was filtered through a 200-mesh cell strainer and centrifuged at 800×g for 5 min to collect AD-MSCs. Approximately 5000 isolated suspended cells per cm² were transferred to a cell culture flask (Corning, USA) in Dulbecco's Modified Eagle's Medium supplemented with 10% exosome-free Fetal Bovine Serum (FBS, Biological Industries, Israel), 1% Pen-Strep (Gibco, USA), and 1% L-glutamine (Gibco, USA), and placed in a humidified incubator at 37°C containing 5% CO₂. After 24 h, the medium was replaced for the first time to remove most of the blood cells, and replaced every 3 d thereafter. AD-MSCs were digested with 0.25% trypsin and passaged routinely when 80-90% confluence was reached. The AD-MSCs were characterized by multipotential differentiation and flow cytometry analysis. In vitro adipogenic and osteogenic differentiation were examined using the MSC Adipogenic Differentiation Kit (Cyanogen, China) and the MSC Osteogenic Differentiation Kit (Cyanogen, China) following the manufacturer's protocol for each kit. Cells were stained with Oil Red O solution to assess adipogenic differentiation and with alizarin red solution to assess osteogenic differentiation.

Preparation of cell culture medium samples FBS added to the cell culture medium was depleted of exosomes by ultracentrifugation at 120,000×g overnight at 4°C prior to use. 50-80% confluent AD-MSCs at passage 2-5 were washed twice in PBS and further cultured in exosome-free medium as described above. Briefly, cell culture medium was harvested after 48 h of incubation with exosome-free medium and stored at −80°C for subsequent experiments.

Preparation of plasma samples Samples were pooled from 3 female and 2 male cats presented at the Affiliated Animal Hospital, Department of Veterinary Medicine of Foshan University. Blood samples were collected into collection tubes containing anticoagulant, and the cellular components were removed by centrifugation (800×g, 4°C, 15 min). The supernatant was diluted with an equal volume of phosphate-buffered saline (1:1) and stored at −80°C for subsequent experiments.

Preparation of urine samples Urine samples were pooled from 1 female and 2 male cats presented at the Affiliated Animal Hospital, Department of Veterinary Medicine of Foshan University. Samples were collected into tubes and stored at −80°C for subsequent experiments.

Isolation of exosomes Exosomes were isolated by differential centrifugation.
Briefly, cell culture medium (50 mL) was centrifuged at 300×g for 10 min at 4°C to remove dead cells, followed by centrifugation at 12,000×g for 30 min at 4°C to remove cell debris. The supernatant was collected and filtered through 0.22 μm filters (Merck Millipore, USA) to remove contaminating microvesicles larger than 200 nm. The filtered supernatant was then transferred to new polycarbonate tubes (topped up with PBS if not completely full) and ultracentrifuged (Beckman Coulter XL-90, SW28Ti rotor; Beckman Coulter, USA) at 100,000×g for 70 min at 4°C. The supernatant was discarded and, for maximal exosome retrieval, the exosome-enriched pellet was resuspended repeatedly in 100 μl PBS. Diluted plasma samples (5 ml) were centrifuged at 12,000×g for 30 min at 4°C to remove cell debris. The supernatant was transferred to new ultracentrifuge tubes (topped up with PBS if not completely full) and ultracentrifuged at 50,000×g for 70 min at 4°C to remove large proteins and microvesicles. The supernatant was then ultracentrifuged at 100,000×g for 70 min at 4°C, the supernatant was discarded and, for maximal exosome retrieval, the exosome-enriched pellet was resuspended repeatedly in 100 μl PBS. Urine samples (15 ml) were centrifuged at 300×g for 10 min at 4°C to remove dead cells, followed by centrifugation at 12,000×g for 30 min at 4°C to remove cell debris. The supernatant was transferred to new ultracentrifuge tubes (topped up with PBS if not completely full) and ultracentrifuged at 50,000×g for 70 min at 4°C to remove large proteins and microvesicles. The supernatant was then ultracentrifuged at 100,000×g for 70 min at 4°C, the supernatant was discarded and, for maximal exosome retrieval, the exosome-enriched pellet was resuspended repeatedly in 100 μl PBS.

Transmission electron microscopy (TEM) Exosome samples were diluted in PBS, dropped onto a carbon-coated copper grid, and then stained with 1% uranyl acetate for 1 min. Grids were imaged under a Hitachi H-7650 transmission electron microscope.

Flow nano analyzer Exosome samples were diluted 1:100 and analyzed using the Flow Nano Analyzer (NanoFCM Inc.) according to the manufacturer's protocol. Briefly, the lasers were calibrated using 200 nm control beads (NanoFCM Inc.), which were then analyzed as a reference for particle concentration. Additionally, a mixture of different-sized beads (NanoFCM Inc.) was analyzed to set a reference for size distribution.
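The three differential-centrifugation workflows above share a common pattern of increasing g-forces. Below is a minimal sketch, not part of the published protocol, that encodes the spin schedules as data so the workflows can be compared or printed as a bench checklist; the step values are transcribed from the text above:

```python
# Hypothetical helper (not from the paper): the spin schedules above encoded
# as data. All spins are performed at 4 C.
from dataclasses import dataclass

@dataclass
class Spin:
    g_force: int      # relative centrifugal force (x g)
    minutes: int
    purpose: str

PROTOCOLS = {
    "cell culture medium (50 mL)": [
        Spin(300, 10, "remove dead cells"),
        Spin(12_000, 30, "remove cell debris"),   # followed by 0.22 um filtration
        Spin(100_000, 70, "pellet exosomes"),
    ],
    "diluted plasma (5 mL)": [
        Spin(12_000, 30, "remove cell debris"),
        Spin(50_000, 70, "remove large proteins/microvesicles"),
        Spin(100_000, 70, "pellet exosomes"),
    ],
    "urine (15 mL)": [
        Spin(300, 10, "remove dead cells"),
        Spin(12_000, 30, "remove cell debris"),
        Spin(50_000, 70, "remove large proteins/microvesicles"),
        Spin(100_000, 70, "pellet exosomes"),
    ],
}

for sample, steps in PROTOCOLS.items():
    print(sample)
    for i, s in enumerate(steps, 1):
        print(f"  step {i}: {s.g_force:>7,} x g for {s.minutes} min  ({s.purpose})")
```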
2021-05-08T00:02:55.509Z
2021-02-25T00:00:00.000
{ "year": 2021, "sha1": "e7749856c65ddee0a4204b892be9c5268bf5b347", "oa_license": "CCBY", "oa_url": "https://bmcvetres.biomedcentral.com/track/pdf/10.1186/s12917-021-02960-4", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e46a862415e34d4354dd17db676a61eea19e5b9c", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
6712577
pes2o/s2orc
v3-fos-license
Automatic Context-Specific Subnetwork Discovery from Large Interaction Networks Genes act in concert via specific networks to drive various biological processes, including the progression of diseases such as cancer. Under different phenotypes, different subsets of the gene members of a network participate in a biological process. Single-gene analyses are less effective in identifying such core gene members (subnetworks) within a gene set/network than gene set/network-based analyses. Hence, it is useful to identify a discriminative classifier by focusing on the subnetworks that correspond to different phenotypes. Here we present a novel algorithm to automatically discover the important subnetworks of closely interacting molecules that differentiate between two phenotypes (contexts) using gene expression profiles. We name it COSSY (COntext-Specific Subnetwork discoverY). It is a non-greedy algorithm and thus unlikely to suffer from local optima. COSSY works for any interaction network regardless of the network topology. One added benefit of COSSY is that it can also be used as a highly accurate classification platform which produces a set of interpretable features.

Automatic Context-Specific Subnetwork Discovery from Large Interaction Networks (Supporting Information) Ashis Saha, Aik Choon Tan, Jaewoo Kang

S1. Estimated t-score The Welch's t-test is a widely used metric for measuring the differential expression of a probe or gene (see Eq. S1):

t = (X̄+ − X̄−) / sqrt(σ+²/n+ + σ−²/n−),   (S1)

where X̄+ and X̄− are the mean expressions, σ+ and σ− are the standard deviations, and n+ and n− are the numbers of samples of the positive and negative classes, respectively. The higher |t| is, the higher the differential power. We use a slightly modified version of the t-test to avoid the noise of the microarray data. We use the median instead of the mean, and we estimate the standard deviation from the interquartile range (IQR), which is defined as the difference between the upper and lower quartiles. The IQR contains 50% of the data within ½IQR of the median. Our estimation comes from the empirical rule: about 68.2% of the values of a normal distribution lie within 1 standard deviation of the mean. The estimated standard deviation (σ̂) is given by Eq. S2:

σ̂ = (68.2 / 50) × (IQR / 2) ≈ 0.682 × IQR.   (S2)

So, our estimated t-test score (t̂) is given by Eq. S3:

t̂ = (X̃+ − X̃−) / sqrt(σ̂+²/n+ + σ̂−²/n−),   (S3)

where X̃+ and X̃− are the median expressions, σ̂+ and σ̂− are the estimated standard deviations, and n+ and n− are the numbers of samples of the positive and negative classes, respectively. The higher |t̂| is, the higher the differential power. We sort the probes of each MIS according to the absolute value of the estimated t-scores (|t̂|) in decreasing order, and select the top five probes as the representative probeset for the corresponding MIS.

S2. Binary Vote Binary voting is applied when the voting weights for both classes become equal, which is very infrequent. In the binary voting system, each top MIS casts a vote in favor of either the positive or the negative class, i.e., the voting weight for each class is either 1 or 0. As with the weighted voting described in the main paper, binary voting for a new sample is also determined from the closest cluster. The majority class in the closest cluster receives the total vote (weight = 1). If P̂c > N̂c, then Wi(positive) = 1 and Wi(negative) = 0. Similarly, if P̂c < N̂c, then Wi(positive) = 0 and Wi(negative) = 1.
If P̂c = N̂c, the voting weight is determined in the same way from the normalized numbers of positive and negative samples in the next closest cluster to x_new, and so on. If T, the number of voting MISs, is odd, then the total votes in binary voting will never be equal. If W(positive) > W(negative), the class binary(x_new) predicted by binary voting is positive; otherwise, it is negative.

S3. Dataset Download Sources

S4. An Experiment with the Appropriate Range We set the appropriate range, [minRange, maxRange], to generate the molecular interaction subnetworks (MISs). We experimented with different ranges and chose the optimal range producing the highest LOOCV accuracy over the datasets. Initially, we set minRange = 3, 5, 7 and maxRange = 15, 20, 25 for KEGG, and minRange = 5, 7, 10 and maxRange = 15, 20, 25 for STRING. Later, we expanded the range list based on the results observed. The results of the MISs with different appropriate ranges using KEGG and STRING are shown in Tables S2 and S3, respectively. 'Appro. Range' denotes appropriate range. * The optimal appropriate range producing the highest average LOOCV accuracy is shown in bold font.

Figure S1. An illustration of the MIS generation. Consider one example of a connected molecular interaction network and its community dendrogram, as shown in the figure. Let the appropriate range be [5,10]. The size (the total number of leaf nodes) of the dendrogram is 24. As this is greater than maxRange (10), we divide the dendrogram by removing edge E1, leaving two dendrograms (A-Q and R-X). The right dendrogram's (R-X) size is 7 (5 ≤ 7 ≤ 10), so we take it as an appropriate community (C3). However, because the left dendrogram's (A-Q) size is above 10, we divide it again by removing edge E2. We have to divide the dendrogram further by removing edge E3. Thus we get four parts of the original community dendrogram: C1, C2, X1, and C3. Three of them have sizes within the appropriate range [5,10] (C1, C2, and C3), so we take them as appropriate communities. However, because X1's size is less than 5, we discard it. We then assign the nodes in X1 (N, O, P, and Q) individually to their closest communities in the original network. P is 1 hop away from C1, so P is merged with C1; N and O are 1 hop away from C2, so they are merged with C2. In the next iteration, Q is merged with its closest community, C2. Thus, we get three MISs: C1 (A,B,C,D,E,F,P), C2 (G,H,I,J,K,L,M,N,O,Q), and C3 (R,S,T,U,V,W,X).

Figure S2. Topology of two out-of-size MISs generated with an appropriate range of 5-15 from the STRING network. A) An MIS with 46 nodes has a star topology. B) An MIS with 55 nodes is too dense.

Representative probesets (table fragment): U80040_at (ACO2), V00572_at (PGK1), X07834_at (SOD2), Z68129_cds1_at (IDH3G), X65965_s_at (SOD2); AB003177_at (PSMD9), D26599_at (PSMB2), D38047_at (PSMD8), D78151_at (PSMD2), X71874_cds1_at (PSMB10).
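To make the ranking step in S1 concrete, here is a minimal sketch (not the authors' code) of the estimated t-score of Eqs. S1-S3. The 0.682·IQR scaling follows the empirical-rule argument reconstructed above and should be treated as an assumption; rows of `expr` are probes and the class split is given by a boolean mask:

```python
# Minimal sketch of COSSY's estimated t-score (Eqs. S1-S3); not the authors'
# implementation. sigma is estimated from the IQR via the empirical-rule scaling.
import numpy as np

def estimated_t(expr: np.ndarray, positive: np.ndarray) -> np.ndarray:
    """expr: probes x samples matrix; positive: boolean mask over samples."""
    pos, neg = expr[:, positive], expr[:, ~positive]

    def med_and_sd(x):
        q1, med, q3 = np.percentile(x, [25, 50, 75], axis=1)
        sd_hat = (68.2 / 50.0) * (q3 - q1) / 2.0   # assumed IQR-based sigma estimate
        return med, sd_hat

    med_p, sd_p = med_and_sd(pos)
    med_n, sd_n = med_and_sd(neg)
    n_p, n_n = pos.shape[1], neg.shape[1]
    return (med_p - med_n) / np.sqrt(sd_p**2 / n_p + sd_n**2 / n_n)

# Rank the probes of one MIS and keep the top five as its representative probeset.
rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 30))          # toy data: 20 probes, 30 samples
labels = np.arange(30) < 15               # first 15 samples form the positive class
top5 = np.argsort(-np.abs(estimated_t(expr, labels)))[:5]
print("representative probeset (row indices):", top5)
```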
2016-05-12T22:15:10.714Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "c2761e984c014c69786adb063af96597b7d9ad01", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0084227&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "41abb17ea4c96ff20f009748800bcffaffaf32e1", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
164971287
pes2o/s2orc
v3-fos-license
Attribute Based Storage Supporting Secure Deduplication of Encrypted Data in Cloud Attribute-based encryption (ABE) has been widely used in cloud computing, where a data provider outsources his/her encrypted data to a cloud service provider and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system does not support secure deduplication, which is crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is responsible for duplicate detection and a public cloud manages the storage. Compared with prior data deduplication systems, our system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of semantic security for data confidentiality, while existing systems only achieve it by defining a weaker security notion. In addition, we put forth a methodology to modify a ciphertext over one access policy into ciphertexts of the same plaintext but under other access policies, without revealing the underlying plaintext.

EXISTING SYSTEM: The short tag of the file, which is treated as a "proof" for the entire file, is vulnerable to being leaked to outside adversaries because of its relatively small size. If a data owner uploads data that do not already exist in the cloud storage, he is called an initial uploader; if the data already exist, implying that other owners may have uploaded the same data previously, he is called a subsequent uploader.

DISADVANTAGES: With deduplication on the client side, users cannot generate a new tag when they update the file; in this situation, dynamic ownership management would fail. In summary, existing dynamic ownership schemes cannot be extended to the multi-user environment. Whenever data is transformed, concerns arise about potential loss of data: by definition, data deduplication systems store data differently from how it was written, so users are concerned with the integrity of their data. One method for deduplicating data relies on cryptographic hash functions to identify duplicate segments of data. If two different pieces of information generate the same hash value, this is known as a collision. The probability of a collision depends upon the hash function used, and although the probabilities are small, they are always non-zero.

PROPOSED SYSTEM: This project pursues the goal of saving storage space for cloud storage services while providing secure deduplication. Several prior schemes have applied the same deduplication concept; this project, however, organizes the workflow into several distinct modules. If two users upload the same file, the cloud server can discern the equal ciphertexts and store only one copy of them. Authentication is applied at several points for security purposes, ensuring secure deduplication. A data owner wants to outsource data to the cloud and share it with users possessing certain credentials. The Attribute Authority issues every user a decryption key associated with the user's set of attributes. Supporting deduplication in an environment where ownership changes dynamically is considered to be the most important challenge for efficient and secure cloud storage services.
Every time a data provider uploads a file, the cloud is checked for an existing copy in order to save storage. Most prior schemes have been proposed to provide data encryption while still benefiting from a deduplication technique. Every user obtains a secured key from the admin for security purposes; without this key a user cannot download the ciphertext file, and can download only encrypted data. All details are managed and maintained by the Attribute Authority. In this way, any user who downloads the file can, after decryption, check the correctness of the decrypted plaintext by matching it to the corresponding tag. To keep the notation succinct, we use c to denote the combination of the encrypted data and the corresponding access structure.

ADVANTAGES: The system has two advantages. Firstly, it can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. Secondly, it achieves the standard notion of semantic security for data confidentiality, while existing systems only achieve it by defining a weaker security notion. In addition, we put forth a methodology to modify a ciphertext over one access policy into ciphertexts of the same plaintext but under other access policies without revealing the underlying plaintext.

System architecture of attribute-based storage with secure deduplication: Modules: In this project we have the following four modules. The data provider uploads a file to the cloud with a tag, a label and a security key; the proposed scheme preserves integrity against any tag-inconsistency attack, so security is enhanced. The encryption key is derived from the plaintext so that identical plaintexts are encrypted to the same ciphertexts. In this case, if two users upload the same file, the cloud server can discern the equal ciphertexts and store only one copy of them. This, however, may compromise the privacy of the data if the cloud server cannot be fully trusted. The data provider is a client who owns data and wishes to upload it into the cloud storage to save costs. A data owner encrypts the data and outsources it to the cloud storage with its index information, that is, a tag.

Deduplication: Data deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data.
Related and somewhat synonymous terms are intelligent (data) compression and single-instance (data) storage. This technique is used to improve storage utilization and can also be applied to network data transfers to reduce the number of bytes that must be sent. In the deduplication process, unique chunks of data, or byte patterns, are identified and stored during a process of analysis. Deduplication techniques take advantage of data similarity to identify the same data and reduce the storage space. In contrast, encryption algorithms randomize the encrypted files in order to make ciphertext indistinguishable from theoretically random data.

Attribute Authority: The AA issues every user a decryption key associated with the user's set of attributes. At the user side, each user can download an item and decrypt the ciphertext with the attribute-based private key generated by the AA, provided the user's attribute set satisfies the access structure.

RSA Algorithm: RSA is an algorithm used by modern computers to encrypt and decrypt messages. It is an asymmetric cryptographic algorithm; asymmetric means that there are two different keys. This is also called public-key cryptography, because one of the keys can be given to everyone while the other key must be kept private. Its security is based on the fact that finding the factors of a large integer is hard.

ENCRYPTION ALGORITHM: Encryption allows information to be hidden so that it cannot be read without special knowledge (such as a password). This is done with a secret code or cipher. The hidden information is said to be encrypted.

DECRYPTION ALGORITHM: Decryption is a way to change encrypted information back into plaintext; this is the decrypted form. The study of encryption is called cryptography. Cryptanalysis of a simple cipher can be done by hand, whereas complex ciphers require a computer to search for possible keys.

In summary, attribute-based encryption (ABE) has been widely used in cloud computing, where a data provider outsources his/her encrypted data to a cloud service provider and can share the data with users possessing specific credentials (or attributes). However, the standard ABE system does not support secure deduplication, which is crucial for eliminating duplicate copies of identical data in order to save storage space and network bandwidth. In this paper, we present an attribute-based storage system with secure deduplication in a hybrid cloud setting, where a private cloud is responsible for duplicate detection and a public cloud manages the storage. The system can be used to confidentially share data with users by specifying access policies rather than sharing decryption keys. In addition, we put forth a methodology to modify a ciphertext over one access policy into ciphertexts of the same plaintext but under other access policies without revealing the underlying plaintext.
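The deduplication mechanism described above relies on deriving the encryption key from the plaintext itself (convergent encryption), so identical files produce identical ciphertexts and tags. The paper's actual construction is ABE-based; the following is only a minimal convergent-encryption sketch, assuming SHA-256 for key/tag derivation and AES-GCM with a fixed nonce, which is acceptable here only because each derived key encrypts exactly one plaintext:

```python
# Minimal convergent-encryption sketch illustrating tag-based deduplication.
# This is NOT the paper's ABE construction; SHA-256 key/tag derivation and the
# fixed AES-GCM nonce are illustrative assumptions.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = hashlib.sha256(plaintext).digest()          # key derived from the data itself
    ct = AESGCM(key).encrypt(b"\x00" * 12, plaintext, None)
    tag = hashlib.sha256(ct).digest()                 # dedup tag over the ciphertext
    return tag, ct

cloud_store: dict[bytes, bytes] = {}                  # tag -> single stored copy

def upload(plaintext: bytes) -> str:
    tag, ct = convergent_encrypt(plaintext)
    if tag in cloud_store:                            # duplicate detected by the (private) cloud
        return "subsequent uploader: duplicate, nothing stored"
    cloud_store[tag] = ct                             # initial uploader: store one copy
    return "initial uploader: ciphertext stored"

print(upload(b"quarterly report"))   # initial uploader
print(upload(b"quarterly report"))   # same plaintext -> same ciphertext -> deduplicated
print("copies stored:", len(cloud_store))
```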
2019-09-15T03:04:09.262Z
2018-06-30T00:00:00.000
{ "year": 2018, "sha1": "9a9f07354c5d67fe95e35a1e57b3a21deaa12733", "oa_license": "CCBY", "oa_url": "https://www.ijtsrd.com/papers/ijtsrd13014.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2a5b0f064fcc2eff12a36c6b4ebc27e60a40fcf8", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
6386509
pes2o/s2orc
v3-fos-license
Menu Pricing and Learning We address the question of designing dynamic menus to sell experience goods. A dynamic menu consists of a set of price-quantity pairs in each period. The quality of the product is initially unknown, and more information is generated through experimentation. The amount of information in the market is increasing in the total quantity sold in each period, and the firm can control the information flow to the market by adjusting the level of sales. We derive the optimal menu as a function of consumers' beliefs about product quality, and characterize the changes in prices and quantities resulting from information diffusion. The equilibrium menu prices are the result of a dynamic trade-off between immediate gains from trade, information production, and information rents. The firm initially charges lower prices, in order to increase sales above the static optimum, sacrificing short-term gains in order to invest in information. As the market obtains more information, the firm gradually shifts to a policy designed to extract revenue from high-valuation buyers. This policy may eventually exclude low-valuation buyers from the market, even if the product's underlying quality is in fact high. Motivation Learning plays a crucial role in many markets and other strategic environments. In particular, in markets for new products and services, sellers face uncertainty over the product's fit to consumers' needs. Consider, for example, new software products and new online services, such as DVD rentals, data backup, Internet telephony, and Internet access itself. The quality of these products is only revealed to market participants through consumption, as buyers learn from their own experience and from that of others. In these markets, heterogeneity in consumers' willingness to pay for the product creates the opportunity for firms to profitably adopt price discrimination techniques, such as menu pricing. In addition, information about a product's performance is widely and publicly accessible through an increasing number of channels. 1 The availability of such aggregate information in a dynamic environment enables firms to modify their menu prices on the basis of the opinion of their customers. This aspect is particularly relevant in markets for experience goods, because the diffusion of information is endogenous to the behavior of market participants: consumers' purchasing decisions and firms' pricing strategies determine the level of sales, and hence the amount of information conveyed to the market. In this scenario, a forward-looking firm must screen consumers in order to maximize revenues, while taking into account the informational value of sales. By selling additional units of the product (for example, by offering introductory discounts), the firm accelerates the buyers' learning process, thereby trading off (a) the long-run profits that accrue due to the diffusion of information against (b) the maximization of current revenue. In this paper, we address the issue of designing dynamic menus to sell experience goods. We characterize the evolution of menu prices as information about product quality is gradually revealed, and examine the interaction of the screening and learning problems. We develop a dynamic model in which a monopolist in each period offers a menu of contracts to a population of buyers. These buyers have private information about their willingness to pay, providing the firm with an incentive to price differentially.
The quality of the product is unknown initially; more information is generated through experimentation. As purchases are made, both the firm and the consumers observe signals about the product's quality and, as a result, revise their beliefs. The amount of information in the market is increasing in the total quantity sold in each period. As a result, the firm can control the information flow to the market by adjusting the level of sales. Learning about the product occurs faster as more units are sold; hence, the firm might use low introductory prices. The uncertainty about the quality of the product introduces a new dynamic element into the standard trade-off between efficiency and rent extraction. More specifically, the quantity of the product that is supplied to each buyer is determined by the combination of three components. The first of these components is the generation of information. Learning occurs through consumption, and each unit sold provides additional information. Thus the firm wants to sell additional units to gain more information when uncertainty about quality is high and beliefs are more responsive to news. The second component is related to efficiency. As consumers grow more optimistic about the quality of the product, their willingness to pay increases, thereby creating the opportunity for the firm to realize larger gains from trade. Therefore, the firm offers larger quantities in this case. The third component is adverse selection. Positive signals about quality increase the spread in buyers' valuations for the product. This makes the incentive compatibility constraints more difficult to satisfy and induces the firm to offer fewer units to buyers who have a lower willingness to pay. The firm pursues the dual objectives of generating information and screening consumers simultaneously. However, the balance between the two goals shifts over time. Initially, the firm increases the level of sales to all buyers above the static optimum: it sacrifices short-term gains in order to invest in information. As more information is gained, the firm gradually adopts a policy that targets the consumers with the highest valuations, in order to extract more surplus. This policy may eventually exclude low-valuation buyers from the market, even if the product's underlying quality is high. In greater detail, as all consumers become more optimistic about the quality of the product, the cost of providing incentives to high-valuation buyers increases due to the adverse selection effect. This leads the firm to reduce the supply of its product to low-valuation buyers. Consequently, the combination of the learning and screening goals has three main effects: (i) the quantity offered to a low-valuation buyer need not be a monotonic function of her posterior beliefs about the product's quality; (ii) successful products are characterized by a greater price dispersion and a wider variety of available quantities; (iii) for successful products, the firm expands the range of offered quantities through the addition of new options, at both the top and the bottom of the menu. In the model, learning occurs on the basis of aggregate information. More precisely, we assume that each consumer's action (quantity choice) and payoff (experienced quality) is observable to other buyers and to the firm. In other words, all information is publicly available to the market. While this is an important assumption, it suits the purpose of this study for two reasons.
First, in large markets, consumers realize that others' experience is also indicative of the underlying quality of the product and take public information into account. More importantly, the study presented herein is interested in modeling the firm's optimal response to variations in demand that arise from the arrival of new information. As such, it focuses only on information that the firm can use in order to determine its strategies. In an alternative model, demand for the product would be determined by consumers' private experiences, while the firm only observes the market's average experience. In the context of this study, the introduction of private information would add noise to the demand process, but would not alter the qualitative properties of the firm's behavior. We therefore abstract away from further heterogeneity in demand, and consider only the market's observable aggregate experience. Examples The model is well suited to analyzing several different markets. The market for enterprise software provides an interesting application. An emerging contractual arrangement in this industry is given by software-as-a-service (SaaS). Under this contractual form, firms have the option of renting a given number of licenses for the use of a given software product (for example, a customer database system or an online backup program). Larger firms need to rent more licenses, and the renting of more licenses enables the firm to benefit more from a higher quality product. This is so because, in this market, each employee using the software constitutes an experiment for product quality, so that the number of seats may be tied directly to the rate at which information arrives. Moreover, the rental contracts and their corresponding prices can easily be adjusted. Finally, network externalities between firms are not a significant issue in enterprise software, because it is designed for internal use; hence, the private values framework is realistic. 2 As an alternate example, consider the market for online DVD rentals. Companies such as Netflix offer membership plans that charge a fixed monthly fee and specify the number of movies a consumer may rent at the same time. While buyers differ in their personal willingness to pay for watching DVD movies, the quality of the recommender system (suggesting new titles based on each buyer's ratings of other films) is a common component in determining the overall quality of the service. 3 With this interpretation, each movie rented constitutes an informative experiment about the product's quality. It is reasonable to assume that customers with a higher willingness to pay also care more about the fit of the recommendation to their own preferences. Furthermore, both the prices for each plan and the choice of plan made by the consumer can easily be adjusted. Finally, Netflix subscribers exchange information about their experience through a surprisingly large number of channels. 4 This means that information about the overall performance of the service circulates very rapidly. Netflix launched their rental service in 2001 and held a near-monopoly position for several years. Figure 1 reports the menus offered by Netflix over the years 2002 through 2005, that is, immediately before Blockbuster established itself as a serious competitor. In 2002, the Netflix menu offer consisted of two plans, which allowed for the simultaneous rental of two and four titles, respectively. The variety of the plans offered increased over time, as the service soon proved to be a clear success.
5 In 2003 and 2004, Netflix modified its offer of plans to a four-item menu, while raising unit prices across the product line. It added several more options in 2005, while at the same time reducing all prices slightly, possibly due to competitive pressures from Blockbuster. Consistent with the model's predictions, the range of total charges (in dollars per month) went from a minimum of $12 and a maximum of $20 in 2002 to $5 and $48, respectively, in 2005. At the same time, the set of available quantities increased to eight in 2005. Finally, the lowest quantity offered decreased from two rentals at a time in 2002 to one in 2005. 6 Related Literature This study enriches the literature on screening by extending nonlinear pricing techniques beyond the canonical, static environment to a model in which information is revealed over time. It therefore builds upon the classic studies in price discrimination, such as Mussa and Rosen (1978) and Maskin and Riley (1984). At the same time, it is tightly connected to continuous time models in which the flow of information is controlled by one or more of the agents. These papers include the works of Bolton and Harris (1999) and Keller, Rady, and Cripps (2005) on strategic experimentation, Keller and Rady (1999) on experimentation by a monopolist in a changing environment, Moscarini and Smith (2001) on the optimal level of experimentation, and Faingold and Sannikov (2010) on reputation in continuous time. In particular, we use the method of Keller and Rady (1999) to show the existence and uniqueness of a solution to the firm's problem. Our analysis also complements several models of introductory and dynamic pricing under uncertainty about product quality. The main work in this area is due to Bergemann and Välimäki (1997, 2002) and Villas-Boas (2004). In particular, Bergemann and Välimäki (1997, 2002) analyze a duopoly model of price competition where market participants are uncertain about the degree of horizontal or vertical differentiation of the two firms' products, while Bergemann and Välimäki (2006) consider dynamic monopoly pricing in a private values environment. We discuss these papers at length in Section 6. Our paper is also related to the dynamic pricing models in Bose, Orosel, Ottaviani, and Vesterlund (2006, 2008), in which buyers take actions sequentially, based on the history of previous purchases and prices, as well as their private information about a common value component. In contrast, in our model, each buyer's willingness to pay is determined by her own (and others') past experience with the product, and her private information concerns an idiosyncratic component. The problem of generating information through sales was first studied, in the context of a screening model, by Braden and Oren (1994), who introduce uncertainty over the distribution of buyers' willingness to pay. In their model, one buyer arrives in each period, and her choice from the firm's menu provides information about the true distribution of types. Braden and Oren (1994) and our paper share the conclusion that excluding types early on reduces the amount of information generated and is therefore suboptimal. However, in Braden and Oren (1994) information is only obtained by avoiding bunching and exclusion of types. The learning problem is therefore separate from profit maximization, because learning considerations do not affect the quantity levels offered to each buyer. Generating (public) information proves to be beneficial in our context.
This is indeed similar to the findings of Ottaviani and Prat (2001). However, our result is based on the convexity of the firm's profits as a function of the unknown product quality, as opposed to the effect of an affiliated public signal on buyers' information rents. Finally, the techniques used in this study also relate to the models developed by Lewis and Yildirim (2002) and by Boone and Shapiro (2009). In particular, in the dynamic regulation model of Lewis and Yildirim (2002), a planner offers a menu of contracts to a firm whose production costs decrease by a deterministic amount, but where innovation follows a stochastic process. In another closely related contribution, Gärtner (2010) analyzes a two-period regulation model in which the rates of learning-by-doing of different types affect the dynamics of output distortions. In addition to this theoretical body of work, recent empirical literature has attempted to quantify the importance of learning considerations for consumers' dynamic purchasing behavior. In these studies, consumers learn from their individual experience, revise their beliefs about product quality, and consequently modify their choices. A non-exhaustive list of empirical papers on learning and dynamic consumer choice includes Ackerberg (2003), Akçura, Gonul, and Petrova (2004), Crawford and Shum (2005), Erdem and Keane (1996), Goettler and Clay (2010), Gowrisankaran and Rysman (2009), and Israel (2005). From a different perspective, Hitsch (2006) and Song and Chintagunta (2003) analyze learning about demand on the firm's side, but focus on investment decisions, such as product adoption or exit, not on pricing strategies. The study reported herein complements this literature with a theoretical framework for nonlinear pricing, in which firms' learning is just as important as buyers', and in which information is obtained from aggregate experience.

Payoffs We consider a dynamic model with a monopolist firm and a continuum of small consumers. Consumers purchase repeatedly and have multi-unit demands in each period. Each consumer's valuation of the firm's product depends on both a private value and a common value component. We denote by θ an idiosyncratic, private value component, representing the buyer's personal willingness to pay for the product. For each buyer, θ belongs to the interval Θ = [θ_L, θ_H]. The idiosyncratic component θ is the consumer's private information. It is distributed in the population according to a continuously differentiable distribution F(θ). We denote by ω a common value component that represents the quality of the match between the product and the needs of the market. This parameter may only take one of two values, ω ∈ {ω_L, ω_H} with 0 < ω_L < ω_H. Each consumer's valuation for q units of a product is a separable function of the product's quality ω and of the consumer's willingness to pay θ. The complete information utility of a consumer with willingness to pay θ, who purchases q units of a product of quality ω, for a total charge of p, is given by

U(θ, ω, q, p) = θ ω u(q) − p.

The function u(q) is assumed to be strictly increasing. As a consequence, the consumer's utility function U(θ, ω, q, p) displays the single crossing property in (θ, q). Furthermore, product quality and personal taste interact multiplicatively. Hence, buyers with a higher willingness to pay benefit more from a higher quality product. We assume that each buyer makes a purchase decision in every period, and that she can freely switch between purchasing different quantities. We normalize each buyer's outside option to zero.
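For completeness, the single crossing claim can be verified in one line under the multiplicative specification above (using the same symbols θ and ω):

```latex
% Single crossing of U(\theta,\omega,q,p)=\theta\,\omega\,u(q)-p in (\theta,q):
% the marginal utility of quantity is strictly increasing in the buyer's type,
\[
\frac{\partial^2 U}{\partial \theta\,\partial q}
  \;=\; \omega\, u'(q) \;>\; 0
  \qquad \text{since } \omega > 0 \text{ and } u'(q) > 0,
\]
% so higher-\theta buyers have steeper indifference curves in (q,p)-space,
% which is what makes quantity-based screening feasible.
```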
Finally, we assume that production costs are given by a strictly increasing function c(q). Product quality is unknown initially to both the firm and the consumers, and all market participants share the common prior belief

α_0 = Pr(ω = ω_H).

At each time t, the expected product quality, given current beliefs α_t, is denoted by

m(α_t) = α_t ω_H + (1 − α_t) ω_L.

In each period, a monopolist posts a menu of price-quantity pairs. We require the firm to price anonymously, and we allow for prices and quantities to be adjusted flexibly. In a direct mechanism, the firm's strategy is a pair of piecewise differentiable functions q_t : Θ → R_+ and p_t : Θ → R_+ in each period. These functions determine the quantity and the total charges assigned to each buyer θ. Suppose each buyer θ purchases quantity q_t(θ) and pays total charges of p_t(θ). The firm then obtains flow profits of

π_t = ∫_Θ (p_t(θ) − c(q_t(θ))) dF(θ).

The social gains from trade that are realized by selling quantity q to type θ, when the product quality is ω, are given by θ ω u(q) − c(q). We assume it is always efficient to sell a positive quantity level to every buyer. We now define the virtual valuation of buyer θ as

ν(θ) = θ − (1 − F(θ))/f(θ).

Under Assumption 1, virtual valuations are increasing in θ. We then consider the virtual surplus, ν(θ) ω u(q) − c(q), and we introduce the following assumption. Assumption 3 (Concave Virtual Surplus) 1. The virtual surplus is strictly concave in q for all θ and ω. 2. For all θ and ω, lim_{q→∞} (ν(θ) ω u′(q) − c′(q)) < 0. We use this assumption in our dynamic analysis to ensure that the optimal quantity is bounded, and can be characterized by a first order condition whenever strictly positive. It is satisfied, for example, by the model in Mussa and Rosen (1978). An alternative assumption, which does not require concavity of the virtual surplus, is the following. This is the case, for example, in the model of Maskin and Riley (1984). This assumption ensures that the monopolist does not want to serve types with a negative virtual valuation, and that the first order conditions are sufficient whenever quantity provision is positive. Our results hold under either of these assumptions. In this sense, our model can accommodate (among others) both the Mussa and Rosen (1978) specification with linear utility and convex cost, and the Maskin and Riley (1984) formulation with concave utility and constant marginal cost. Information and Learning Information about product quality may only be obtained through consumption. We now provide a formal treatment of the aggregate market experience and the associated law of motion of beliefs. In particular, we adapt the model in Bergemann and Välimäki (1997) to allow for multi-unit demand. We begin with a finite number of buyers and discrete time. We suggest a model in which the informativeness of the aggregate market experience is held constant as the number of buyers increases. In other words, each additional buyer does not lead to a larger, more informative market. Instead, we interpret a larger number of buyers as a more fragmented consumer population, in which each individual buyer purchases units of a smaller size. Formally, this is achieved by decreasing the informativeness of each individual buyer's experience proportionally to the size of the market. Let K be the number of buyers. Each buyer's willingness-to-pay, θ_i, is independently and identically drawn from the distribution F(θ_i). Each unit j purchased by buyer i generates a normally distributed signal x̃_ij ~ N(ω/K, σ²/K). We refer to the realization of x̃_ij, denoted by x_ij, as the experience of buyer i with unit j of the product.
We assume that the individual experience of each buyer i, x_i, is observed by all market participants, i.e., all the buyers and the seller. If each buyer i consumes a quantity level q(θ_i), the market experience is the sum of the individual experiences:

X_K = Σ_{i=1}^{K} x_i = Σ_{i=1}^{K} Σ_{j=1}^{q(θ_i)} x_ij.

The market experience is now normally distributed with mean ω Q_K / K and variance σ² Q_K / K, where Q_K = Σ_i q(θ_i) denotes the total number of units sold. An important feature of this construction is that, as we increase the number of buyers (i.e., the number of draws, K, from the distribution F(θ)), the realized distribution of willingness-to-pay will coincide with the theoretical distribution, and is thus deterministic. In particular, we obtain that the average number of units converges to the expected purchased quantity,

Q_K / K → Q = ∫_Θ q(θ) dF(θ).

The market experience, on the other hand, will remain a true random variable. Thus, in the limit for K → ∞, the aggregate market experience is normally distributed with mean ωQ and variance σ²Q. As we take the continuous-time limit and use subscripts for time dependence, the flow of new information follows a Brownian motion with drift ωQ_t and variance σ²Q_t:

dX_t = ω Q_t dt + σ √(Q_t) dZ_t.

With this structure for the information flow, one can use the filtering equations (see Theorem 9.1 in Liptser and Shiryaev (1977)) to derive the evolution of the posterior beliefs α_t:

dα_t = α_t (1 − α_t) ((ω_H − ω_L)/σ) √(Q_t) dZ̄_t,   (1)

where Z̄_t is a standard Brownian motion with respect to the market participants' filtration. Equivalently, under our two-point prior assumption, one can use Bayes' rule to compute the posterior belief α_{t+Δt}, and take the limit for Δt → 0 (see, for example, the steps in Bolton and Harris (2000)). In our model, the information contributed by the experience of each buyer becomes infinitely small. An alternative model would keep the informativeness of each individual signal constant. As a result, the law of large numbers would imply that the quality of the product is learned instantaneously. In contrast, by holding the aggregate market experience constant as we increase the number of buyers, we intend to capture the relevant features of new online services with a large diffusion, such as Netflix. In particular, at each instant, the quality of the product is not perfectly revealed, and the individual's experience is of negligible importance, relative to the entire market's. Finally, we point out that several other signal structures would also preserve the imperfect informativeness of the market experience. For example, we could consider common quality shocks to the individual experiences, under the assumption that larger production levels reduce the variance of these shocks. To summarize, information is imperfect but symmetric at all points in time. The posterior beliefs follow a martingale, and as a result the process has zero drift. The level of aggregate sales determines the total number of experiments with the product, and hence the rate at which the firm and the consumers learn about its quality. We now define the following function:

Σ(α_t) = (α_t (1 − α_t) (ω_H − ω_L)/σ)².

This function captures the marginal contribution of each unit sold to the variance of the belief process, (dα_t)² = Q_t Σ(α_t) dt. The variance is increasing in the degree of dispersion α_t(1 − α_t) and in the signal-to-noise ratio (ω_H − ω_L)/σ. Posterior beliefs evolve more quickly when current uncertainty is high and when signals are precise. Finally, we stress that the changes in beliefs are determined endogenously, because the total quantity sold Q_t depends on the firm's pricing and on the consumers' purchasing decisions. In other words, the firm can control the rate of information flow to the market by adjusting the level of sales.
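To see how the level of sales governs the speed of learning, here is a minimal Euler-Maruyama simulation of the belief dynamics in (1) as reconstructed above; the parameter values are illustrative assumptions, not the paper's calibration:

```python
# Minimal sketch: Euler-Maruyama simulation of the belief process
# d(alpha) = alpha(1-alpha) * ((wH-wL)/sigma) * sqrt(Q) dZbar.
# Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
wL, wH, sigma = 1.0, 2.0, 1.5
dt, T = 1e-3, 5.0
n = int(T / dt)

def simulate(Q: float, alpha0: float = 0.5) -> np.ndarray:
    """Belief path under a constant aggregate sales level Q."""
    a = np.empty(n); a[0] = alpha0
    for t in range(1, n):
        vol = a[t-1] * (1 - a[t-1]) * (wH - wL) / sigma * np.sqrt(Q)
        a[t] = np.clip(a[t-1] + vol * np.sqrt(dt) * rng.standard_normal(), 0.0, 1.0)
    return a

for Q in (0.5, 2.0, 8.0):
    path = simulate(Q)
    # Higher sales -> more experiments -> beliefs settle toward 0 or 1 faster,
    # while remaining a martingale (zero drift) at every instant.
    print(f"Q={Q:>4}: final belief {path[-1]:.3f}, sample-path std {path.std():.3f}")
```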
Equilibrium Analysis As a …rst step in the equilibrium analysis, we characterize the incentive-compatible menus of contracts. Each individual buyer has a negligible impact on the information ‡ow. Therefore, each buyer chooses the price-quantity pair that maximizes her expected utility, given her beliefs t and the …rm's menu o¤er. Because quality may only take one of two values, the current posterior belief t is a su¢ cient statistic for the …rm's problem at every point in time. Therefore, we denote by (q ( t ; ) ; p ( t ; )) the menu o¤ered by the …rm when the posterior beliefs are given by t . We also denote by U ( t ; ; 0 ) the expected utility of a buyer with willingness to pay who purchases the item (q ( t ; 0 ) ; p ( t ; 0 )) intended for a buyer of type 0 : Let U ( t ; ) = U ( t ; ; ) denote buyer 's indirect utility when reporting truthfully. The incentive compatibility constraints for the …rm's problem are then given by the consumer's …rst-and second-order conditions for truthful revelation. By standard arguments, these are equivalent to: Equation (4) shows that the …rm must concede higher information rents when beliefs become more optimistic. This e¤ect is due to the complementarity between product quality and buyers'willingness to pay. Buyers'valuations depend positively on the posterior beliefs t , hence positive news allow the …rm to charge higher prices. However, as t increases, the di¤erence between any two buyers' willingness to pay also increases, thereby creating stronger incentives to misreport one's type. This means that for high values of t , the incentive compatibility constraints are more di¢ cult to satisfy. Finally, the buyers'participation constraints are given by U ( t ; ) 0, for all t and . Myopic Benchmark Consider the problem of an impatient (myopic) …rm, who only maximizes the current ‡ow pro…ts. By expressing total charges p ( t ; ) in terms of the buyers'indirect utilities U ( t ; ), we can rewrite the …rm's ‡ow pro…ts as The myopic …rm maximizes ( t ; q; U ), subject to the incentive compatibility constraints (4) and (5) and to the participation constraint (6). Following the standard procedure for onedimensional screening problems, we substitute constraint (4) in the objective, and integrate by parts. As a result, we can express the …rm's ‡ow pro…ts as a function of only the posterior probability t and quantities q ( t ; ). Assumption 1 ensures that constraint (5) holds in equilibrium. The …rm's ‡ow pro…ts are given by where ( ) denotes the virtual valuation, and the myopic equilibrium pro…t function is de…ned as The myopic solution is obtained by maximizing (7) pointwise. The …rst-order condition for the provision of quantity is given by The myopic equilibrium quantity level q m ( t ; ) is then given by the solution to (8), whenever this solution is positive, and by zero otherwise. The …rm equalizes marginal cost and the buyer's marginal utility. The expected product quality ( t ) acts as a scale parameter for marginal utilities, and hence for equilibrium quantity provision. The following proposition describes the key properties of the myopic solution. Proposition 1 (Myopic Solution) 1. Whenever positive, the myopic quantity q m ( t ; ) is strictly increasing in t and . 2. The myopic pro…t function m ( t ) is strictly increasing and strictly convex in t . The convexity of the myopic pro…t function has implications for the …rm's incentives to learn about the quality of its product. This result is quite intuitive. 
More optimistic beliefs improve every buyer's willingness to pay, and the …rm can charge higher unit prices. Moreover, the …rm …nds it pro…table to sell a larger number of units. As a result, the myopic pro…t function increases more than linearly with the posterior beliefs t . Therefore, a myopic …rm would be willing to pay in order to enter a fair bet between the two states = L and = H . De…ne the expected payo¤ of this lottery as the complete information average For all interior t , we then have Dynamic Solution In order to design the dynamically optimal menu prices, we consider a sequence of quantity supply functions q t : ! R + . The incentive compatibility and participation constraints (4)-(6) uniquely determine the corresponding sequence of total charges p t : ! R + . Therefore, we can express the (forward-looking) …rm's objective function as Our …rst result is instrumental to determining whether the …rm assigns a positive value to information. Theorem 1 (Convexity of the Value Function) The value function V is continuous and convex. The intuition for Theorem 1 is straightforward. For a …xed quantity supply function, pro…ts are linear in t . Clearly, the …rm can improve on these linear pro…ts by reacting to information. The main implication of Theorem 1 is that the forward-looking …rm is willing to give up some revenue in the short run (i.e. to depart from m ( t )), in exchange for more information generated through sales. The evolution of the posterior beliefs t is controlled by the law of motion (1). Using the law of motion for beliefs and Itô's Lemma, we can write the Hamilton-Jacobi-Bellman (HJB) equation for the …rm's problem as The …rm's value function di¤ers from the myopic pro…t function only through the term , which is positive by Theorem 1, and proportional to the total quantity sold Q. Remember that each unit sold provides an informative signal whose e¤ect on the posterior beliefs depends on the variance ( t ). Therefore, the term Q ( t ) can be interpreted as the amount of information generated through sales. The term V 00 ( t ) represents the marginal value of information. As such, it determines the …rm's incentives to increase the speed at which customers learn about the product's quality. Note that information has no value when t = 0 and t = 1, because beliefs no longer change in those cases. Writing the HJB equation (11) more explicitly, we obtain an expression that may be maximized pointwise: We now prove the existence of a solution to this problem. We then return to the optimal menu of contracts and illustrate the role of the value of information in determining the equilibrium prices and quantities. Our approach consists of turning the HJB equation into a second-order di¤erential equation with the two boundary conditions rV (0) = m (0) and rV (1) = m (1). Since t is the independent variable in our boundary value problem, we drop time subscripts. Theorem 2 (Existence and Uniqueness) 1. There exists a unique solution V ( ) to the HJB equation (12). V ( ) is C 2 and satis…es m ( ) rV ( ) v ( ) for all . 2. The policy function q ( ; ) maximizing the right hand side of (12) pointwise is the unique optimal control. It is continuous and di¤erentiable in and . The proof of (1.) and (2.) adapts the method in Keller and Rady (1999), which is based on super-and subsolutions to a two-point boundary-value problem and deals with the singularities of the di¤erential equation at both ends of the unit interval. 8 The proof of (3.) uses a standard veri…cation theorem. 
We now derive some elementary properties of the policy function. In the following comparative statics result, we normalize the …rm's payo¤s by focusing on the return (or annuity) function rV ( ). Proposition 2 (Value of Information) 1. The return function rV ( ) and the value of information ( ) V 00 ( ) are decreasing in and in r, for all . 2. Fix an , and consider all pairs ( L ; H ) such that H + (1 ) L = for some > 0. Then rV ( ) and ( ) V 00 ( ) are increasing in the di¤erence H L . As expected, the precision of the individual signals and the …rm's patience level increase the value of information. Proposition (2) also shows that the returns to experimentation increase in the relevance of the learning process, as measured by the di¤erence in the possible quality levels. As we discuss in the next section, higher returns to experimentation induce the monopolist to increase the quantity sold. Properties of the Equilibrium Menus Pointwise maximization of the …rm's objective (the right-hand side of equation (12)) yields an intuitive expression for the optimal quantity provision. In particular, the equilibrium quantities q ( t ; ) are given by the solution to the …rst-order condition whenever this solution is positive, and by zero otherwise. This condition di¤ers from that of the myopic …rm because of the marginal value of information. In particular, the forwardlooking …rm equalizes marginal cost to the buyer's marginal utility, augmented by the marginal value of information ( t ) V 00 ( t ). Notice that the …rm's incentives to experiment, captured by ( t ) V 00 ( t ), are uniform across buyers, because this term does not depend on the buyer's type . We summarize our comparative statics results for the forward-looking …rm's problem in the following proposition: Proposition 3 (Equilibrium Quantities) 1. Quantities q ( ; ) are everywhere higher than in the myopic solution (q m ( ; )). Proposition 3 shows that the …rm induces market experimentation by selling quantities in excess of the myopic optima for all and . This leads inter alia to a (weakly) larger set of types receiving positive quantities in the dynamic solution than in the myopic one. Combining the results of Propositions 2 and 3, we obtain that the number of additional units sold is increasing in the …rm's degree of patience, and in the precision of the signals. However, the value of information ( ) V 00 ( ), as well as quantities and levels of market coverage, are typically not monotonic in the posterior beliefs . In particular, the …rm has no incentive to experiment when beliefs are degenerate and 2 f0; 1g. The quantities q ( ; ) in the direct mechanism can be linked to the actual price-quantity menus o¤ered by the …rm in an indirect mechanism. We do so through a nonlinear price functionp ( ; q). This function de…nes the total amount charged by the …rm for q units of the product, when the posterior beliefs are given by . Consumers maximize their utility given the …rm's current menu o¤er. This allows us to characterize the marginal prices charged on each unit via the buyer's …rst-order condition In equation (14), ( ; q) denotes the buyer who purchases quantity q in equilibrium. Since any quantity sold is de…ned by equation (13) for some type , the equilibrium marginal prices are given byp Proposition 4 (Marginal Prices) 1. Marginal pricesp q ( ; q) are everywhere lower than in the myopic benchmark. A precise characterization of prices requires knowledge of the distribution of types F ( ). 
However, regardless of the distribution of types, Proposition 4 shows that experimentation reduces the marginal prices paid by each consumer. The …rm is willing to give up revenue (by lowering prices) to further experimentation, while the consumer has no incentives to pay for information. To summarize our results so far, the solution to the …rm's dynamic optimization problem implies higher sales and lower marginal prices, compared to the myopic benchmark. The level of experimentation depends positively on the …rm's patience level and on the precision of the available signals. It also depends positively on the di¤erence between the two possible levels of quality of the product, but it is not monotonic in consumers'posterior beliefs about quality. Linear-Quadratic Model We now specify our model to the Mussa and Rosen (1978) functional form assumptions of linear utility (u (q) = q) and quadratic costs (c (q) = q 2 =2). These assumptions allow us to identify separately the role of the value of information in determining the changes in the equilibrium menus as a function of beliefs. 9 In particular, the …rst-order condition (13) now provides an explicit expression for the provision of quantity In this section, we …rst characterize the solution for a setting in which all buyers participate, and the …rm has a positive discount rate r. We turn to the undiscounted limit to describe the e¤ects of information in …ner detail. We then discuss the properties of the equilibrium menu that extend to the case of small positive discounting. Finally, we extend the analysis to the case of imperfect market coverage. Full Market Coverage and Positive Discounting Full market coverage is obtained in equilibrium when ( L ) > 0. In this case, the myopic solution is given by In addition, it is immediate to show that the myopic pro…ts m are a quadratic function of . The following proposition relates the equilibrium menus to the myopic benchmark. Proposition 5 (Quantities and Prices) 1. The equilibrium quantities and prices are given by The marginal value of information is given by A few remarks are in order. First, each type receives ( ) V 00 ( ) units over and above the myopic quantity supply. These additional units constitute the marginal level of experimentation by the …rm, which is constant across buyers . However, prices only exceed the corresponding myopic level p m ( ; ) by ( ) L ( ) V 00 ( ). This means that each additional unit sold is priced uniformly at ( ) L . In other words, the …rm charges the lowest type's willingness to pay. Hence, it cannot extract any more surplus on the additional units sold. This is a consequence of the fact that buyers are not willing to pay for experimentation and need to be o¤ered a price that is low enough to convince them to purchase more. Second, the number of additional units ( ) V 00 ( ) need not increase monotonically in the posterior beliefs . Third, the marginal value of information does not depend solely on the di¤erence rV ( ) m ( ), but also directly on the current level of demand, which is captured by This term is equal to the total quantity sold by the myopic seller, which can be viewed as the "default" amount of experimentation. In other words, the interaction of the monopolist's screening and learning goals yields optimal quantities that depend on the speed of the learning process in the absence of any additional investment in information production. 
A further implication of Proposition 5 is that the e¤ects of new information on the supplied quantities depend on the consumer's willingness to pay . Combining the …rst result of Proposition 5 with equation (16), the equilibrium quantity levels may be written as Consequently, types with a virtual valuation above the average E [ ] bene…t more from an increase in the posterior than those with below-average virtual valuations. At the same time, di¤erences between the quantities o¤ered to di¤erent buyers do not depend on the level of experimentation. The following proposition focuses on the variations in the price-quantity pairs o¤ered to each consumer. This result is informative of the dynamics of the variety of the equilibrium menu. If we let = H and 0 = L , we obtain that increases in the posterior beliefs bring about a wider range of options, in terms of o¤ered quantities, and a higher dispersion of total charges. We would now like to characterize explicitly the behavior of the equilibrium menus as a function of . This requires solving the di¤erential equation (16) for the …rm's value function. Unfortunately, this di¤erential equation is a second-order, nonlinear problem that does not have an analytical solution. However, we are able to obtain closed-form solutions by analyzing the undiscounted version of the …rm's problem. No Discounting For the analysis of the undiscounted version of the problem, we adopt the strong long-run average criterion. 10 This approach identi…es the limit of the discounted policy functions as the discount rate approaches zero. The solution provided through the strong long-run average criterion therefore preserves the qualitative properties of the optimal solution for small discount rates. This criterion also allows us to preserve the recursive formulation of the problem and to obtain analytical solutions for the policy function. With reference to our model, the strong long-run average criterion may be summarized as follows. By the martingale convergence theorem, beliefs converge to either L or H . In the limit for r ! 0, the return function rV ( ) converges to the complete information average payo¤ v ( ) de…ned in (9). However, many policy functions attain the long-run average value v ( ), independently of their …nite time properties. Dutta (1991) considers the undiscounted stream of payo¤s, net of their long run averages, Dutta (1991) proves that the policy q t maximizing (18) represents the limit for r ! 0 of the policy functions that maximize the discounted stream of payo¤s (10). The strong longrun average solution combines the …nite time properties of catching-up optimality and the recursive representation of such criteria as the limit of the means. We can therefore write the undiscounted analog of the HJB equation (11) as where now ( ) V 00 ( ) represents the limit marginal value of information. This value does not vanish as r ! 0. On the contrary, Proposition 2 shows that the value of information increases as the …rm's discount rate decreases. In the linear-quadratic, undiscounted case, we can solve for this value in closed form, and express the equilibrium quantities as This expression is obtained by substituting (17) into equation (19), and then solving for The key properties of the equilibrium quantity supply are given in the next Theorem. Theorem 3 (Undiscounted Equilibrium Quantities) 1. The equilibrium quantities q ( ; ) are strictly concave in for all : 2. 
There exists a threshold type~ such that q ( ; ) is …rst increasing then decreasing in for all types ~ , and strictly increasing in for all types >~ . The main result of Theorem 3 is that experimentation has buyer-dependent qualitative implications for the evolution of equilibrium quantities. Contrary to the myopic case, a set of types [ L ;~ ] does not always receive greater quantities as the posterior beliefs increase. The threshold type~ identi…ed in Theorem 3 satis…es the following equation: Therefore, the fraction of types who receive nonmonotonic quantities is increasing in (a) the relative di¤erence between the two quality levels ( H L ) = H and (b) the dispersion of buyers' valuations Var [ ( )]. The latter result follows from the fact that the …rm's equilibrium pro…t on each type , given by p ( ; ) c (q ( ; )), is convex in . Therefore, an increase in the spread of the distribution F ( ) improves the …rm's pro…ts, thereby making the learning process more signi…cant. The concavity of equilibrium quantities suggests that experimentation is greater when beliefs about the quality of the product are intermediate. Figure 2(a) con…rms this intuition. In this …gure, we show the quantities supplied to three di¤erent buyers < 0 < 00 as a function of . The lowest type has a zero virtual valuation and would never be served in the myopic case. Figure 2(b) illustrates the equilibrium total charges. Consistent with the result from Proposition 6, the di¤erences between the total charges paid by di¤erent buyers are increasing and convex in . The main properties of the equilibrium menu are best understood by decomposing the implications of the arrival of information into three e¤ects. The …rst e¤ect is related to information value. Each unit sold generates additional value to the …rm by facilitating learning. This e¤ect is strongest when beliefs are intermediate, and uncertainty is highest. Conversely, as beliefs approach zero or one, the value of information declines, and so do the incentives to provide greater quantities. This e¤ect in ‡uences all types in the same way, because the informational content of a unit that is sold is independent of the buyer who purchases it. The second e¤ect is related to e¢ ciency. When positive news arrive, consumers are willing to pay more for each unit; hence, gains from trade increase. This e¤ect is stronger for high consumer types, who bene…t the most from a quality increase. The third e¤ect is related to adverse selection. The di¤erential increase in buyer's valuations tightens the incentive compatibility constraints and increases the information rents. This raises the cost of screening consumers. To understand this, remember that in a two-type static model, the information rent of the high type is equal to U ( H ) = U ( L ) + ( H L )q L : The equivalent formulation for this model would be U ( H ) = U ( L ) + ( ) ( H L ) q L , which is increasing in . In other words, positive news generates an additional cost to the seller, thereby driving down consumption for low-valuation buyers as beliefs approach one. The combined e¤ects of information value, e¢ ciency, and adverse selection determine a set of types for which the provision of quantity is nonmonotonic in : These types consume the largest quantities for intermediate values of , where the value of information is highest. Figures 3(a) and 3(b) show the construction of the equilibrium quantities for two di¤erent buyers. 
The equilibrium quantities are given by the vertical sum of the marginal value of information ( ) V 00 ( ) with the myopic solution q m ( ; ). A peculiar result of this model is that the value of information (and hence the di¤erence between the equilibrium and the myopic quantities) peaks at a value of lower than one-half. To understand why this is the case, consider equation (19). The total information value Q ( ) V 00 ( ) is equal to the di¤erence between long-run average and current- ‡ow pro…ts v ( ) ( ; q). The marginal value of information ( ) V 00 ( ) therefore indicates the contribution of each unit sold to this di¤erence. The value of v ( ) ( ; q) depends positively on the degree of uncertainty (1 ), which is a measure of how much posterior beliefs can be in ‡uenced by the signals observed in the current period. At the same time, the total quantity Q is increasing in , which implies that the ratio (v ( ) ( ; q))/ Q is decreasing at = 1=2. In other words, since the myopic …rm's total sales are increasing in , learning will occur faster when beliefs are high, even in the absence of any (additional) experimentation. This lowers the gap between the full information and the incomplete information pro…ts, and reduces the information value of each (additional) unit sold. Nonlinear Prices Our result on the nonmonotonic provision of quantity can be related to introductory pricing. When uncertainty is high, even low-valuation buyers are induced to purchase larger quantities through quantity discounts. As the market obtains positive signals, buyers'valuations increase, but introductory discounts are greatly reduced. As a consequence, low-valuation buyers reduce their demands. This feature distinguishes the response of the equilibrium menu to the arrival of information from that of the myopic …rm's menu. As the market obtains positive signals, the myopic …rm increases the quantity supplied to all buyers. Figure 4 compares the equilibrium menus (q;p ( ; q)) o¤ered by a myopic …rm (4(a)) with those o¤ered by a forward-looking …rm (4(b)), as described in this section, for several values of . Figure 4(b) also highlights the response of the lowest available quantity to the arrival of information. The slope of the equilibrium menus corresponds to the marginal prices. When buyers' types are distributed uniformly, marginal prices are given bŷ In the undiscounted case, learning has an intuitive e¤ect on marginal prices. Marginal prices are increasing in for each quantity, provided the di¤erence in quality levels H L is not too high. Conversely, for high values of H L , marginal prices are U-shaped in for all q. As we have shown, the analysis of the undiscounted problem under full market coverage delivers explicit solutions that provide insights into the properties of the equilibrium menus. We now extend our …ndings, by separately relaxing the assumptions of in…nite patience and full market coverage. Small Positive Discounting Many results obtained in the undiscounted limit extend to the case of a positive discount rate. In particular, we can use bounds for the convexity of the value function to establish the concavity of equilibrium quantities under small positive discounting. This procedure presents some di¢ culties, because the second derivative of the value function is unbounded when goes to zero or one. The only exceptions are given by the myopic pro…ts (because 00 m ( ) is a constant), and by the undiscounted pro…ts (because v 00 ( ) 0). 
Our …rst result extends the concavity property of the provision of quantity through a careful treatment of the order of limits. Proposition 7 (Concave Quantities) For any " 2 (0; 1), there exists a value of the discount rate r " such that, for all r < r " , the quantity supply function q ( ; ) is concave in for all 2 ["; 1 "] and for all 2 . Our second result establishes that all o¤ered quantities are increasing in the posterior beliefs when = 0. More importantly, it identi…es the minimum degrees of patience required to extend the nonmonotonic quantities result to an arbitrary set of low-valuation buyers. For this purpose, let~ be the threshold type de…ned by (21). Proposition 8 (Nonmonotonic Quantities) 1. The quantity q ( ; ) is increasing in at = 0 for all r and all . Partial Market Coverage When the distribution of types is such that ( L ) < 0, it is not optimal for the monopolist to serve the entire market for all values of the posterior beliefs. In what follows, we focus on the undiscounted version of the problem and apply the strong long-run average criterion. For buyers who are o¤ered positive quantities in equilibrium, the optimal sales level is characterized by the …rst-order condition (15): However, the equilibrium value of information also a¤ects the set of buyers who receive positive quantities. In other words, ( ) V 00 ( ) determines the lowest type served, which we denote by ( ). After substituting the optimal policy rule as a function of ( ) V 00 ( ), we can rewrite the …rm's problem as follows: The critical type is determined through the equation q ( ; ( )) = 0. In order to obtain a closed-form expression for ( ) V 00 ( ), and hence for q ( ; ), we assume that types are distributed uniformly. We then obtain the following characterization of the equilibrium quantities and market coverage levels. Proposition 9 (Market Coverage) Assume types are uniformly distributed on [ L ; H ]. The undiscounted equilibrium level of market coverage and equilibrium quantities are given by The incentives to experiment lead the …rm to serve a larger fraction of types, compared to the myopic solution. These incentives are clearly strongest when the value of information is greatest. Market coverage is therefore highest for intermediate values of , where information is more valuable. However, as in the case of full market coverage, the marginal value of information (and hence the fraction of buyers who are served) attains a maximum when is lower than 1=2. The case of partial market coverage allows us to show clearly how the arrival of new information bene…ts some high valuation buyers, but not others. Figure 5 shows the indirect utility levels for three buyers, as a function of . In particular, the lowest valuation buyer shown ( ) is excluded for some high and low values of , while buyers 0 and 00 are served for all values of . However, buyer 0 does not always bene…t from the arrival of new (positive) information. Intertemporal Patterns We are now interested in deriving predictions for the intertemporal evolution of the equilibrium menus. We …rst consider the point of view of participants in the market. Their posterior beliefs t follow the di¤usion process described by equation (1). Therefore, by Itô's Lemma, any twice di¤erentiable function h ( t ; ), such as prices and quantities, also follows a di¤usion process. 
In particular, the law of motion of h ( t ; ) is given by Given that E [d t ] = 0, the sign of the drift component of the process dh ( t ; ) is determined by the second partial derivative @ 2 h ( t ; ) = (@ ) 2 . In other words, the concavity and convexity properties of any function h ( t ; ) may be translated directly into statements about the sign of its expected changes. Throughout this section, we maintain the linear-quadratic functional form assumptions. Proposition 10 shows that, from the point of view of the agents, quantities are expected to decrease over time. Conversely, di¤erences between quantities o¤ered to di¤erent buyers are expected to remain constant over time. Finally, di¤erences in the total prices charged to di¤erent buyers are expected to increase. All these …ndings are consistent with the use of introductory pricing by the …rm, which combines lower charges and larger quantities when uncertainty is higher. The posterior beliefs t of market participants follow the di¤usion process (1). However, from the point of view of an external observer (i.e. the econometrician), the evolution of the process d t depends on the true underlying quality level. Therefore, any empirical prediction about the intertemporal patterns of prices and quantities must be based on the conditional law of motion of beliefs. The conditional changes in beliefs have a non-zero drift component, whose sign depends on the true : In particular, for 2 f L ; H g, the general …ltering equation (see Liptser and Shiryaev (1977)) is given by The drift component of a dh ( t ; ) is no longer uniquely determined by the second partial derivative @ 2 h ( t ; ) = (@ ) 2 , but also depends on the …rst partial derivative @h ( t ; ) =@ . In particular, using expression (22), and factoring out common terms, the sign of the drift component of the process dh ( t ; ) is determined by the following expressions: These expressions can be used to derive su¢ cient conditions under which the expected change in quantities and total charges has an unambiguous sign. In this case, the concavity of the equilibrium quantities is no longer su¢ cient to conclude that supplied quantities decrease in expectation for all buyers. However, conditional on the bad state L , quantities are expected to decrease over time for all high-valuation buyers, since their equilibrium quantities are increasing in . If we let = H and 0 = L , Proposition 11 suggests that the variety of the o¤ered menu for high-quality products increases over time. Opposite conclusions hold for low-quality products. To summarize, our model predicts that successful product lines should be characterized by increasing dispersion in prices and in the range of o¤ered quantities. Figure 6(a) shows the results of numerical simulations for the quantities o¤ered to two di¤erent buyers, with a prior belief 0 = 1=20, and assuming that the actual quality is high. Figure 6(b) shows the results of numerical simulations for the total charges paid by the same two buyers. As time passes, the quantity supplied to the lower-valuation buyer decreases. However, total charges stay approximately constant, as the …rm exploits the consumer's increasing willingness to pay per unit of the product. Discussion We now discuss the relationship between our results and other dynamic pricing models. The main questions of interest are: (i) Which results are due to the combination of learning and price discrimination? 
(ii) What is the role of the multiplicative interaction between consumers'tastes and product quality? Single-Price Benchmarks The papers in the literature that best serve as single-price benchmarks for the present work are (monopoly versions of) the models in Bergemann andVälimäki (1997, 2002). In these papers, the utility level of a buyer is in ‡uenced by two random variables: her willingness-topay and her experience with the product. The main di¤erence with the work of Bergemann and Välimäki is that we allow consumers to have multi-unit demands, and the …rm to price discriminate. In the work of Bergemann andVälimäki (1997, 2002), the …rm charges lower prices, relative to the myopic solution. Furthermore, when the value of information is su¢ ciently high, the equilibrium prices can increase following both good and bad news about the quality of the product. Our model shares the same intuition for the positive value of information, and hence for introductory pricing. The novelty of our framework is in the set of instruments available to the …rm, namely the ability of the monopolist to choose both the price and the quantity o¤ered to each buyer. In our setting, the di¤usion of information impacts buyers di¤erentially. In particular, good news can bene…t high-valuation types and hurt low-valuation buyers. With a slight change in interpretation, we can view q as a one-dimensional product characteristic (e:g: "quality"), and as the match value of the product's features with the consumers'tastes. 11 In such a model, the …rm o¤ers di¤erent versions of the product as a function of the market's beliefs about the value of the match. Furthermore, the equilibrium product line variety does not respond to good and bad news symmetrically. Indeed, bad news lead to a contraction of the …rm's menu, while good news lead to an increase in product line variety. It can also be useful to contrast our framework with the idiosyncratic learning model in Bergemann and Välimäki (2006). This paper examines dynamic pricing of experience goods when buyers are ex-ante identical and learn their true value through consumption. As a result of the private values environment, and in contrast to our model, the equilibrium price patterns are deterministic. Bergemann and Välimäki (2006) show that the equilibrium prices can be either increasing or decreasing over time. In particular, in mass markets, the …rm serves informed buyers with progressively lower valuations, as the size of the uninformed consumer population decreases. This causes prices to decrease. However, the reason for decreasing prices is related to the …rm moving along the demand curve. In our model, prices are stochastic and decline following negative signals about a common value component. Finally, when the price in Bergemann and Välimäki (2006) is decreasing, it always lies above the static monopoly price, which further highlights the di¤erent role of experimentation in the two models. Product Quality and Idiosyncratic Tastes In our model, the e¤ects of information on the quantities o¤ered by the …rm depend on the interaction between consumers'willingness to pay and product quality. We have assumed a multiplicative interaction, but depending on the application, di¤erent demand speci…cations may be more appropriate. A plausible alternative speci…cation for each buyer's complete information utility is an additive one, such as U = ( + ) u (q). In this case, product quality shifts the distribution of consumers'willingness to pay. 
Under full market coverage, changes in beliefs modify the quantity sold to each buyer in the same direction. This is in contrast with our …nding in Section 4.1, in which the amount of experimentation is constant across buyers, but information may increase one buyer's consumption level, and decrease another's. Nevertheless, under a linear speci…cation, the …rm still adopts introductory pricing, and serves more buyers than in the myopic solution. An even simpler demand function would be U = + u (q). This is equivalent to shifting the buyers'participation constraint. If is allowed to take negative values, the …rm solves a standard optimal stopping problem, in order to determine for which beliefs it should quit the market. When in the market, the …rm sells nonmonotonic quantity levels to all buyers. This occurs because product quality and the number of units purchased by each buyer do not interact, and only the learning e¤ect is present. Therefore, the di¤usion of information a¤ects all buyers in the same qualitative way. The most interesting alternative formulation is perhaps one in which the consumers' tastes are closer together when the product is of high quality. This could be the case when user-friendliness or other characteristics make a high quality product more easily accessible by many users. 12 Indeed, consider the utility speci…cation U = ( + = ) u (q), with > 0. For simplicity, we focus on the linear-quadratic model under full market coverage. The dynamic optimal quantity levels are given by For large enough, both the myopic quantity provision and the dynamic quantity levels are increasing in for all types. In this model, the optimal myopic quantity is convex in , and hence buyers expect quantity levels to increase over time when the discount rate is su¢ ciently high. However, for a low discount rate (or a large enough di¤erence H L ), the optimal dynamic quantity provision is again a supermartingale, and hence expected to decrease over time. This again highlights the value of information, and emphasizes how the results on introductory pricing do not rely on the multiplicative speci…cation of buyers' preferences. Finally, in contrast to the results in Proposition 6, di¤erences in the quantity levels provided to any two types are decreasing in . This result is in line with the …ndings of Gärtner (2010). Concluding Remarks We have analyzed the dynamic menu pricing strategy of a new …rm, when the quality of its product is initially unknown. Buyers assess the quality of the product uniformly, but have di¤erent willingness to pay, which makes it pro…table to practice second-degree price discrimination. By adjusting the quantities o¤ered to each buyer, the …rm can manage the ‡ow of information to the market, and balance the di¤usion of information with the maximization of short-run revenue. The model yields tractable closed-form solutions that enable us to predict the intertemporal patterns of the equilibrium prices and quantities. It also has clear welfare implications, and extends quite naturally to the analysis of competitive environments. We now provide some remarks on these two issues. Welfare Analysis: The information value of each unit sold induces the …rm to increase the quantity supplied to each buyer beyond the ideal point of a myopic seller. This e¤ect counters the downward distortions induced by adverse selection. As a consequence, experimentation by the monopolist increases each buyer's utility level, as well as the e¢ ciency of the allocation. 
However, the gradual resolution of the uncertainty is not equally bene…cial to all buyers. Low-valuation buyers (who may be excluded as learning occurs) expect their utility level to decrease over time. This is also the case for intermediate-valuation buyers, who face higher prices once low-valuation buyers have been excluded. When posterior beliefs become more optimistic, high-valuation buyers consume larger quantities, and assign a higher value to each unit. Their indirect utility is therefore a convex function of the posterior beliefs and consequently these buyers bene…t from the di¤usion of information. The aggregate quantity sold in equilibrium is ine¢ ciently low. This is due both to a (static) adverse selection e¤ect and to lower (dynamic) incentives to experiment. In each period, the social planner would not impose downward quantity distortions. Rather, the planner would shift rents from the …rm to the consumers, and achieve incentive compatibility by lowering prices. This shift is welfare-improving. Due to the fact that quality and quantity are complements in the buyers'utility function, these gains in e¢ ciency are ampli…ed when the product is of high quality. Compared to a monopolist, the social planner assigns a larger value to information, and hence sells even larger quantities in order to experiment more. Dynamic Competition: The assumption of a monopoly environment is appropriate in some cases. An example is the early days of Net ‡ix. However, markets for experience goods are often characterized by imperfect competition, and pricing is strategic. We are therefore motivated to extend our analysis of dynamic menu pricing to a competitive setting. For example, consider a model in which a new entrant faces a single "safe" incumbent. We assume the two products are horizontally di¤erentiated, or, in other words, that consumers have idiosyncratic preferences for the products of each …rm. In this environment, the role of information becomes even more important. A crucial issue for both …rms is whether to invest in learning about the entrant's product. In particular, the entrant can a¤ect the speed of information di¤usion on both the intensive margin, through the number of units sold to each buyer, and the extensive margin, by controlling market shares. The incumbent, who is selling a product of known quality, can only a¤ect learning on the extensive margin: pricing aggressively reduces learning about the entrant, while accommodating the new …rm accelerates it. We …nd that the entrant is always willing to invest in acquiring information, while the incumbent regards acquiring information as bene…cial only if it believes the relative quality of its product is not very high. Furthermore, experimentation drives the entrant's market share above its myopic equilibrium level, for all values of the posterior beliefs. As in the monopoly case, the amount of experimentation is nonmonotonic, and the entrant's market share is largest when uncertainty about the quality of its product is high. Relaxing the symmetric learning assumption and extending the competitive analysis to richer speci…cations of brand preferences are two directions for future research that should provide more insights into the dynamics of (competitive) menu pricing in markets for experience goods. Appendix Proof of Proposition 1. (1.) 
By the implicit function theorem, whenever (8) admits a positive solution, the partial derivatives of the myopic supply function are given by When q m ( ; ) > 0, both these expressions are positive under either assumption 3 or 4. Note that whenever ( ) 0, the myopic quantity is zero. (2.) Apply the envelope theorem and use part (1.) to obtain the following expressions for the derivatives of m ( ): which ends the proof. The proof of the next theorem adapts the one in Keller and Rady (1997) to the case of nonlinear pricing. Proof of Theorem 1. Fix a quantity supply function q : ! R + . De…nition (7) shows that ( ; q) is linear in : Therefore, we can write the expected discounted stream of pro…ts as by de…nition of the value function V . Taking the supremum of the left-hand side with respect to q establishes the convexity of V . Therefore, to establish continuity, we only need to check at = 0 and = 1. Suppose V were not continuous at = 0: Because of convexity, this implies lim !0 + V ( ) < V (0), which in turn means there exists a policy q such that V (0; q) > lim !0 + V ( ) : But the strict inequality would continue to hold in a neighborhood of = 0, contradicting the de…nition of V : An identical argument can be used to show continuity at = 1. The next lemma follows the steps in Keller and Rady (1997), and shows that the HJB equation (11) may be reformulated as a boundary value problem. Lemma 1 (Boundary-Value Problem) Let ( ) be de…ned by (2) and let Q = R q ( ) dF ( ). The HJB equation (11) may be reformulated as with boundary conditions for all ( ; V; V 00 ) with V 00 0 and ( ; V ) lying in the set We now state an existence theorem for boundary value problems due to Bernfeld and Lakshmikantham (1974), which we then use to prove Theorem 2. This result requires the concept of supersolution and subsolution and the introduction of a regularity condition. Consider a second order di¤erential equation of the form on an open interval J " = ("; 1 ") with " 0. Let V L and V H be functions with continuous second derivatives on J. The function V L is a called a subsolution of (28) if V 00 If these inequalities are strict, these functions are called strict sub-and supersolutions. Fix two functions V H and V L such that V L V H on J. The function G ( ; V; V 0 ) is said to be regular with respect to V H and V L if it is continuous on S " = f( ; V 0 ; V 1 ) 2 J " R R : V L ( ) V 0 V H ( )g and there is a constant C (") such that jG ( ; V; V 1 )j C (") 1 + jV 1 j 2 on S " . We can adapt Theorem 1.5.1 in Bernfeld and Lakshmikantham (1974) to our framework, to show existence of a solution. Lemma 2 (Existence and Uniqueness) Consider an interval J " , ("; 1 "). Suppose V L is a subsolution and V H a supersolution of (28) on J " , and V L V H : Suppose further that G is regular with respect to V L and V H on J " . Given any pair of boundary conditions V (") 2 [V L (") ; V H (")] and V (1 ") 2 [V L (1 ") ; V H (1 ")], (28) has a C 2 solution on J " which satis…es the boundary conditions. Moreover, for all 2 J " , V L ( ) V ( ) V H ( ). If V L is a strict subsolution, V > V L and if V H is a strict supersolution V < V H on J " . Moreover, for all 2 J " , jV 0 ( )j < N , where N only depends on C (") and on the functions V L and V H . We also adapt Corollary 1.5.1 from Bernfeld and Lakshmikantham (1974) to show the convergence properties of our solution. 
Lemma 3 (Uniform Convergence) Under the assumptions of Lemma 2, any in…nite sequence of solutions of (28), with V L ( ) V ( ) V H ( ) on J " has a uniformly convergent subsequence converging to a solution of (28) on J " . We can now use these results to prove existence and uniqueness of a solution. Claim 3 Fix an interval J " = ("; 1 "). The boundary value problem (25) is regular with respect to m and v on J " . Proof. It su¢ ces to show that there exists a constant C > 0 such that, for all ( ; V ) 2 , the following obtains: We know that this ratio is always positive and that the …rst term in the numerator is bounded from above by v ( ). Furthermore, we can show that Q is bounded from below by Q m . Suppose in fact thatq = arg min q ((rV ( ; q)) / ( ) Q) , and thatQ < Q m . Then we would have (rV ( ;q)) / ( )Q < (rV ( ; q m )) / ( ) Q m , which yields a contradiction. In fact,Q < Q m implies the right hand side's denominator is larger than the left hand side's, while ( ; q m ) = m ( ) > ( ;q) implies the numerator of the right hand side is smaller than the left hand side's. Moreover, if the solution to (29) is di¤erent from q m , then it must achieve a lower value than q m does. We can then de…ne the uniform bound as which ends the proof. Proof of Theorem 2. (1.) We know the HJB is equivalent to the boundary value problem (25). Furthermore, this problem satis…es all conditions of Lemma 2. Therefore, for all " > 0, the boundary value problem (25) admits a C 2 solution on ["; 1 "] with boundary conditions rV (") 2 [ m (") ; v (")] and rV (1 ") 2 [ m (1 ") ; v (1 ")]. Now let " = 1=n and …x the closed interval J n , [1 /n ; 1 1 /n ]. Similarly, let s n and consider a solution V s ( ) to (25) on the interval [1 /s ; 1 1 /s ]. De…ne the function V n s as the restriction of V s to J n . By Lemma 3, for each n, the sequence V n s has a converging subsequence. By a standard diagonalization argument, there exists a convergent subsequence (which we de…ne as V n ) converging pointwise to a function V : (0; 1) ! R. By Lemma 2, jV 0 n j is uniformly bounded, hence V n ! V uniformly on any closed subinterval of (0; 1). Moreover, the constant C (1=n) de…ned in (30) yields a uniform bound for jV 00 n j on J n . Therefore, V 0 n is locally Lipschitz, hence converges uniformly to V 0 on any closed subinterval of (0; 1). Finally, a standard continuity argument shows that the limit function V actually solves (25). The function G is strictly increasing in V by the envelope theorem. Since the boundary conditions are identical, the function V 2 V 1 attains a local maximum on (0; 1) with V 2 > V 1 . At the maximum, V 00 2 V 00 1 0; therefore, the HJB equations imply G ( ; V 1 ; V 0 1 ) G ( ; V 2 ; V 0 2 ) which contradicts V 1 < V 2 . (2.) Under either assumption 3 or 4, the pointwise maximization of (12) admits a unique solution. We know from part (1.) that a solution V ( ) exists. Therefore q ( ; ) is the only policy attaining it. We can then apply the implicit function theorem to obtain the following expressions for the …rst partial derivatives: Under either assumption 3 or 4, these ratios are well de…ned whenever (13) admits a positive solution. Formulation (25) and the envelope theorem imply that (d=d ) ( ( ) V 00 ( )) is equal to , and therefore it is continuous in . (3.) We verify three conditions for the application of a veri…cation theorem. First, by part (1.), there exists a C 2 solution V ( ) to the HJB equation. 
Second, the solution to the HJB equation delivers bounded expected pro…ts for all (since V ( ) is bounded by v ( ) =r). It follows that lim sup t!1 e rt E (V ( t )) = 0. Third, from part (2.), there exists a C 1 policy q : [0; 1] ! R + that maximizes the right-hand side of the HJB equation (11). We can therefore apply Theorem III.9.1 in Fleming and Soner (2006) and conclude that V ( ) achieves the maximum of (10). (2.) Holding ( 0 ) constant while increasing ( H L ) induces a mean-preserving spread in the process t . Since the pro…t function is linear in , the value function V ( 0 ) increases, and so does the return function W ( 0 ; r). Since ( ) V 00 ( ) is related to W r ( ) by equation (25), a straightforward application of the envelope theorem delivers that the value of information depends positively on the value of the problem, and hence on the return function W r ( ), and on the di¤erence H L . Proof of Proposition 3. (1.) Let ( ) = ( ) V 00 ( ). From …rst order condition (13) and the implicit function theorem, we have @q ( ; ; ) @ = 1 ( ) ( ) u 00 (q) c 00 (q) 0, with a strict inequality if q ( ; ; ) > 0. This is clear under assumption 3 (concave virtual surplus). Under assumption 4, it is immediate to show that we must have c > for all , and therefore ( ) 0 implies the optimal quantity is zero. It follows that the denominator in (31) is strictly negative whenever quantity is strictly positive. Finally, because the value of information is identically equal to zero in the myopic case and it is given by ( ) > 0 in the forward-looking case, quantity is higher in the latter setting. (2.) Similarly, quantity is increasing in the value of information ( ) for all and . Proof of Proposition 4. (1.) Since ( ) V 00 ( ) 0 and ( ) is increasing, for any quantity q o¤ered both by the myopic and the forward looking …rm, the corresponding marginal pricep q ( ; q) is lower in the latter case. (2.) Since ( ) is increasing, the higher the value of information, the lower the marginal prices. The …rst term in (33) is exactly the expression for the …rm's myopic pro…ts m ( ) in this context. We can then solve explicitly for ( ) V 00 ( ) and obtain which ends the proof. Proof of Theorem 3. (1.) The …rst term in expression (20) is linear in : The term inside the square root is concave, since its second derivative with respect to is given by Therefore, q ( ; ) is a concave function of . Proof of Proposition 7. Consider …rst order condition (15) for the equilibrium quantity function q ( ; ). Parametrize the solution q ( ; ) and the value function V ( ) by the discount rate r. The …rst derivative with respect to is given by The second derivative is given by (35) Now consider an interval ["; 1 "] with " > 0. We know that the second derivative of the value function V 00 ( ; r) is uniformly bounded from above for all r. The bound C (") is de…ned in equation (30). From expression (35), we know that if rV 00 ( ; r) ( H L ) 2 Var [ ] 0, then @ 2 q ( ; ; r) (@ ) 2 < 0. Therefore, if the discount rate r is lower than the threshold r " , ( H L ) 2 Var [ ] /C ("), then quantity provision q ( ; ; r) is concave in over the interval ["; 1 "]. Furthermore, since the second derivative @ 2 q ( ; ; r) (@ ) 2 does not depend on the buyer's type, the result holds for all . Proof of Proposition 8. (1.) Consider again the derivative @q ( ; ; r) =@ , given in equation (34). Evaluate expression (34) at = 0. Since we know that rV ( ; r) m ( ) for all and for all r, we can conclude that rV 0 (0; r) 0 m (0). 
Using the fact that rV (0; r) = m (0) for all r, and that 0 m ( ) = ( H L ) ( ) E 2 , we obtain the following expressions: (36) We know the derivative W 0 r (1) is increasing in r, since W r ( ) is convex in and decreasing in r for all , and, at = 1, we have W r (1) = m (1) for all r. It follows the right-hand side of (36) is increasing in r. The right-hand side of (36) is also increasing in , since it depends positively on ( ). In the undiscounted case, we have W 0 r (1) = v 0 (1). When r = 0, we can identify a threshold type~ that solves @q(1;~ ; 0)=@ = 0. Moreover, since W 0 r (1) is increasing in r, for each " we can …nd a discount rate r " such that jW 0 r (1) v 0 (1)j < " for all r < r " . Since the right-hand side of (36) is increasing in , for any 0 lower than the undiscounted threshold~ , we can …nd a value for the discount rate r low enough so that 0 solves @q(1; 0 ; r) /@ = 0. For all r < r 0 , we then obtain decreasing quantities q ( ; ) at , for the value of information ( ) V 00 ( ), and obtain the result in the text. Proof of Proposition 10. The drift component of the process dh ( t ; ) is given by The result then follows directly from equation (37), Proposition 6 and Theorem 3. Proof of Proposition 11. This result follows directly from equations (23) and (24), from Proposition 6, and from Theorem 3.
2014-10-01T00:00:00.000Z
2010-11-18T00:00:00.000
{ "year": 2010, "sha1": "5f413d210bbe8c347c4ed37368621a77879ba628", "oa_license": "CCBYNCSA", "oa_url": "https://dspace.mit.edu/bitstream/1721.1/65904/1/aej.pdf", "oa_status": "GREEN", "pdf_src": "ElsevierPush", "pdf_hash": "c608ee7e4385cf87717486960fb32c0d1029e60c", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Economics" ] }
84184751
pes2o/s2orc
v3-fos-license
Association of HLA-B27 and Behcet’s disease: a systematic review and meta-analysis Background To calculate the genetic impact of the “HLA-B27” allele on the risk of Behcet’s disease (BD) progression using a systematic review and meta-analysis on case control papers. Methods A systematic review search was conducted on the MeSH keywords of Behcet’s disease, HLAB27 and B27 in PubMed, Scopus, ProQuest, EMBASE, SID, Magiran, IranDoc and IranMedex databases from 1975 to Aug 2017. Data underwent meta-analysis (random effect model) in CMA2 software. Pooled odds ratios with 95% confidence intervals were calculated for each study. The heterogeneity of the articles was measured using the I2 index. Results Twenty two articles met the inclusion criteria for 3939 cases and 6077 controls. The pooled OR of “HLA-B27” in BD patients compared with controls was [1.55 (CI 95% 1.01–2.38), P = 0.04]. The OR differ among different countries or geographical areas, focus on domination the European countries. Quality of studies was moderate and heterogeneity was relatively high (I2 = 66.9%). Conclusions There is a significant correlation between HLA-B27 and Behcet’s Disease, but it was weak. Environmental and genetic factors might determine which the “HLA-B27” alleles manifest Behcet’s disease progression. Future researches is required to perform about what factors can do to positively and separately influence Behcet’s disease. Introduction Behcet's disease (BD) is a recurrent inflammatory disease characterized by four main symptoms, including recurrent oral ulcers, genital ulcers, skin lesions and uveitis [1,2]. BD has been spread worldwide; however, it is observed more commonly along the Silk Road [3]. The etiology and pathogenesis of BD is unknown [2,4]. However, environmental and genetic factors are important agents in the development of the disease [2,3]. Complex HLA/MHC is a genetic region with a biologically important action that is strongly associated with autoimmune diseases such as BD, ankylosing spondylitis (AS) and reactive arthritis [3,4]. Although MHC class I specially "HLA-B5/B51" have the strongest associations with BD; data about "HLA-B27"are conflicting and may increase susceptibility to BD [2,5,6]. HLA-B27" is one of the attractive issues in medicine that play an important role in the pathogenesis of diseases like seronegative spondyloarthropathies and have a protective role in some infections [2,7]. The frequency of "HLA-B27" varies among populations which may be the result of genetic and environmental factors [8,9]. Furthermore, linkages between HLA-B alleles with other MHC and non-MHC genes could alter the penetration and clinical expression of the disease [7,10]. Although these antigens are not as diagnostic criteria but are used for the conforming of diagnosis and the assessment of complications [10]. We conducted a systemic review and meta-analysis on casecontrol articles in order to the evolution of Behcet disease-gene association with the goal calculating the risk increase for BD progression related to "HLA-B27" and comparing among across the continents. [11] used for the search. References of the related review studies were assessed manually. In addition, unpublished studies and documents (grey literatures), and studies offered at congresses were scanned. In the item of unpublished studies or ambiguous data, we contacted the authors to gain further information. 
Besides, because of the rareness of BD in certain areas, a number of articles that did not conform to the PRISMA checklist were also included to allow coverage of specific items in the results. Papers designed as case-control studies that contained sufficient information to create a 2×2 table of the frequency of "HLA-B27" in BD patients and controls were included in the study. Moreover, all subtypes of "HLA-B27", with or without concomitance with other HLA class I alleles, and all ages, genders, countries and ethnicities were accepted as inclusion criteria. Animal studies, case reports, case series, letters, and studies examining only a subset of BD patients (such as those with ankylosing spondylitis or uveitis) were excluded. This study was conducted at the Tissue Disease Research Center (TDRC) of the Tabriz University of Medical Sciences, Iran. Eligible studies and data extraction All articles were independently assessed by two appraisers (K.A. and V.L.). Any disagreement between the authors was referred to a third author (G.M.) for eligibility. Extraction from each article was conducted independently by two authors. The following data were collected: the name of the first author, publication date, study population location, BD samples, control samples, and the frequency of HLA-B27-positive cases and controls. The countries were then divided into four groups: the Far East, the Middle East, Europe and Africa. Although Turkey is geographically part of Europe, it was included in the Middle East because of the genetic background of its population. Statistical analysis Odds ratios (OR) with 95% confidence intervals (95% CI) were calculated for all studies. Data were pooled to compare the frequency of "HLA-B27" between BD cases and controls using a random-effects model of meta-analysis. Heterogeneity between studies was measured using Cochran's Q and I2 tests to determine the percentage of variation among studies, using the software CMA v.2.0. Publication bias was investigated with funnel plot analysis and evaluated using Egger's test. The results of the meta-analysis were presented as forest plots. All statistical tests were 2-tailed, and a p-value less than 0.05 was considered statistically significant. Endnote X5 was used to classify the data, review the titles and abstracts, and identify duplicated studies. Ethics statement The protocol of the study was approved by the Ethics Committee of Tabriz University of Medical Sciences, and all data were kept without stating the patients' names and addresses. Results The final data contained a total of 3939 cases and 6077 controls, with the largest sample size in Xavier et al. from Iran [21]. Furthermore, the largest case and control populations were related to the Middle East, while Europe had a higher ratio of controls to cases. The pooled OR for BD susceptibility was 1.55 (95% CI 1.01-2.38), a significant difference (P = 0.04); the results are shown in Fig. 2. In most areas, the pooled ORs for HLA-B27-positive individuals to progress to BD were > 1 across countries. Comparison among continents illustrated that Europe had a higher pooled OR of "HLA-B27" than Africa, the Middle East and the Far East. The pooled OR was more than one for all continents except the Far East (Table 2). The between-study heterogeneity was moderate (total I2 = 66.97%), so the random-effects model was selected for the statistical analysis in this study. 
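To make the pooling step concrete, below is a minimal sketch of a DerSimonian-Laird random-effects meta-analysis of per-study odds ratios in Python. The 2×2 counts are invented placeholders, not the data of the included studies, and CMA v.2.0 may differ in details such as continuity corrections.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: (B27+ cases, B27- cases, B27+ controls, B27- controls)
studies = [(12, 188, 15, 385), (8, 92, 20, 480), (30, 270, 45, 655)]

log_or, var = [], []
for a, b, c, d in studies:
    # 0.5 continuity correction guards against zero cells
    a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    log_or.append(np.log(a * d / (b * c)))
    var.append(1 / a + 1 / b + 1 / c + 1 / d)
log_or, var = np.array(log_or), np.array(var)

# Fixed-effect weights, Cochran's Q, and the I^2 heterogeneity index
w = 1 / var
q = np.sum(w * (log_or - np.sum(w * log_or) / w.sum()) ** 2)
k = len(studies)
i2 = max(0.0, (q - (k - 1)) / q) * 100

# DerSimonian-Laird between-study variance tau^2, then random-effects pooling
tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_or) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), I2 = {i2:.1f}%")
```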
Tests with P < 0.05 or I2 > 50% indicated significant heterogeneity. The funnel plot used to detect publication bias in the meta-analysis had a slightly asymmetric shape, and Egger's test (t-value = 0.006, df = 20, P-value = 0.99) was not statistically significant (Fig. 3). In addition, meta-regression was performed based on the year of publication. The results of the meta-regression showed that the slope of the regression line was not significant: with each 1-year increase, the log odds ratio for the incidence of the disease decreased by 0.012 units (β = −0.012, SD = 0.015, P-value = 0.44) (Fig. 4). Discussion The pooled estimates of this meta-analysis of 3939 cases and 6077 controls indicate that the risk of BD progression associated with "HLA-B27" is increased by a factor of 1.55. In the current study, 22 articles were included in a meta-analysis in order to assess the association between "HLA-B27" frequency and BD. To the best of our knowledge, this is the first meta-analysis to compare the frequency of "HLA-B27" in BD patients and healthy individuals. Previous studies have been performed on "HLA-B51/B5", which is known to have the strongest association with BD [22,30], with an odds ratio of 5.78 in carriers compared with non-carriers [2]. However, recent studies have revealed new susceptibility genes in the remaining regions of HLA class I, as well as numerous non-HLA genes, for BD [20,31,32]. This study was conducted with regard to the role of other genes, and it seems that there is a correlation between BD and "HLA-B27"; however, this correlation is weak in comparison with that of "HLA-B51/B5". Behcet's disease is a type of autoinflammatory disorder, and numerous genetic and environmental factors contribute to the development of BD [3,33]. McGonagle et al. revealed that BD, psoriasis, psoriatic arthritis (PsA) and spondyloarthropathies (SpAs) can be considered to have a strongly shared immunopathogenetic basis [34]. All these diseases overlap clinically and are associated with MHC class I (MHC-I) alleles such as "HLA-B51", "HLA-B5", "HLA-B27" and HLA-C0602, although these connections are stronger in some diseases [20,21]. However, other class I alleles are also expected to be involved. Indeed, the risk of the HLA-B27 allele appears to be shared in all SpA and Behcet's disease groups [20,31]. In another study, Gül stated that, despite the strong relationship of HLA-B51 with BD, this relationship remains controversial because of the role of other MHC class I variants [35,36]. Another comprehensive review emphasized the role of other class I alleles in BD based on the results of recent studies [35]. Indeed, it remains unclear whether the "HLA-B51" gene itself confers BD susceptibility or whether other genes are effective in the disease [35,36]. Therefore, this meta-analysis was conducted to gain information concerning the relationship between BD and the HLA-B27 allele. As BD is a chronic, relapsing vasculitis involving several body systems, with significant morbidity and mortality [37], comprehensive disease management is necessary to prevent or minimize the effects of the disease [3,32]. On the other hand, given the progress in understanding the pathogenesis of BD and the introduction of immunologic agents as risk factors [3], good management can be carried out according to the pathogenesis of BD [3]. For instance, because of the lack of pathognomonic diagnostic laboratory tests, BD is identified according to clinical criteria [32]. 
It is expected that these factors could be used as diagnostic criteria in the future. As a consequence, timely diagnosis, appropriate treatment and good follow-up could reduce morbidity and mortality and improve patient outcomes [32]. In the subgroup analysis, the highest pooled OR was related to the European population, followed by the African and Middle Eastern populations. In the Far East, "HLA-B27" was a protective factor against developing BD. Ultimately, according to the results of this study, the chance of "HLA-B27" carriers developing BD differs among countries and geographical areas, with an increasing rate from East to West. In general, numerous studies show differences in the distribution of HLA-B27 and its subtypes among population groups worldwide [38][39][40], ranging from about 0% in Australian Aborigines and rare in Black Americans [8] to 0.1-0.5% in Japanese, 2-9% in Chinese, 4% in North Africans and 8% in Caucasians [8,38]. These differences are influenced by several factors such as ethnicity, geographic variation, interaction with other genes, linkage of alleles with some forms of diseases, and environmental agents. Moreover, migration from Asia to Europe, or genetic drift and bottleneck effects, especially in the Middle East, could explain these changes [39]. In one study, Akassou linked HLA-B27 to high concentrations of testosterone in men [41]. In another study, Reveille JD in the US observed significantly higher odds for "HLA-B27" in younger than in older adults [9]. Interaction between HLA-B27 and foreign agents, such as human immunodeficiency virus (HIV), hepatitis C, Klebsiella, Shigella and Salmonella, has been reported in a number of studies [9,42]. In another study, Shimizu et al. reported changes in the intestinal microbes of BD patients, with a predominance of Actinobacteria and Lactobacillus species, following stimulation of T helper 17 (Th17) cells [42]. Gene association studies have displayed interaction of "HLA-B27" with the ERAP1, ERAP2 and HLA-B60 genes [8,43]. Also, the β2-GPI antigen, which leads to the presence of antiphospholipid antibodies, has been considered in the development of autoimmune diseases [44]. It is clear that accurate diagnosis of the disease, improvement in the detection of pathogenic mutations in DNA, and an increased number of reports will affect the reported distribution of "HLA-B27". Strength and limitation The advantage of the present study was the use of a pooled OR for the analysis and, for the first time, a comparison of "HLA-B27" frequency between BD patients and controls. This study was limited by incomplete information that prevented subgroup analysis by sex and age. Analysis of the pooled OR for "HLA-B27" in subsets of BD, or independently of HLA-B51/B5, is recommended, as "HLA-B51" is the allele most strongly associated with BD. Conclusion Previous studies have established the effect of HLA class I genes, especially "HLA-B51", on BD. Based on the results of this study, the relationship between the disease and "HLA-B27" can be explained. On the other hand, this study considered the influence of other HLA-B alleles ("HLA-B27") on the risk of BD, reinforcing the hypothesis of a Behcet disease-gene association. Fluctuations in risk ratios were explained by the interconnection of environmental and genetic factors with "HLA-B27". 
Future research should further evaluate "HLA-B27" in BD, independently of or concomitantly with other agents, assess the relationship of this gene with clinical presentations, and explore the application of these alleles in the management of the disease.
2019-03-21T13:02:53.005Z
2019-03-19T00:00:00.000
{ "year": 2019, "sha1": "082573f25767c1e9e6f194f4f7beba601f06b858", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13317-019-0112-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a08666d03eca5c8db247d8b761cac4cbc1027955", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245538051
pes2o/s2orc
v3-fos-license
Short-term safety of an anti-severe acute respiratory syndrome coronavirus 2 messenger RNA vaccine for patients with advanced lung cancer treated with anticancer drugs: A multicenter, prospective, observational study Abstract Background Since 2020, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has become prevalent worldwide. In severe cases, the case fatality rate is high, and prevention by vaccination is important. This study evaluated the safety of receiving a SARS-CoV-2 vaccine in patients with advanced lung cancer receiving anticancer therapy. Methods We prospectively enrolled patients receiving anticancer drugs for advanced lung cancer who planned to receive SARS-CoV-2 vaccination. Early adverse events within 7 days of vaccine injection were evaluated using patient-reported surveys. The chi-square test and multivariate logistic regression analyses were used. Results Among 120 patients receiving lung cancer treatment, 73 were men; the mean age of the patients was 73.5 years. The treatments received for lung cancer at the time of the first vaccine injection were chemotherapy, ICIs, combined chemotherapy and ICIs, and targeted therapies, including tyrosine kinase inhibitors, in 30, 28, 17, and 45 patients, respectively. All patients received a SARS-CoV-2 messenger RNA (mRNA) vaccine. After the second mRNA vaccine dose, 15.4% of patients had fever of >38°C (95% confidence interval: 9.34%–23.2%); this rate was slightly higher than that for healthy participants at the time of the BNT162b2 trial. Patients treated with cytotoxic anticancer drugs tended to have high fever. In the multivariate analyses, male sex was associated with higher fever frequencies. However, there were no serious early adverse events due to vaccination. Conclusions Anti-SARS-CoV-2 mRNA vaccination tends to be safe, but fever following vaccination tends to be more common among patients undergoing lung cancer treatment than among healthy individuals. INTRODUCTION Since 2020, there have been outbreaks of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection worldwide. SARS-CoV-2 infection is severe in patients with lung cancer. [1][2][3][4] Steroids, antiviral drugs, anti-interleukin-6 drugs, etc. are currently used for treating SARS-CoV-2; however, there has been no breakthrough "silver bullet" as yet. The effectiveness of treatments that prevent aggravation, such as antibody cocktail therapy 5 and new antiviral drugs, 6 has been reported. The usefulness of the messenger RNA (mRNA) vaccine has already been reported, 7,8 and BNT162b2 and mRNA-1273 are typical SARS-CoV-2 mRNA vaccines currently used worldwide. Vaccination is recommended for patients with cancer by the Centers for Disease Control and Prevention and the National Comprehensive Cancer Network. 9,10 However, few data are available to establish vaccine safety and efficacy in patients with advanced cancer. The SARS-CoV-2 vaccine has a high incidence of side effects such as fever even in healthy individuals, but there are few serious adverse events. In the BNT162b2 phase 3 trial, only approximately 4% of the enrolled patients had a malignancy of any type, and these patients were not analyzed separately to assess vaccine efficacy. 7 In the mRNA-1273 trial, patients with cancer were not enrolled. 8 
There are reports that the antibody titer after vaccination does not differ between patients with cancer and healthy people, [11][12][13] while others have reported that antibody titers after mRNA vaccination are low in patients with solid cancer receiving anticancer drug treatment. 14,15 The SARS-CoV-2 vaccine has a high incidence of side effects such as fever even in healthy individuals; however, there are few serious adverse events. 7,8 The side effects of the vaccine in patients with chronic inflammatory diseases who are receiving immunosuppressive treatment are the same as those in healthy individuals. 16 Although some reports have been published on the safety of SARS-CoV-2 mRNA vaccines in patients with cancer, indicating no safety problems, 17 there are also case reports of cytokine release syndrome. 18 Therefore, this study aimed to evaluate the safety of the vaccine in patients with lung cancer receiving anticancer drug therapy. Ethics statements All participants provided written informed consent. This study was approved by the relevant institutional review board (National Hospital Organization Iwakuni Clinical Center Institutional Review Board, Iwakuni, Yamaguchi, Japan) (no. 0262) and was conducted in compliance with the Declaration of Helsinki and the Ethical Guidelines for Medical and Health Research Involving Human Subjects. The study protocol was registered on the website of the University Hospital Medical Information Network, Japan (protocol ID: UMIN000043918). Study design and participants This multicenter prospective observational study, the OLCSG2102 study, included patients with advanced lung cancer who were receiving anticancer therapies such as chemotherapy, immune checkpoint inhibitors (ICIs), and molecular targeted therapy. Patients who met the following eligibility criteria were enrolled at seven hospitals in Japan: aged 20 years or older, diagnosed with unresectable or recurrent lung cancer, receiving anticancer drug therapy, and scheduled for SARS-CoV-2 vaccination. Patients with a history of coronavirus disease (COVID-19), patients with a history of SARS-CoV-2 vaccination, patients considered inappropriate for SARS-CoV-2 vaccination, or patients with an estimated prognosis of <2 months were excluded. Outcomes The coprimary outcomes were the frequency of fever and other side reactions within 7 days after the second dose of SARS-CoV-2 vaccination, based on a patient-reported survey. The secondary outcomes were the frequency of fever and other side reactions within 7 days after the first dose of SARS-CoV-2 vaccination based on the patient-reported survey, the incidence of grade 3 or worse immune-related adverse events after SARS-CoV-2 vaccination in patients receiving ICIs, the incidence of COVID-19 after vaccination, overall survival after vaccination, and progression-free survival on anticancer drug therapy. Axillary body temperature was measured in degrees Celsius. Data collection The side reaction rating scale for the vaccine was based on the BNT162b2 report. 7 Data on local and systemic reactions and the use of medication were collected from patients, who were surveyed for 7 days after each vaccination. Pain at the injection site was assessed according to the following scale: mild, does not interfere with activity; moderate, interferes with activity; severe, prevents daily activity; and grade 4, emergency department visit or hospitalization. 
Redness and swelling were measured according to the following scale: mild, 2.0-5.0 cm in diameter; moderate, >5.0-10.0 cm in diameter; severe, >10.0 cm in diameter; and grade 4, necrosis or exfoliative dermatitis (for redness) and necrosis (for swelling). The scales for systemic events were as follows: fatigue, headache, chills, muscle pain, joint pain (mild, does not interfere with activity; moderate, some interference with activity; or severe, prevents daily activity), vomiting (mild, 1-2 times in 24 h; moderate, >2 times in 24 h; or severe, requires intravenous hydration), and diarrhea (mild, 2-3 loose stools in 24 h; moderate, 4-5 loose stools in 24 h; or severe, ≥6 loose stools in 24 h). Grade 4 for all events indicated an emergency department visit or hospitalization. Vaccine adverse reactions were reported daily by the patients on a predistributed questionnaire. Patients measured and recorded their body temperature daily for 8 days, from the day before vaccination to the seventh day after vaccination. Statistical analyses Data from the BNT162b2 clinical trial showed that the frequency of fever >38°C after the second vaccination was 11% in patients aged 56 years or older. 7 We assumed that a 10% increase in the frequency of fever >38°C in patients undergoing treatment for lung cancer would be acceptable. Accordingly, we estimated that the required number of patients for the early safety assessment would be 104, with a one-sided significance level of 0.05 and a power of 80%. An interim analysis was conducted when 120 cases had been collected, considering that 5% of cases would drop out of the survey. Differences were assessed using analysis of variance or the chi-square test. Adjusted odds ratios were calculated using multivariate logistic regression analyses with the following covariates: sex, age, smoking history, the presence of respiratory complications, and type of treatment. All statistical analyses were performed using a standard software package (STATA version 17; StataCorp). The significance threshold was set at p < 0.05 for two-sided unpaired tests. RESULTS We report the results of the interim analysis of early adverse events owing to vaccination. Between April 8, 2021 and August 31, 2021, >400 patients undergoing lung cancer treatment were enrolled to assess vaccination safety and immune-related adverse events. Once the patient-reported survey had been obtained from 120 patients, the initial adverse events of vaccination were analyzed to assess postvaccination safety. All patients received two doses of the vaccination. The characteristics of the initial 120 patients are presented in Table 1. The patients comprised 73 men (61%) and 47 women (39%). The median age was 73.5 years (range, 64-86 years), and 41% of patients were 75 years or older. There were 74 (62%) smokers. All patients had advanced lung cancer, and the histological subtype was mostly adenocarcinoma (n = 94). The treatments received for lung cancer at the time of the first vaccine injection were chemotherapy in 30 patients, ICIs in 28 patients, a combination of chemotherapy and ICIs in 17 patients, and targeted therapies such as tyrosine kinase inhibitors in 45 patients. Two patients changed their treatment regimens between the first and second injections. Both patients had been treated with a combination of chemotherapy and ICIs; one patient's treatment was changed to chemotherapy alone and the other's to ICI alone. In this study, 115 of the 120 patients received the BNT162b2 vaccine (Table 1). 
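As an illustration of the adjusted odds ratio analysis described above, the sketch below fits a multivariate logistic regression with statsmodels. The file name and column names (fever38, male, age75, smoker, resp_comp, regimen) are hypothetical placeholders, since the study dataset is not public; the covariates mirror those listed in the Statistical analyses section.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level table; one row per patient, with a binary
# indicator for fever >38 C after the second dose and the listed covariates.
df = pd.read_csv("olcsg2102_interim.csv")  # placeholder file name

# Multivariate logistic model mirroring the reported analysis:
# sex, age (>=75), smoking history, respiratory complications, regimen.
fit = smf.logit("fever38 ~ male + age75 + smoker + resp_comp + C(regimen)",
                data=df).fit()

# Exponentiate coefficients and confidence bounds to get adjusted ORs.
table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
table.columns = ["adjusted OR", "2.5%", "97.5%"]
print(table)
```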
Systemic reactions to the first and second injections are shown in Figure S1 and Figure 1, respectively. The frequency of fever >38°C after the first injection was 2.5%, and the frequency of fever >38°C after the second injection, the primary outcome, was 15.4% (95% confidence interval [CI]: 9.4%-23.2%). The frequency of fever for each treatment regimen is shown in Table 2. Fever after the second injection tended to be slightly more frequent with chemotherapy regimens and less frequent with targeted therapy. The most frequent systemic reactions after the second injection were myalgia (54.2%) and fatigue (49.2%), with no difference according to treatment regimen. The local reactions after the first and second inoculations are shown in Figure S2 and Figure 2, respectively. After the second injection, 46.7% of patients had pain at the injection site; however, there was no difference between the treatments. In total, no serious adverse events were observed in this study, and there were no cases in which the treatment schedule was postponed owing to adverse events of the vaccine. For patients receiving anticancer therapy, except targeted therapy, the date of vaccine injection was determined by their physician. There was little association between the interval from anticancer drug administration to vaccination and adverse events, especially fever (Table S1). In addition, medications such as steroids and antipyretics had a negligible effect on adverse events such as fever (Table S2). Univariate and multivariate analyses were performed to investigate factors associated with fever. The frequency of fever was significantly higher in men than in women (adjusted odds ratio: 8.87; 95% CI: 1.25-62.8; p = 0.029). There was no difference in the frequency of fever between patients older and younger than 75 years (adjusted odds ratio: 1.73; 95% CI: 0.55-4.47; p = 0.350) (Table S3). Patients treated with cytotoxic anticancer drugs tended to have a high fever, and patients who received targeted therapy tended to have a lower frequency of fever, although the difference was not significant. DISCUSSION In the present study, the frequency of fever >38°C after the second injection, the primary outcome, was 15.4%. Compared with the findings of previous reports, 7 the present findings suggest a higher risk in patients with lung cancer who are, or will be, receiving anticancer medicine than in healthy individuals. Regarding other adverse events, many patients had muscle pain, although the degree was mild; other adverse events were similar to those reported in the BNT162b2 phase 3 trial, 7 and the frequency of antipyretic use was also low (Figures 1 and S1). As a local reaction, pain was observed in many patients, but redness and swelling were less frequent, and local pain tended to be less common than reported in the BNT162b2 phase 3 trial (Figures 2 and S2). 7 Although the frequency of fever after vaccination tends to be high, it is considered that the SARS-CoV-2 mRNA vaccine can be safely administered to patients with lung cancer. Previous studies have reported that the side effects of the vaccine are low in patients undergoing cancer treatment. 13 However, we obtained different results in our study. This may be because of differences in the methods of data collection or racial differences. In our study, men tended to have fever more frequently than women. A study by Menni et al. 
evaluated the safety and efficacy of the BNT162b2 and ChAdOx1 COVID-19 vaccines in the UK and reported that women tended to have more adverse events than men. 19 The higher frequency of fever in men may be related to the fact that men had a higher frequency of a smoking history than women, and smokers tend to have more fever than non-smokers. Furthermore, because of the higher smoking rate among men than among women, the proportion of men receiving targeted therapy was low, and the proportion receiving chemotherapy was high. Although adverse events tended to be more frequent in the chemotherapy group than in the non-chemotherapy group, the number of cases remains small at this time, and more data need to be collected to determine whether chemotherapy treatment truly increases adverse events related to the vaccine. Patients who received chemotherapy tended to have more fever than those without chemotherapy, and none of them developed febrile neutropenia. The reason patients who received chemotherapy tended to have higher temperatures is unclear. Drug-induced fever from anticancer drugs may have affected patients who were receiving chemotherapy. Currently, we are accumulating further cases, and we plan to verify our findings after the number of cases increases. Our study has several limitations. First, our study included patients who had been using corticosteroids or antipyretic analgesics for treating lung cancer and its complications prior to vaccination (Table S2). In these patients, previously used drugs may have helped reduce adverse events. In addition, some patients had symptoms owing to lung cancer before vaccination, such as fatigue or pain (Figures S3 and S4), and it is possible that the adverse events of the vaccine were overestimated in such patients. Second, our study could not determine whether the antibody titer increased; thus, the effectiveness of the vaccine could not be evaluated. Finally, since the main purpose of this report was to evaluate the short-term safety of the COVID-19 vaccine in those receiving lung cancer treatment, we could not examine its long-term safety. Serious complications (stroke and myocardial infarction) and adverse events of immunotherapy have been the focus of recent attention 20,21 and will be examined in a larger number of cases during an observation period. At this time, our study supports the safety of vaccination in patients with lung cancer.
[Figure 2: Local reactions reported after the second vaccine injection by treatment regimen. ICI, immune checkpoint inhibitor; Chemo, chemotherapy.]
In conclusion, vaccine-related adverse events tend to increase in patients with lung cancer undergoing cytotoxic chemotherapy. However, serious adverse events in the short term are comparable to those observed in healthy individuals. This cohort study provided data on the safety of using the mRNA vaccine for SARS-CoV-2 in patients with advanced lung cancer who are receiving anticancer therapies such as chemotherapy, ICIs, and targeted therapy. ACKNOWLEDGMENTS The authors thank all the investigators at the participating institutions. All authors contributed to the coordination of this study at each hospital. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
2021-12-30T06:22:20.298Z
2021-12-28T00:00:00.000
{ "year": 2021, "sha1": "535c1d7235869176246ee5d5481b018144b6b9ad", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/1759-7714.14281", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f625073de11b13d980df8e24b2b33621c4acc191", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235281769
pes2o/s2orc
v3-fos-license
Methane (CH4) Emission Flux Estimation in SRI (System of Rice Intensification) Method Rice Cultivation Using Different Varieties and Fertilization The agricultural sector is one of the contributors to greenhouse gas emissions, especially CO2, CH4 and N2O. In Indonesian agricultural practice, rice paddy fields are cultivated two to three times a year. Conventional rice planting methods using water inundation and chemical fertilizer can increase greenhouse gas emissions; one of those gases is methane (CH4). Methane is formed through the anaerobic decomposition of organic materials in the rhizosphere with the help of methanogenic microbes. The release of methane can be influenced by several factors, among them the nature of the soil, the irrigation system, fertilization, and the varieties used. The emission-reduction strategies applied in this research are the use of varieties considered low in methane emission, such as Ciherang and IR64, rice cultivation by the SRI (System of Rice Intensification) method with intermittent irrigation, and fertilization management. The intermittent irrigation is regulated using IoT-based (Internet of Things) sensor technology for water level adjustment. The aim of this research is to analyze the methane flux produced by SRI-method rice paddy cultivation with different varieties and kinds of fertilization. Introduction Agriculture is one of the main contributors to greenhouse gas (GHG) emissions, which comprise carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O). One of the sources of greenhouse gas emissions is rice field cultivation. On the other hand, the agricultural sector, especially rice paddy, is one of the main commodities in achieving Indonesia's food independence. This raises concern that it can increase greenhouse gas emissions and thereby contribute to climate change. A mitigation strategy to decrease greenhouse gas emissions is needed; one option is implementing SRI (System of Rice Intensification), which is known to be an environmentally friendly and water-saving rice cultivation method. The land used for rice fields contributes 11% of CH4 globally [1]. The rice plant has an important role in releasing methane (CH4) to the atmosphere because it can enhance the methanogenesis process through the release of root exudates that are rich in carbon sources. The root of the rice plant is able to exchange oxygen, and 60-90% of the CH4 produced in the rhizosphere layer is transported to the atmosphere through the plant. This research uses two varieties commonly used by rice paddy farmers in Indonesia that are known to be prone to climate change, namely Ciherang and IR64 [3]. Another factor taken into consideration is the use of organic and inorganic fertilizer: organic fertilization uses a base of mature manure and Local Microorganisms (MOL), whereas inorganic fertilization adds nitrogen fertilizer containing sulfur (ZA) or slow-release fertilizer. Irrigation during rice cultivation uses an intermittent irrigation system. This is in line with the strategy of decreasing CH4 emissions by combining low-emission technology components without decreasing paddy production. The aim of this research is to analyze the methane (CH4) flux produced by rice paddy cultivation using the SRI method with different varieties and fertilizations during the vegetative phase. 
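The IoT-based water-level control mentioned above can be pictured as a simple threshold loop. The sketch below is purely illustrative: the sensor read function and valve interface are hypothetical placeholders, and the target levels follow the regime described in the methods (0 cm normally, 2 cm inundation on days 10, 20, 30 and 40 after planting).

```python
FLOOD_DAYS = {10, 20, 30, 40}   # days after planting with 2 cm inundation
NORMAL_CM, FLOOD_CM = 0.0, 2.0  # target water levels relative to the soil surface

def control_step(day_after_planting, read_water_level_cm, valve):
    """One polling cycle of a threshold controller for intermittent irrigation.

    read_water_level_cm: callable returning the sensed level (hypothetical sensor).
    valve: object with open()/close() methods (hypothetical inlet valve).
    """
    target = FLOOD_CM if day_after_planting in FLOOD_DAYS else NORMAL_CM
    if read_water_level_cm() < target:
        valve.open()    # refill toward the target level
    else:
        valve.close()   # hold, or let the field drain naturally
```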
Site and Experiment Description This research was conducted at Kebun Tridharma, Faculty of Agriculture, Universitas Gadjah Mada, starting from October 2020. Paddy cultivation used fiberglass boxes measuring 150 x 100 x 40 cm as growing media. Seedlings were raised in a small container (besek) filled with soil and manure for 10 days and then moved to the growing media (soil and soil mixed with mature manure) in a fiberglass box that had previously been inundated for 2 days. Seedlings were planted at a spacing of 30 cm x 30 cm in a single-seedling system (one plant per hole). Irrigation was regulated based on water level: the level was kept at 0 cm, with inundation to 2 cm above the soil surface on the 10th, 20th, 30th and 40th days after planting. The research design follows a physical model called a nested design (Rancangan Tersarang). This design consists of 2 factors, fertilization and variety. The fertilization factor (A) varies freely, while the variety factor (B) has no independent variation; its variation is nested within factor A. Each factor consists of 2 levels, and factor B is replicated 3 times.
[Layout of the nested-design plots with the Ciherang (C) and IR64 (IR) varieties omitted.]
Climate and Soil Weather measurements comprise rainfall, solar radiation, temperature and humidity using a Davis weather station equipped with a Decagon EM50 data logger. Soil data comprise soil temperature, moisture and electrical conductivity (EC) measured with a 5-TE sensor. CH4 emissions Methane gas sampling was conducted during the vegetative phase. The chamber is equipped with a thermometer, an injector, a fan, and a plastic bag placed inside. The fan is turned on and the gas samples are taken using the injector, which is equipped with a rubber tube and a 3-way faucet connected to the chamber. The gas sample is then put inside a plain vacuum tube and sealed with nail polish. Gas sampling is conducted three times, from minute 0 until minute 20. The flux is computed as

F = (dC/dt) x Hch x (mW/mV) x (273.2 / (273.2 + T)),   (1)

where F is the CH4 flux (mg/m2/minute), dC/dt is the change of CH4 concentration per collection time (ppm/minute), Hch is the height of the chamber (cm), mW is the molecular weight of CH4 (g), mV is the molecular volume of CH4 (22.41 liters at standard temperature and pressure), and T is the temperature during sampling (°C). Plant height and tillers The height and tiller number of the rice plants during the vegetative phase were measured every 5 days. Results Based on Table 4, the climate data at the study location show that the average air temperature is 27.79°C, with 78.5% humidity and 169 W/m2 irradiation. Meanwhile, the total rainfall during the research period in the vegetative phase was 350 mm. The measurements of soil temperature, soil moisture, electrical conductivity, and soil pH for the two treatments, P1 and P2, in Table 5 and Table 6 show that the average values are not much different. These measurements are intended to reveal the relationship between the microenvironment and the CH4 gas flux, since soil temperature plays a significant role in the activity of soil microorganisms. It is known that most methanogenic bacteria form at an optimum temperature between 30 and 40°C. The results of this study indicate an average soil temperature for both the P1 and P2 treatments in the 
range of 29°C. Besides, maximum CH4 formation occurs at a soil pH of 6.9-7.1; at a pH below 5.75 or above 8.75, CH4 formation is inhibited [5]. The pH measurements in this study from the two treatments had an average value of 7.4. The CH4 gas data obtained in the vegetative phase show that the CH4 emission flux under the manure + ZA + SP36 + KCl fertilization treatment with the Ciherang variety is 601.80 mg/m2/day. Meanwhile, under the same fertilization treatment with the IR64 variety, the flux value was negative. Likewise, under the manure + MOL fertilization treatment, the CH4 emission flux was negative for both varieties, Ciherang and IR64. This can occur because of the absorption of CH4 gas in rice fields. The structure of rice fields, consisting of aerobic and anaerobic areas, provides an environment for both the production and the oxidation of CH4 gas through the activity of microorganisms. The absorption mechanism occurs through the oxidation of CH4 in aerobic areas with the help of methane-oxidizing bacteria (MOB), which use CH4 as a source of carbon and energy for growth. The CH4 oxidation reaction requires O2 only at its beginning; it produces CO2 that is released into the atmosphere. Plant height The plant height measurements against time (plant age) are plotted in Figure 2, which shows that the height of the rice plants in the vegetative phase increases rapidly with time. Comparing the two treatments, the rice under the manure + ZA + SP36 + KCl fertilization, for both the Ciherang and IR64 varieties, responded well and grew taller than under the manure + MOL fertilization. Tillers The tiller counts against time (plant age) are plotted in Figure 3, which shows that the number of tillers increases rapidly as the rice ages. The manure + ZA + SP36 + KCl fertilization treatment gave a good response for both varieties, with a higher number of tillers than the manure + MOL treatment: 23 stems for Ciherang and 26 stems for IR64. Meanwhile, the number of tillers in the manure + MOL treatment was 14 stems for Ciherang and 15 stems for IR64. Conclusion In the vegetative phase, the CH4 emission flux under the manure + ZA + SP36 + KCl fertilization treatment with the Ciherang variety was 601.80 mg/m2/day. Meanwhile, under the same fertilization treatment with the IR64 variety, the flux value was negative. Likewise, under the manure + MOL fertilization treatment, the CH4 emission flux was negative for both varieties, Ciherang and IR64.
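As a worked illustration of the closed-chamber flux formula in Eq. (1), the sketch below fits the slope dC/dt from the three samples taken between minute 0 and minute 20 and applies the height, molar-mass and temperature factors. The concentration values are invented for illustration, not measured data.

```python
import numpy as np

def ch4_flux(times_min, conc_ppm, chamber_height_cm, temp_c,
             mw_ch4=16.0, mv_ch4=22.41):
    """CH4 flux (mg m^-2 min^-1) from closed-chamber samples, per Eq. (1).

    dC/dt (ppm/min) is the least-squares slope of concentration vs. time;
    the chamber height converts the mixing-ratio change into a volume per
    unit area, mW/mV converts volume to mass, and the last factor corrects
    the molar volume from STP to the chamber temperature.
    """
    dc_dt = np.polyfit(times_min, conc_ppm, 1)[0]   # ppm per minute
    height_m = chamber_height_cm / 100.0            # cm -> m
    return dc_dt * height_m * (mw_ch4 / mv_ch4) * (273.2 / (273.2 + temp_c))

# Illustrative sampling at minutes 0, 10 and 20 (invented concentrations)
flux = ch4_flux([0, 10, 20], [2.10, 2.45, 2.78], chamber_height_cm=40, temp_c=29)
print(f"{flux * 1440:.1f} mg CH4 m^-2 day^-1")      # per-minute flux scaled to per-day
```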
2021-06-02T23:51:40.411Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "4ada55bf26220cfa5b674836f94c3918348fbc6a", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/757/1/012001", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "4ada55bf26220cfa5b674836f94c3918348fbc6a", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics" ] }
119145571
pes2o/s2orc
v3-fos-license
A new cubic nonconforming finite element on rectangles A new nonconforming rectangle element with cubic convergence for the energy norm is introduced. The degrees of freedom (DOFs) are defined by the twelve values at the three Gauss points on each of the four edges. Due to the existence of one linear relation among the above DOFs, it turns out that the number of independent DOFs is eleven. The nonconforming element consists of $P_3\oplus \mathrm{span}\{x^3y-xy^3\}$. We count the corresponding dimensions for Dirichlet and Neumann boundary value problems of second-order elliptic problems. We also present optimal error estimates in both the broken energy and $L_2(\Omega)$ norms. Finally, numerical examples match our theoretical results very well. Introduction It has been well known that the standard lowest-order conforming elements can produce numerical locking and checker-board solutions in the approximation of solid and fluid mechanics problems: see for instance [3,4,6,9,15] and the references therein. An efficient approach to deal with this is to employ the nonconforming element method, which has made a great impact on the development of finite element methods [1, 2, 5, 7, 8, 10-14, 16, 19, 20, 22, 23, 25, 27, 28]. To approximate PDEs using a nonconforming element of order k, one needs to impose the continuity of the moments up to order k − 1 of the functions across all the interfaces of neighboring elements. This condition is known as the patch test [17]. In two dimensions, the patch test is equivalent to continuity at the k Gauss points located on each interface. This implies that a P_k-nonconforming element, if it exists, must be continuous at the k Gauss points on each edge. These points (completed with internal points for k ≥ 3) can be used to define local Lagrange degrees of freedom (DOFs) on the simplex if k is odd, but this construction is not possible if k is even since there exists a lower-degree polynomial vanishing at all the Gauss points [14]. Thus suitable bubble functions are often employed to enrich the finite element space. Until now, triangular nonconforming elements have been well studied in the literature (see [10,14]), but the analysis of their quadrilateral counterparts is less complete. Even though triangular or tetrahedral meshes are popular, in some cases where the geometry of the problem has a quadrilateral nature, one wishes to use quadrilateral or hexahedral meshes with proper elements. For even k, the same trouble exists; that is, there also exists a lower-degree polynomial vanishing at all the Gauss points. Again, some bubble functions are added to the finite element space. 2 The P_3 nonconforming element on rectangular mesh Denoting by P_k(R) the space of polynomials of degree ≤ k on the reference rectangle R, set P = P_3(R) ⊕ span{x^3y − xy^3}. The space P will be our nonconforming finite element space on R with appropriate degrees of freedom that will be defined soon. Before proceeding, we notice the following simple result. Lemma 1. For every polynomial p of degree ≤ 3 on [−1, 1], the following relationship holds: 3p(−1) − 5p(−√(3/5)) + 4p(0) − 5p(√(3/5)) + 3p(1) = 0. Proof. Let p ∈ P_3(R). Then any fourth-order difference quotient of p(x) vanishes. Writing out the fourth-order difference quotient of p(x) at the points x_1 = −1, x_2 = −√(3/5), x_3 = 0, x_4 = √(3/5), and x_5 = 1, a simple computation derives the desired result.
[Figure 1: The reference rectangle R with the twelve Gauss points g_1, …, g_12 on its edges.]
As an immediate consequence of Lemma 1, we have the following proposition. Proposition 1. With the Gauss points g_1, …, g_12 numbered consecutively around the boundary of R, three per edge, the following relationship holds for all ϕ ∈ P:

5ϕ(g_1) − 4ϕ(g_2) + 5ϕ(g_3) − 5ϕ(g_4) + 4ϕ(g_5) − 5ϕ(g_6) + 5ϕ(g_7) − 4ϕ(g_8) + 5ϕ(g_9) − 5ϕ(g_10) + 4ϕ(g_11) − 5ϕ(g_12) = 0. (2)

Proof. Notice that ϕ is a polynomial of degree no greater than 3 on any edge of R. Applying Lemma 1 on the four edges with alternating signs cancels the corner values, and the result follows immediately from Lemma 1. 
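The single relation (2), and the unisolvency asserted in the sequel, can be checked numerically: evaluating a basis of P = P_3 ⊕ span{x^3y − xy^3} at the twelve Gauss points gives a 12 × 11 matrix whose rank should be 11, with a one-dimensional left null space spanned by the coefficients of (2). A minimal numpy check (the point numbering below is one consecutive counterclockwise choice):

```python
import numpy as np
from itertools import product

g = np.sqrt(3.0 / 5.0)                        # 3-point Gauss abscissae on [-1, 1]
pts = ([(t, -1.0) for t in (-g, 0.0, g)] +    # bottom edge (g1..g3)
       [(1.0, t) for t in (-g, 0.0, g)] +     # right edge  (g4..g6)
       [(t, 1.0) for t in (g, 0.0, -g)] +     # top edge    (g7..g9)
       [(-1.0, t) for t in (g, 0.0, -g)])     # left edge   (g10..g12)

# Basis of P = P3 + span{x^3 y - x y^3} (dimension 10 + 1 = 11)
mono = [(i, j) for i, j in product(range(4), repeat=2) if i + j <= 3]

def basis(x, y):
    return [x**i * y**j for i, j in mono] + [x**3 * y - x * y**3]

A = np.array([basis(x, y) for x, y in pts])   # 12 x 11 evaluation matrix
print("rank =", np.linalg.matrix_rank(A))     # expect 11: exactly one relation

# The left null vector spans the single linear relation of Eq. (2).
u, s, vt = np.linalg.svd(A)
rel = u[:, -1]
print(np.round(rel / np.abs(rel).max(), 3))   # alternating (5, -4, 5) pattern, up to scale
```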
Due to Proposition 1 and Lemma 2, we have the unisolvency result. Denote by M_j, j = 1, 2, 3, 4, the four midpoints of the four edges. Obviously, M_j is one of the three Gauss points on the jth edge, and hence the other two Gauss points on the jth edge can be denoted by M_j^+ and M_j^−. For example, in Fig. 1, the three Gauss points on the jth edge are g_{3j−2}, g_{3j−1} and g_{3j} (with the identification g_0 = g_12). We then have the following result, whose proof is similar to that of Lemma 2.3 in [21] and will be omitted. We are now in a position to state the definition of the P_3-nonconforming element on a rectangle as follows. • P_R = P is the finite element space, and • Σ_R = {ϕ(g_j), j = 1, 2, · · · , 12, such that Eq. (2) holds for all ϕ ∈ P_R} is the set of degrees of freedom. The following patch test lemma is immediate since any p ∈ P is of degree ≤ 3 on an edge. Remark 1. For actual computation, the local finite element can alternatively be given in an equivalent explicit form. Remark 2. In theory, if x^3y − xy^3 is replaced by x^3y, xy^3 or x^3y + xy^3, unisolvency as in Proposition 2 still holds. But the former two choices lack symmetry. As for the third choice x^3y + xy^3, it turns out to be numerically unstable during the computation of the bases, since the corresponding coefficient matrices are nearly ill-conditioned (their determinants are close to zero). Let us proceed to define our P_3-nonconforming element space. Assume that Ω ⊂ R^2 is a parallelogram domain with boundary Γ. Let (T_h)_{h>0} be a regular family of triangulations of Ω into rectangles, whose collection will be designated by T_h. For a given triangulation T_h of Ω, let N_V, N_E, N_R, and N_G denote the numbers of vertices, edges, rectangles, and Gauss points, respectively (so that N_G = 3N_E). In particular, let N_V^i, N_E^i, and N_G^i denote the numbers of interior vertices, edges, and Gauss points of R ∈ T_h, respectively. For a function f defined in Ω, denote by f_j its restriction to R_j, and by E_jk the interface between R_j and R_k. Similarly, g_jk, k = 1, 2, 3, will mean the Gauss points on Γ_j = ∂R_j ∩ ∂Ω, and g_jkl, l = 1, 2, 3, will be the Gauss points on E_jk. We are now in a position to define the nonconforming finite element spaces NC^h and NC^h_0 in the usual way: piecewise functions belonging to P on each rectangle, continuous at the interface Gauss points, and, for NC^h_0, vanishing at the boundary Gauss points. For each vertex V_j ∈ V_h, denote by E_h(j) the set of all edges E ∈ E_h with one of their endpoints being V_j, and by G_h(j) the set of the Gauss points nearest to V_j among the three Gauss points on E, for all E ∈ E_h(j). For M_j ∈ E_j, if g_i and g_k are the two other Gauss points and i < k, we also denote these two Gauss points by M_j^+ and M_j^−, respectively. We then define three types of functions in NC^h, which serve as global bases for the nonconforming finite element spaces. Definition 2. The first type of functions is associated with vertices: define ϕ_{V_j} ∈ NC^h, j = 1, 2, · · · , N_V, through its values at the Gauss points. Next define the second type of functions, associated with edges E_j ∈ E: define ϕ_{E_j}^+ ∈ NC^h, j = 1, 2, · · · , N_E, through its Gauss-point values. The last type of functions is also associated with edges E_j ∈ E: define ϕ_{E_j}^− ∈ NC^h, j = 1, 2, · · · , N_E, through its Gauss-point values. Similarly, the global basis functions for NC^h_0 are defined as those for NC^h excluding the ϕ_{V_j}'s associated with boundary vertices and the ϕ_{E_j}^+, ϕ_{E_j}^−'s associated with boundary edges. Now let us present the dimensions of the nonconforming finite element spaces. Let ϕ_{V_j}, j = 1, 2, · · · , N_V, and ϕ_{E_j}^+, ϕ_{E_j}^−, j = 1, 2, · · · , N_E, be the functions defined in Definition 2. 
By omitting any one of these functions, each of the resulting sets forms a set of global basis functions for NC^h. The proofs of the above theorems are quite similar to those in the literature [21], and are thus omitted. Here we remark that our finite element space is a little different from those in the literature [18,21]: those finite element spaces are nothing but conforming element spaces enriched by suitable bubble function spaces. Thus the idea of Fortin and Soulie [14] is not applicable here. 3 The interpolation operator and convergence analysis In this section we define an interpolation operator and analyze convergence for the Dirichlet problem. The case of the Neumann problem is quite similar, and the results will be stated briefly with their details omitted. Denote by (·, ·) the L^2(Ω) inner product; (f, v) will also be understood as the duality pairing between H^{−1}(Ω) and H^1_0(Ω), which is an extension of the L^2(Ω) inner product. By ‖·‖_k and |·|_k we adopt the standard notations for the norm and seminorm of the Sobolev space H^k(Ω). Consider then the following Dirichlet problem:

−∇·(α∇u) + βu = f in Ω, u = 0 on Γ, (7)

with coefficients α(x), β(x), x ∈ Ω, and f ∈ H^1(Ω). We will assume that the coefficients are sufficiently smooth and that the elliptic problem (7) has an H^4(Ω)-regular solution. The weak problem is then given as usual: find u ∈ H^1_0(Ω) such that

a(u, v) = (f, v) for all v ∈ H^1_0(Ω), (8)

where a : H^1_0(Ω) × H^1_0(Ω) → R is the bilinear form defined by a(u, v) = (α∇u, ∇v) + (βu, v) for all u, v ∈ H^1_0(Ω). Our nonconforming method for Problem (7) reads as follows: find u_h ∈ NC^h_0 such that

a_h(u_h, v_h) = (f, v_h) for all v_h ∈ NC^h_0, (9)

where a_h(u, v) = Σ_{R∈T_h} a_R(u, v), with a_R being the restriction of a to R. For a given rectangle R ∈ T_h, define the local interpolation operator Π_R through the degrees of freedom at the Gauss points on the edges of R. The global interpolation operator Π_h : W^{1,p}(Ω) ∩ H^1_0(Ω) → NC^h_0 is then defined through the local interpolation operator Π_R by Π_h|_R = Π_R for all R ∈ T_h. Since Π_h preserves P_3 on every R ∈ T_h, it follows from the Bramble-Hilbert Lemma that

‖v − Π_h v‖_0 + h|v − Π_h v|_{1,h} ≤ C h^4 |v|_4 for all v ∈ H^4(Ω). (10)

Denote by P_2(E) the set of quadratic polynomials on the face E, and define the associated L^2(E)-projection onto P_2(E); here v_j = v|_{R_j} and ν_j^T is the transpose of the unit outward normal to R_j. Then the standard polynomial approximation result for this projection holds, denoted (11) below. Since w_j − w_k has zero values at the Gauss points on E_jk for all w ∈ NC^h_0, and the 3-point Gauss quadrature is exact on polynomials of degree no more than 5, the following useful orthogonality holds (see also Lemma 4). Lemma 5. If u ∈ H^{3/2}(Ω), then ∫_{E_jk} q (w_j − w_k) ds = 0 for every q ∈ P_2(E_jk) and all w ∈ NC^h_0; in particular, the projected normal fluxes of u satisfy this orthogonality on each interface. Denote the broken energy norm ‖·‖_h on NC^h + H^1(Ω) by ‖ϕ‖_h = a_h(ϕ, ϕ)^{1/2} for all ϕ ∈ NC^h + H^1(Ω). We now consider the energy-norm error estimate, and first recall the following Strang lemma [26]. Lemma 6. Let u ∈ H^1(Ω) and u_h ∈ NC^h_0 be the solutions of Eq. (8) and Eq. (9), respectively. Then

‖u − u_h‖_h ≤ C ( inf_{v_h ∈ NC^h_0} ‖u − v_h‖_h + sup_{0 ≠ w_h ∈ NC^h_0} |a_h(u, w_h) − (f, w_h)| / ‖w_h‖_h ). (12)

Assume sufficient regularity such that u ∈ H^4(Ω). Due to (10), the first term on the right side of (12) is bounded by Ch^3|u|_4. In order to bound the second term on the right side of (12), which denotes the consistency error, integrate by parts elementwise; in the resulting interface terms, let m_j ∈ Q_2(R_j) be a biquadratic polynomial on R_j. In particular, if m_j is chosen as the Q_2 projection of w on R_j, then, due to the trace theorem, (10) and (11), the consistency error is also bounded by Ch^3|u|_4. By collecting the above results, we get the following energy-norm error estimate. Theorem 3. Let u ∈ H^4(Ω) ∩ H^1_0(Ω) and u_h ∈ NC^h_0 be the solutions of (8) and (9), respectively. 
Then we have ‖u − u_h‖_h ≤ C h^3 |u|_4. By the standard Aubin-Nitsche duality argument, the L^2(Ω)-error estimate ‖u − u_h‖_0 ≤ C h^4 |u|_4 can be easily obtained, but the corresponding proof is omitted. We state the result in the following theorem. Instead of the Dirichlet problem, if the corresponding Neumann problem is considered, the weak problem (8) is replaced by finding u ∈ H^1(Ω) such that

a_n(u, v) = (f, v) + ⟨g, v⟩ for all v ∈ H^1(Ω), (15)

where a_n is the bilinear form defined by a_n(u, v) = (α∇u, ∇v) + (βu, v) + ⟨γu, v⟩ for all u, v ∈ H^1(Ω), and ⟨·, ·⟩ is the pairing between H^{−1/2}(Γ) and H^{1/2}(Γ). Thus, the nonconforming method for Problem (15) reads as follows: find u_h ∈ NC^h such that a_{n,h}(u_h, v_h) = (f, v_h) + ⟨g, v_h⟩ for all v_h ∈ NC^h. Then all the arguments given above for the Dirichlet case hold analogously, and hence one obtains the following result. 4 Numerical examples In this section we illustrate two numerical examples. First, consider the following Dirichlet problem: −∆u = f in Ω, u = 0 on ∂Ω, where Ω = (0, 1)^2. The source term f is calculated from the exact solution u(x, y) = sin(2πx) sin(2πy)(x^3 − y^4 + x^2y^3). Table 1 shows the numerical results, where the error reduction ratios in the L^2(Ω) and broken energy norms are optimal. Table 1: The Dirichlet problem: the apparent L^2- and broken energy-norm errors and their reduction ratios on the quadrilateral meshes. Next, turn to the corresponding Neumann problem on the same domain Ω = (0, 1)^2. The source terms f and g are generated from the exact solution u(x, y) = cos(2πx) cos(2πy)(x^3 − y^4 + x^2y^3). Again, Table 2 shows the numerical results, where the error reduction ratios in the L^2(Ω) and broken energy norms are optimal. Table 2: The Neumann problem: the apparent L^2- and broken energy-norm errors and their reduction ratios on the quadrilateral meshes.
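The reduction ratios in Tables 1 and 2 translate into observed convergence orders; a small helper for computing them from errors on successively refined meshes is sketched below. The error values are placeholders chosen to match the expected rates (O(h^4) in L^2 and O(h^3) in the broken energy norm), not the paper's actual numbers.

```python
import numpy as np

def observed_orders(h, err):
    """Observed order p from err ~ C h^p on successively refined meshes."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Placeholder errors consistent with the expected rates
h    = [1/4, 1/8, 1/16, 1/32]
e_l2 = [3.2e-4, 2.0e-5, 1.25e-6, 7.8e-8]   # ~O(h^4)
e_en = [6.4e-3, 8.0e-4, 1.0e-4, 1.25e-5]   # ~O(h^3)
print("L2 orders:    ", observed_orders(h, e_l2))   # approx 4, 4, 4
print("energy orders:", observed_orders(h, e_en))   # approx 3, 3, 3
```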
2019-04-11T21:23:38.332Z
2013-01-29T00:00:00.000
{ "year": 2013, "sha1": "d29fb889ec0d43bcde6b6eab6fb76d184e01a3d2", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1301.6862", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "ffe83dc8054a6446af184a19d75ea1767f040371", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
199353364
pes2o/s2orc
v3-fos-license
THE KEY COMPONENTS IN FORMING A MODERN INNOVATION BASIS OF COMPETITIVENESS IN THE CONTEXT OF GLOBALIZATION TRANSFORMATIONS Skrypnyk N. Y., Sydorenko K. V. The Key Components in Forming a Modern Innovation Basis of Competitiveness in the Context of Globalization Transformations The article is aimed at substantiating the most important modern components of the innovation basis of competitiveness in the conditions of globalization transformations. The features of managing the global competitiveness of countries are considered; the key components of the formation of a modern innovation basis of competitiveness in conditions of globalization transformations are defined; the role and importance of the infrastructure of international airports in the system of socio-economic well-being and in increasing global competitiveness are substantiated; and instruments for increasing the competitive advantages of airport infrastructure are defined. The research applied both general scientific and special methods of scientific cognition: the descriptive-analytical method, the method of analysis and synthesis, and methods of quantitative and qualitative comparisons. The information basis of the article comprises monographic research and periodical publications by domestic and foreign scholars-economists, materials and analytical reports of the World Economic Forum, the International Institute for Management Development, the OECD, the International Civil Aviation Organization, and the International Air Transport Association, as well as aggregated data provided by the Airports Council International. The main provisions of the article will help to accelerate the solution of issues of identifying modern innovative factors, as well as substantiating the conditions and directions of increasing the global competitiveness of countries. The novelty of the research is the development and substantiation of methodical provisions for the scientific support of the global competitiveness of countries in the world economy. This work provides an opportunity for further research into the management of the international competitiveness of airport infrastructure and the enhancement of the global competitiveness of countries. A great contribution to the development of the conceptual foundations of innovation development and competitiveness in the context of globalization transformations has been made by a number of scientists, among them V. Budkin; their research is concentrated around solving problems of the formation of innovative infrastructure as an endogenous factor of long-term socio-economic growth and of ensuring the global competitiveness of countries. At the same time, despite the existence of deep and thorough research by domestic and foreign scientists over the years, the issue of raising the level of international competitiveness of countries is not sufficiently developed and needs further elaboration. In particular, mechanisms for the development of competitive advantages and the formation of a modern innovation basis of competitiveness in the context of globalization transformations need further specification. The purpose of the research is to identify and substantiate the key components of the formation of a modern innovation basis of competitiveness in the context of globalization transformations. The methodological basis of the research comprises both general scientific and special methods of scientific knowledge: descriptive analysis, analysis and synthesis, and methods of quantitative and qualitative comparisons. 
The information base of the article comprises monographic research and periodical publications of Ukrainian and foreign economists, materials and analytical reports of the World Economic Forum, the International Institute for Management Development, the OECD, the International Civil Aviation Organization, and the International Air Transport Association, as well as aggregated data of the Airports Council International. The intensification of integration processes, as well as the processes of globalization of economic relations and the diversification and internationalization of various types of economic activity, is accompanied by increased mobility of the population, technologies, and information, and opens up significant opportunities for the socio-economic development of countries and the formation of long-term competitive advantages. The competition for the right to acquire new knowledge and disseminate innovations, for the ability to control and regulate the resource base and information and financial flows, and for share and leadership in world markets is intensifying [43]. The role and importance of air transport, which is characterized by the ability to move quickly over long distances in a relatively short period of time, are increasing. Air transport is becoming the basis for the global networking of society. Therefore, in order to create and sustain competitive advantages in the context of globalization transformations, countries need to address the issue of expanding and modernizing airport infrastructure capable of producing innovative, highly competitive services which are in demand in the global aviation market. Thus, according to the most popular indicators of the competitiveness of countries, calculated by the International Institute for Management Development [33] and the World Economic Forum [47], one of the key elements of the socio-economic development of countries in the world economy is a proper level of infrastructure. This is due to the fact that infrastructure networks reduce the effect of distance, help integrate markets by providing the necessary connections, and facilitate international trade. In particular, a reliable, innovative airport infrastructure is one of the key factors in enhancing countries' capacity for real economic growth both in the short and the long term. As can be seen from Fig. 1, the basis for compiling the World Competitiveness Rankings by the International Institute for Management Development is the calculation of factors and subfactors according to which countries manage their competitive environment. The key factors of competitiveness are economic performance, government efficiency, business efficiency, and infrastructure. In turn, according to the structure of the Global Competitiveness Index (GCI), developed by the experts of the World Economic Forum, high-quality and modern air transport infrastructure is considered one of the factors determining the global competitiveness of countries, without which a national economy cannot be transformed from factor-oriented to efficiency-oriented, not to mention innovation-oriented. As can be seen from Tbl. 1, the GCI consists of twelve pillars, which are grouped into four categories: enabling environment, human capital, markets, and the innovation ecosystem. In turn, the pillars are broken down into indicators of a lower level. Every year the World Economic Forum publishes The Global Competitiveness Report and ranks countries by the general index and its individual components. 
A country's rank in terms of a separate pillar is an expression of its competitive advantage (or disadvantage) relative to other countries in the global dimension and may indicate obstacles to its development and the need to initiate and implement reforms to increase the productivity of the economy [15, p. 146]. As can be seen from Tbl. 2, countries with innovative potential and competitive air transport infrastructure are also leading in terms of the Global Competitiveness Index. This dependence is explained by the fact that the innovative infrastructure of international airports plays an important role in the formation of dynamic global supply chains and the establishment of effective logistics schemes for business and provides air links between markets, while the national and world economies take advantage of airports as integral elements of economic development [16, p. 23].

Table 1: Place of air transport infrastructure quality in the system of competitiveness (in accordance with the methodology of the Global Competitiveness Index) [44, p. 131].

Airport infrastructure affects the position of airports in the world air transport market and is also a necessary (but not sufficient) precondition for development. One researcher analyzes the processes of deregulation, commercialization and privatization in the EU airport sector and concludes that "the indivisibility of physical infrastructure (of airports) with increased use causes significant economies of scale"; the scientist considers expanding the physical infrastructure of airports to be a driver of sustainable development and socio-economic growth. A number of scientific studies [28; 29; 38; 48] examine the relationship between global airport infrastructure and its surrounding regional space. The researchers have developed a concept of aeroregionalism, the essence of which is that the imperatives of globalization and neo-liberalization intensify the regionalization of the airfield space, and the development of large-scale airport infrastructure affects the competitiveness of the surrounding regional space. Several scholars [31; 32; 39; 42] also note the interdependence between the development of a country's airport infrastructure network and socio-economic welfare. The conceptual approach proposed by these scientists is that airport infrastructure is seen as an economic generator, combining local and international markets and linking regions on a global scale.

The level of airport infrastructure development directly and indirectly influences the economic growth of regions (Fig. 2). The direct economic impact on the local economy is manifested as a result of the economic activity of airports and of enterprises whose existence depends on the viability of the airport. This includes airport security, immigration services, airport administration, air traffic control, meteorological services, aircraft maintenance and repairs, ground handling, aircraft refuelling, commercial services at the airport, concessionaires and tenants of airport infrastructure, airlines, etc. The indirect impact is due to money coming into the economy from individuals, enterprises and organizations located outside the airport but related to the airport business.
For example, companies that are included in the supply chain of the airport sector and are generators of indirect employment include:
- suppliers of aviation fuel;
- construction companies that build infrastructure facilities at the airport;
- manufacturers of goods sold in the retail halls of the airport;
- a wide range of activities in the business-services sector (legal, accounting, information, etc.).

The induced impact is the result of the multiplicative effect of the direct and indirect influences [35, p. 9]. Regional growth here follows the growth-accounting relation G_Y = A + b G_K + c G_L, where G_Y is real GDP growth, G_K is fixed-assets growth, G_L is employment growth, A is the growth of aggregate factor productivity, and b and c are the shares of capital and labor in income. It is important to consider the induced impact only in a local or regional context, since some of these effects will influence other adjacent regions. The economic catalytic impact results from the influence of air transport infrastructure development on a country's economy that is not directly related to the development of the air transport industry itself.
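The growth-accounting relation above can be illustrated with a worked example; the input figures below are purely hypothetical and serve only to show how the capital, labor and productivity contributions combine.

\[
G_Y = A + b\,G_K + c\,G_L = 1.5\% + 0.35 \times 4\% + 0.65 \times 1\% = 3.55\%,
\]

assuming productivity growth A = 1.5%, fixed-assets growth G_K = 4%, employment growth G_L = 1%, and factor income shares b = 0.35 and c = 0.65.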
Thus, innovative airport infrastructure is an important tool of national socio-economic prosperity and a key component of the modern innovation basis of competitiveness in the context of globalization transformations (Fig. 3; the figure links access to the global air transport market, growth in air transportation volumes, increases in population mobility and in import and export opportunities, growth in the share of the world air transport market, sustainable socio-economic growth, and an increase in global competitiveness).

However, the acquisition and maintenance of long-term competitive advantages by countries requires producing and implementing knowledge and new technologies in a favourable institutional environment and investment climate. Furthermore, it should be noted that airport infrastructure is characterized by significant capital intensity, which, despite considerable positive economic consequences, reduces the ability of many governments to meet the financing needs of large-scale projects, causing a significant gap in infrastructure in the global airport sector.

Historically, the most widespread practice of financing airport infrastructure development is government sources (grants, subsidies). Government sources include funds provided directly by governments as well as by government financial institutions, including export promotion agencies. The funding can be provided either by the national government alone or with the involvement of one or more foreign governments. Moreover, one or more international government institutions or agencies (bilateral institutions, banks, development funds) may participate [27]. Any airport infrastructure development project that will eventually benefit the national economy as a whole may serve as an incentive to provide such assistance for various reasons, including the desire to promote trade and cultural relations among countries. Developing countries can benefit from special programs adopted by some governments to promote economic and social growth in different regions of the world. The most important sources of foreign aid to developing countries in the field of airport infrastructure financing are international banks and funds created to facilitate the implementation of projects aimed at the development of national economies. The most famous banks and funds are the International Bank for Reconstruction and Development and its subsidiaries, the International Development Association and the International Finance Corporation (although the purpose of the latter is to encourage development by providing loans to the private sector), as well as various regional banks and development funds.

In recent years, dependence on state funding has declined significantly with the continued increase in the number of autonomous structures that operate airport infrastructure. Investment in the airport sector through public-private partnerships is becoming more widespread. The involvement of the private sector has become a further step towards the liberalization of property rights and the management of international airports and is part of the overall globalization process of the world economy. Thus, under the current conditions of dynamic development and the strengthening networking of the world economy, one of the tools for increasing the global competitiveness of countries is the management of airport facilities and the development of a competitive innovative infrastructure capable of meeting the needs and requirements of the global air transport market.

CONCLUSIONS

A thorough analysis of the concepts associated with enhancing competitiveness has made it possible to conclude that infrastructure development, as an investment in the future, is one of the conditions for ensuring the social welfare and economic growth of countries. Moreover, the infrastructure of international airports is one of the main structural elements in forming a modern innovation basis of competitiveness of national economies in the context of globalization transformations. Since airport infrastructure is characterized by considerable capital intensity, the government of each country chooses an individually optimal way for its development and financing, with consideration of a number of factors, including local geographical conditions, the political situation, structural changes in the world economy, the nature of foreign economic relations, and the level of technological development.
Gravity in View of the Theory of Orbiting Binary Stars

In this paper, we investigate the orbiting of two stars of equal mass. We consider two models: one with a circular orbit and one with two elliptical orbits having a common center of mass located in a common focal point. In the case of the circular orbit, we apply the notion of the instantaneous complex frequency. The paper is illustrated with numerous formulas, derivations and a discussion of results.

Introduction

Three years ago, the author presented a paper describing gravitational forces as the result of an anisotropic energy exchange between baryonic matter and the quantum vacuum [1]. Here, we try to show that the theory of the circulation of double stars around a common center of mass yields arguments in favor of the above theory. Our goal can be achieved by investigating the orbiting of two stars of equal mass. We present two such models: the first with a circular orbit and the second with two elliptical orbits having a common center of mass located in a common focal point. The mathematical descriptions of these models are derived by the author, and certainly only the methods of derivation are new; most of the results belong to existing knowledge. As regards the circular orbit, we apply the notion of the instantaneous complex frequency.

A Model of a Double Star of Equal Mass Orbiting on a Circular Orbit

We obtain time-independent relations between the angular velocity ω0 and the radius ρ0. The orbital tangential velocity is v0 = ω0 ρ0. Therefore, the kinetic energy of the system is Ek = m v0^2, and the potential energy is negative, its value equalling twice the kinetic energy. Therefore, the total energy of the system is negative and time-independent. Several authors have derived formulae for the power of the gravitational waves emitted by the system of Figure 1; one such expression is given in the book of Gasperini [3].

Instantaneous Complex Frequency Description of the Circular Binary System

We have to show that, due to the emission of gravitational waves, the trajectory of the stars is not circular, since the instantaneous radius decreases in time while the angular velocity and the tangential velocity increase in time. The stars orbit along spirals (Figure 2). A convenient method of describing this phenomenon is the notion of the instantaneous complex frequency. Each star is represented by a phasor with instantaneous radius ρ(t); α(t) is called the instantaneous radial frequency and ω(t) the instantaneous angular frequency. The line connecting the mass centers passes through the origin (0, 0) and defines the direction of the gravitational force. Differently, the direction of the centrifugal force Fc is defined by the curvature radius ρc. The geometry of the addition of the two forces is presented in Figure 3 (drawn with a large rate of inspiral). In the case α(t) = −α0, the angle γ is given by the formulae in Appendix 2, and this condition defines the inspiral orbit. However, there is also a tangential force (see Figure 3); for the quasi-circular orbit, this tangential force is extremely small. It induces an acceleration of the mass m, and in consequence the instantaneous angular frequency increases in time, with T(t) a decreasing instantaneous period. The energies of the system also increase in time.
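For orientation, the two ingredients of this section can be written in their standard textbook forms; these are generic reference expressions in conventional notation, not necessarily the exact Equations (2) and (3) of this paper. The phasor of a star with instantaneous complex frequency s(t) = α(t) + jω(t) is

\[
z(t) = \rho_0 \exp\!\left(\int_0^t \left[\alpha(\tau) + j\,\omega(\tau)\right] d\tau\right),
\qquad
\rho(t) = \rho_0 \exp\!\left(\int_0^t \alpha(\tau)\, d\tau\right),
\]

so a constant α(t) = −α0 yields the logarithmic inspiral ρ(t) = ρ0 e^{−α0 t}. The standard quadrupole-formula power radiated by a circular binary with masses m1, m2 and separation r is

\[
P = \frac{32}{5}\,\frac{G^4}{c^5}\,\frac{(m_1 m_2)^2 (m_1 + m_2)}{r^5},
\]

which for m1 = m2 = m and r = 2ρ0 reduces to P = (2/5) G^4 m^5 / (c^5 ρ0^5).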
The instantaneous tangential velocity of the stars, the curvature radius (see Appendix 2) and the instantaneous kinetic energy of both stars follow from these definitions; the total energy of the system remains negative. The power of the gravitational waves emitted by the system, given by Equation (3), is P = 6.523698 × 10^23 W.

1) Estimation of the value of the radial frequency α0. The decrease of the radius of the circular model during one period T0 produces an increase of the negative value of the potential energy. Assuming, arbitrarily, that this increase should equal the energy emitted as gravitational waves during one period, we obtain the value of α0.

2) The increase of the angular frequency (or decrease of the period T0) during the inspiral. Taylor and Hulse measured that the period of the PSR system decreases by 76.5 μs per year [6]. Inserting the above results into Equation (2), we derive the corresponding decrease for the circular system. It is more than one order of magnitude smaller than the 76.5 μs per year of the PSR system. Therefore, the circular model cannot be applied to describe the properties of the PSR elliptical system.

3) The increase of the negative value of the potential energy. The rate of increase of the negative potential energy at the moment t = 0 is exactly the value defined by Equation (3), which represents the power of the emitted gravitational waves. This result validates the correctness of Equation (17) defining α0 and of Equation (19) defining the delay per year. The negative sign of this power is applied in the book of Gasperini [3] with no comment. We found that the authors of reference [9] derived a formula with a negative sign of the gravitational "Poynting vector", also with no comment.

4) The inspiral time. The main goal of this paper is to validate the explanation of the nature of gravity presented in [1]. From the instantaneous radius, the decrease of the radius during a year follows.

The Theoretical Model of the Binary Pulsar PSR B1913+16

The PSR system differs considerably from the circular system described above. The two stars orbit along elliptical orbits (see Figure 4) around a common center of mass located in the focus. We again consider equal masses. The local velocity of the stars decreases when orbiting from periastron to apastron (deceleration), while the mean velocity (in terms of φ) is the same for both directions. The local tangential velocity is alternatively defined through the local curvature radius ρc (see Figure 5). The local velocity is shown in Figure 6: the maximum value equals 448.172 km/s and the minimum 106.287 km/s (compare with Appendix 1). Note that in this model the maxima and minima of the velocity are located near the periastron and apastron (not exactly at these locations). The mean value in terms of φ is defined as in Equation (38).

Final Conclusions

o Both orbits, the circular and the elliptical, are defined by two forces of opposite directions: the centrifugal force and a term of the gravitational force (see Figure 3). It is logical to assume that both forces have the same physical explanation: the anisotropic energy exchange described in reference [1]. Here, both anisotropies of radiation cancel. The other part of the gravitational force, responsible for tangential acceleration or deceleration, is the result of the tangential anisotropy of radiation.

o The cancellation of the two forces shows that gravity and inertia have the same physical origin. They are recoil forces of radiation. The radiation pattern should be symmetric w.r.t. the tangent of the orbit.
Differently, the pattern is asymmetric w.r.t. the line perpendicular to the orbit, resulting in a recoil force of radiation. In a word, it is logical to assume that all of the forces described here are recoil forces of radiation. The radiation pattern is symmetric w.r.t. the tangent of the orbit (cancellation of gravitation and inertia) and asymmetric w.r.t. the line perpendicular to the orbit, i.e., the direction of the curvature radius.

Appendix 1

This paper is illustrated by the properties of the binary pulsar PSR 1913+16, a binary system of two neutron stars discovered and measured over many years by Taylor and Hulse [7] [8]. This great achievement of radio astronomy, and also of time-frequency metrology, was awarded the Nobel Prize in physics in 1993. Let us repeat here the data compiled by Robert Johnston [8].

Appendix 2: The Derivation of the Curvature Radius of the Inspiral Orbit

Let us define the inspiral orbit by the equation
Determinants of High-tech Export in CEE and CIS Countries

Introduction

The problems of the industrial structure of the Russian economy are well known. Oil and gas exports largely contribute to federal budget revenues: in the planning documents of the Ministry of Finance of the Russian Federation, their indicative value is 61% for 2020 and 62% for 2021 (Ministry of Finance of the Russian Federation, 2018, "Main directions of budgetary, tax and customs tariff policy for 2019 and for the planning period of 2020 and 2021", https://www.minfin.ru/ru/document/?id_4=123006-proekt_osnovnykh_napravlenii_byudzhetnoi_nalogovoi_i_tamozhenno-tarifnoi_politiki_na_2019_god_i_na_planovyi_period_2020_i_2021_godov, accessed 18.07.2020). During the last 30 years of reforms, the raw-material specialisation of Russian exports has only increased: the share of mineral products in total exports was 53.8% in 2000 and 63.3% in 2019. Due to this raw-material specialisation, the Russian economy is unstable and critically dependent on external factors. At the same time, diversification of the industrial structure by strengthening the sectors with high added value would help solve the problems associated with the sustainability of economic growth, the creation of highly productive jobs, and the increase of the population's well-being and quality of life.

After the disintegration of the bloc of socialist countries in Central and Eastern Europe and the collapse of the USSR, a large number of countries faced similar problems related to the legacy of the planned economy: hyperinflation, decreases in the real incomes of the population, and state budget deficits. At the same time, many countries in this group have already undergone a successful transformation. Table 1 presents individual indicators of Russia's development in comparison with three Eastern European countries (Hungary, the Czech Republic, Poland) and three Baltic republics of the former USSR (Latvia, Lithuania, Estonia). In this group of countries, Russia demonstrated the lowest growth in GDP per capita in absolute terms for the period 1995-2019 ($6,083, compared, for example, with $13,532 in Estonia, $13,109 in Lithuania and $10,847 in Poland). Russia is the only country in the group whose share of mid- and high-tech exports in total exports of industrial products decreased, from 37.6% to 28.6%, over the period 1995-2017 (this indicator increased from 45.6% to 75.8% in Hungary, from 45.5% to 69.9% in the Czech Republic, and from 36.7% to 55.2% in Poland). From 1995 to 2018, the value of exports in nominal terms increased only 5.4 times in Russia, compared to 12.4 times in Latvia, 12.3 times in Lithuania and 11.4 times in Poland.

Nowadays, an increase in the exports of high-tech industries is one of the main priorities of countries involved in the system of world economic relations. Identification of the factors contributing to the development of these industries is important both for understanding the mechanisms triggering growth in particular national industries and for developing measures aimed at improving the efficiency of state industrial policies.

This study analyses the factors affecting exports of high-tech industries in Central and Eastern Europe (CEE) and the Commonwealth of Independent States (CIS); a complete list of countries can be seen in Table 3.
Literature Review

This article examines external factors (from the point of view of the exporting firm) that affect export flows in a country. Currently, there is a significant number of studies aimed at identifying the determinants of a country's exports. Most of them analyse exports using three indicators: growth in nominal export volumes, export diversification, and export sophistication. This section is devoted to structuring the known factors influencing export activity.

The direction and volume of inter-country trade flows (exports and imports) in the world economy can be explained using the gravity approach. It stipulates that the volume of exports from country j to country i is directly proportional to the GDP of countries i and j and inversely proportional to the distance between them (the canonical functional form is written out at the end of this subsection). Tinbergen is considered a pioneer in the application of the gravity approach to the analysis of international trade [1]. Later, the works of Anderson & van Wincoop [2], Egger [3], Silva & Tenreyro [4] and others made important contributions to the development of methodological approaches to the empirical assessment of the gravity model of international trade. In addition to gravitational factors, the volume of exports from country j to country i also depends on the extent of the differences between them: Kónya [5] argued the importance of cultural differences, Gómez-Herrera [6] of linguistic differences, and Francois and Manchin [7] of institutional ones.

According to the Heckscher-Ohlin neoclassical model of international trade, the structure and volume of a country's exports are determined by its resource endowment. Based on data from the Organisation for Economic Co-operation and Development (OECD) countries, Gustavsson, Hansson & Lundberg [8] revealed a positive relationship between the volume of exports in an industry and the volume of a country's resources used by that industry, while the relationship between the volume of exports and the prices of the resources used is negative. Naudé & Gries [9] obtained similar results when analysing exports in South African regions. Accumulated physical capital is an important export driver in Marconi and Rolli's work on emerging markets [10], as well as in Thangamani's work on Sri Lankan export data [11].

Many studies examine the impact of workforce quality on a country's export activities. Based on data on 79 countries from 1962 to 2000, Agosin, Alvarez & Bravo-Ortega [12] showed that countries with higher-quality human capital have a more diversified export structure. An International Monetary Fund (IMF) study found that the complexity of exports correlates with the level of education in the group of low-income countries [13]. Analysing the structure of exports through the revealed comparative advantages of 16 developing countries, Marconi & Rolli [10] showed that low wages determine comparative advantages in both low-tech and high-tech sectors, while low quality of human capital negatively affects the value of exports in the countries under consideration.
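For reference, the gravity relationship described at the start of this review is conventionally written (a generic textbook form, not the exact specification of any study cited above) as

\[
X_{ij} = A\,\frac{Y_i^{\beta_1}\,Y_j^{\beta_2}}{D_{ij}^{\beta_3}}
\quad\Longleftrightarrow\quad
\ln X_{ij} = \ln A + \beta_1 \ln Y_i + \beta_2 \ln Y_j - \beta_3 \ln D_{ij} + \varepsilon_{ij},
\]

where X_ij is the export flow from country j to country i, Y_i and Y_j are the two countries' GDPs, D_ij is the distance between them, and the log-linear form on the right is the one typically estimated.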
A country's foreign economic indicators are significant export factors. The openness of the economy has a positive effect on the volume of exports (see, for example, the works of Parteka & Tamberi [14] and Mau [15]). Based on data on 175 countries from 1980 to 2007, Iwamoto & Nabeshima [16] showed that foreign direct investment (FDI) leads to export diversification. Studying the export activity of Indian firms, Banga found that FDI from American firms leads to the diversification of Indian firms' exports, while FDI from Japanese firms does not have a significant impact on Indian exports [17]. The transfer of technology from foreign to national companies contributes to the growth of the latter's exports, as shown in the work of Gorg & Greenway [18]. Based on data on the economy of Sri Lanka, Thangamani [11] concluded that the involvement of a foreign partner gives a company access to the technology needed for entering foreign markets.

A country's participation in global value chains is directly related to the characteristics of its export activity. The quality of a country's export flows is affected by the volume of component imports [19]. On the one hand, imported components of better quality contribute to an increase in the quality of exported products [20]. On the other hand, the high quality of imported components contributes to the transfer of technologies into the country, also leading to an increase in exports [21]. Participation of countries in regional economic associations is another foreign economic factor stimulating exports. Using the example of the cheese industry in the European Union (EU), Balogh & Jambor [22] show that EU membership is a significant factor in stimulating exports. Similar findings are included in the OECD report for the EU meat industry [23].

A country's macroeconomic indicators have a significant impact on export volumes. The high cost of credit, as well as the volatility and overvaluation of the exchange rate, negatively affect export diversification [12]. The resilience of the national financial system contributes to growing export sophistication, according to the IMF report [13]. Sulaiman and Saad described a positive relationship between economic growth and exports in Malaysia [24].

Innovation activity is the most important determinant of exports. Gustavsson, Hansson & Lundberg [8], as well as Muratoğlu & Muratoğlu [25], demonstrated that research and development (R&D) spending stimulates competitive advantages and export growth in OECD countries. Analysing the activities of industrial enterprises in Sweden and Finland, Blomstrom & Kokko [26] showed that state policy in the field of stimulating innovation activity contributes to the growth of a country's high-tech exports.

Infrastructure investment decreases production costs for firms and increases exports [13].

Many studies show the significance of the development of a country's institutions as a factor in export activities. Analysing companies of different ages and sizes in emerging economies, LiPuma, Newbert & Doh [27] found that the development of institutions has a significant influence on all groups of companies. Nguyen & Wu [28] showed that the export volumes of Vietnamese firms are positively correlated with the quality of public administration in the country. Li, Vertinsky & Zhang [29] obtained similar results based on data on the export activity of Chinese firms.
Summarising the results of the literature review on the research topic, we can identify the following groups of factors that affect the quantitative and qualitative indicators of a country's exports: gravitational, resource-related, foreign economic, macroeconomic, innovative, infrastructural and institutional. At the same time, the determinants of export growth are not universal for different groups of countries or for different types of industries. Studies of export activity factors in post-communist countries and in high-tech sectors are sporadic. This article contributes to filling the identified gap.

Hypotheses

One empirical study cannot provide an assessment of all factors affecting the volume of high-tech exports in a country. Based on the literature review, as well as on the data available for analysis, our study examines the influence of four groups of factors: foreign economic, macroeconomic, resource-related, and innovative. We formulated the following research hypotheses.

H1. A rise in resource prices in a country leads to an increase in high-tech exports. Investments in high-tech products in the modern economy are largely determined by the limitation of resources, primarily labour and raw materials. As a rule, countries with cheap labour and high availability of raw materials specialise in labour-intensive upstream production. It is believed that high resource prices stimulate the development of high-tech industries in a country and, consequently, the export of high-tech products.

H2. An increase in foreign economic activity in a country stimulates the growth of high-tech exports. In the modern economy, the structure of world production is fragmented across countries within global value chains (GVCs). As a rule, high-tech products are traded within GVCs, since multinational companies (MNCs) are a source of demand in production chains, and MNCs require high-tech components for their production. It is assumed that an increase in a country's foreign economic activity, manifested in increases in foreign trade and foreign direct investment, stimulates the interaction of national companies with external partners. In turn, this encourages national companies to make various improvements to their products, increasing their competitiveness in international markets and, ultimately, causing an increase in the export of high-tech products.

H3. Macroeconomic stability in a country has a positive effect on the volume of exports of high-tech products. A stable macroeconomic position is a factor that reduces investment risks. Two of the most important macroeconomic indicators are considered, namely inflation and unemployment. Firstly, a decrease in inflation leads to a decrease in interest rates and, consequently, to an increase in the number of implemented investment projects. Secondly, an increase in the unemployment rate, which may reflect the availability of labour in the country, negatively affects industrial development due to the deterioration in the quality of life of the population.
H4. A reduction of the tax rate in a country leads to an increase in the volume of high-tech exports. The tax rate, by influencing the cost of conducting business in the country, has a significant impact on implemented projects. We can predict a negative relationship between the tax rate and exports in high-tech industries for two reasons. Firstly, a decrease in the tax rate leads to an increase in both the financial efficiency and the number of investment projects implemented in the country. Secondly, since companies face additional costs when exporting (compared to supplying the domestic market), tax cuts will lead to the implementation of export projects that were unprofitable under high taxes.

H5. The level of innovation activity and the quality of human capital have a statistically significant positive impact on exports in high-tech industries. R&D investment is an essential condition for the development of enterprises in the modern economy. At the same time, enterprises in high-tech industries invest significantly more in R&D than enterprises in the traditional sector [23]. In this regard, it can be assumed that the level of innovation activity in the economy is a stimulating factor in the development of the high-tech sector. The quality of human capital is also an important condition for creating new technologies, because a lack of highly qualified specialists can restrict economic development. At the same time, a higher level of human capital stimulates entrepreneurial activity in the country, which should have a positive effect on activity in high-tech sectors of the economy. The share of urban population in the total population of the country is another crucial indicator. Firstly, the urban environment generates various agglomeration effects associated with the growth of business activity and enterprise productivity [30]. Secondly, rural residents are usually not involved in high-tech production.

Data and Descriptive Statistics

The Balassa Index [31], based on the concept of comparative advantage, is used in this study to assess the value of a country's exports at the industry level. The index is calculated as the proportion of exports of a certain product in the total volume of a country's exports divided by the proportion of the same product in the world export volume:

\[
RCA_{ij} = \frac{X_{ij} / X_{it}}{X_{wj} / X_{wt}},
\]

where X_ij and X_wj are the amount of revenue from the export of product j for country i and the world export of product j, respectively, and X_it and X_wt are the total export volume of the selected country and of the world in general, respectively.

An index value greater than one indicates the presence of a comparative advantage in the given industry in the considered country, while a value less than one shows its absence. In this study, we proceed from the assumption that the presence of a comparative advantage in an industry shapes its competitiveness in the global market, which implies a higher level of exports compared to the same industries in other countries.

Data in the public domain presented on the UNCTAD statistical portal are the basis for calculating the index of comparative advantage of high-tech industries. We examined data on 27 countries of Eastern Europe and the CIS covering the period from 1995 to 2018 (1995 was chosen as the starting year because data for an earlier period are not available to the authors for most of the indicators used in the work).
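As a minimal sketch of how the Balassa index above can be computed from a country-by-product export matrix, in R (function, object and column names here are illustrative, not the authors' actual code):

# Balassa revealed comparative advantage:
# RCA[i, j] = (X[i, j] / X[i, .]) / (X[., j] / X[., .])
balassa <- function(X) {
  country_share <- X / rowSums(X)       # X_ij / X_it
  world_share   <- colSums(X) / sum(X)  # X_wj / X_wt
  sweep(country_share, 2, world_share, "/")
}

# Toy example: three countries, two product groups
X <- matrix(c(10, 90,
              50, 50,
              80, 20),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("A", "B", "C"),
                            c("hightech", "raw")))
round(balassa(X), 2)  # values > 1 indicate a comparative advantage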
In accordance with the Third Revision of the Standard International Trade Classification (SITC Rev. 3), 73 groups of mid- and high-tech products were classified as products of high-tech industries. (SITC Rev. 3 is a basic classification recommended by the UN for use by all countries for data on exports and imports; it ensures the comparability of foreign trade statistics and is used when reporting trade data by international organisations such as UNCTAD and the World Bank. The grouping of technology products was introduced by Lall [32]; the principle of selection of product groups takes into account the materials used in their production, the stage of production, the purpose of the product, the importance of the product in international trade, and technological changes; see https://unstats.un.org/unsd/classifications/Family/Detail/14.) The group of mid-tech production includes complex technologies with moderately high investments in research and development, which require advanced skills and long training; this group comprises automotive, chemical and mechanical-engineering products. The group of high-tech production includes products with rapidly changing advanced technologies and high investments in research and development, namely electronics and electrical engineering. In the database, the value of the comparative advantage index varies from 0 (the country does not export a specific product) to 34.94 (Ukraine's exports of the product group "Railway transport and railway materials" in 2011), with an average value of 0.712.

To simplify the analysis of the available data, it is advisable to have a single indicator characterising the identified comparative advantage of high-tech industries for each country for each year (for brevity, we denote it the CAHTI index). We constructed this index using principal component analysis, one of the main advantages of which is the minimal loss of information during the reduction of data dimensionality. (Principal component analysis is a multivariate statistical technique used to reduce the dimensionality of a feature space while minimising the loss of useful information. It was proposed by K. Pearson in 1901 and developed in detail in the 1930s by the American economist and statistician H. Hotelling. Mathematically, it is an orthogonal linear transformation that maps the data from the original feature space to a new space of lower dimension: the first axis of the new coordinate system is constructed so that the variance of the data along it is maximal, the second axis is orthogonal to the first with the maximal remaining variance, and so on; the first axis is called the first principal component [33]. The method is widely used in economic research employing regression-correlation analysis: in constructing socio-economic indices [34], analysing the factors of economic growth [35], assessing risks in financial markets [36], assessing the quality of institutions at the country level [37], and many others.)

The CAHTI index lies in the range from −2.46 to 4.70 with an average value of zero. (Since the first step of principal component analysis is the centring of the variables, a geometric transfer of the coordinate centre to the centre of the observation cloud, the generated CAHTI index takes both negative and positive values, even though the Balassa index itself is non-negative.) The generated index explains approximately 94% of the variation in the original set of variables on the basis of which it was calculated. Consequently, a negative value of the CAHTI index indicates a relatively weak level of development of high-tech exports in a country, while a positive value indicates development of this sector above the average level for all countries.

Table 2 shows the dependent and explanatory variables used to test the formulated hypotheses. For better comparability of the indicators included in the model, most of the variables measured in dollars (except wages) are recalculated per capita. In order to have regressors comparable in dimension, all variables, except those measured in fractions, were logarithmised.
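A minimal sketch of the dimensionality-reduction step described above, assuming a hypothetical matrix rca of Balassa indices with one row per country-year observation and one column per product group (again illustrative, not the authors' code):

# First principal component of the Balassa-index matrix used as a
# single comparative-advantage score (a CAHTI-style index).
pca <- prcomp(rca, center = TRUE, scale. = FALSE)

# Proportion of total variance captured by the first component
summary(pca)$importance["Proportion of Variance", "PC1"]

# Scores on the first principal component; because the variables are
# centred, the scores are spread around zero, so negative values mark
# below-average high-tech export advantage.
cahti <- pca$x[, 1]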
Table 3 provides information on the change in this index in the examined countries. The initial value is the average value of the index for the first two years of the observation period; the final value is the average value for the last two years. At the beginning of the period, 18 out of 27 countries had a negative value of the CAHTI index, which was probably caused by the non-market nature of their economies. Slovenia (2.85), the Czech Republic (2.62), Hungary (1.62) and Slovakia (1.30) were initially characterised by high values of the CAHTI index; the rest of the countries in the group with a positive index value at the beginning of the period had indicators only slightly above zero. At the end of the examined period, the countries were clearly divided into two groups. All Baltic countries and all Eastern European countries, except for Slovenia and Albania, significantly increased their exports of high-tech industries, which led to an increase in their CAHTI index. At the same time, there was a drop in the CAHTI index in most of the CIS countries. The exceptions are Belarus (+1.64), where the value of this index increased, as well as Russia (+0.04), Georgia (+0.1) and Tajikistan (−0.02), where the value of the index remained virtually the same.

To identify patterns in the dynamics of the index of comparative advantages of high-tech industries, we analysed the most important development indicators in the three countries with the greatest increase (Romania, Hungary, Poland; hereinafter group 1) and the three with the largest decrease (Kazakhstan, Azerbaijan, Armenia; hereinafter group 2) in the CAHTI index over the considered period (see Table 4).
According to Hypothesis 1, the development of high-tech industries in a country is influenced by resource prices. In the reviewed period, all countries experienced a multiple increase in wages (a minimum increase of 2.1 times in Azerbaijan and a maximum of 4.9 times in Armenia). At the same time, the level of wages in countries with an increase in the CAHTI index ($1,136 on average at the end of the reviewed period) significantly exceeds this indicator in countries with a decrease in the CAHTI index ($321, respectively). Table 4 shows the gasoline price as one of the indicators of resource prices in the studied countries; it is higher in Romania, Hungary and Poland than in Azerbaijan, Armenia and Kazakhstan.

Additionally, it is necessary to analyse the macroeconomic indicators of the countries. The average growth rate of nominal GDP in group 2 exceeded that in group 1. The inflation rate decreased over the reviewed period in all countries: currently, the inflation rate in Romania (2.9%), Hungary (2.6%) and Poland (1.9%) is roughly the same as in Armenia (1.8%), but much lower than in Kazakhstan (10.9%) and Azerbaijan (7.4%). The unemployment rate in Romania (4.5%), Hungary (3.9%) and Poland (4.4%) is at the same level as in Azerbaijan (5.0%) and Kazakhstan (4.9%), but significantly lower than in Armenia (16.7%). Tax rates in group 1 are much higher than in group 2.

After conducting a similar analysis of the indicators of imports, R&D, patent activity, human capital development and urbanisation (the values of which are presented in Table 4), we drew the following conclusions regarding the studied countries. Firstly, more expensive resources in a country (in particular, higher labour costs and higher fuel costs) stimulate the development of high-tech sectors of the economy. Secondly, higher levels of indicators of foreign economic activity, in particular import and FDI volumes, are associated with a higher value of the CAHTI index in the country. Thirdly, higher taxes have a negative impact on the volume of exports of high-tech industries. Finally, there is no obvious positive relationship between the exports of high-tech industries and the level of R&D, patents, or the quality of human capital.

For clarity, Figure 1 shows a linear approximation of the CAHTI index against four indicators in the studied countries, namely wages, openness, tax rate and the human development index. A negative slope of the graph means that the considered indicator increases as the CAHTI index decreases; a positive slope indicates that the indicator increases with the CAHTI index. The extreme points of each segment correspond to the minimum and maximum values of the considered indicator over the observation period. If a segment lies to the right (left) of the others on the graph, then the range of values of the considered indicator in that country is higher (lower) than in the other countries.

Multivariate Model and Methods of Analysis

We believe that the groups of factors considered in this study have a simultaneous effect on the comparative advantage of a country's high-tech industries. Since various indicators of a country's development mutually influence each other, and the factors simultaneously influence the dependent variable, it is necessary to build a multivariate regression model to obtain reliable research results.

The CAHTI index is the dependent variable in this model, and the variables presented in Table 2 are the regressors. For the regression analysis, we use the method of least squares (OLS). Standard Wald, Breusch-Pagan and Hausman tests show the preference for using the OLS method with fixed effects. The model with fixed effects (within-estimators for each group in the model) was chosen due to the need to take into account unobservable and observable characteristics which do not change over time and which can influence the variables used in the model. An analysis of the pairwise correlations of the explanatory variables showed the presence of a potential multicollinearity problem; in this regard, the specifications presented below did not simultaneously include variables with a pairwise correlation coefficient greater than 0.5, and a variance inflation factor (VIF) test for multicollinearity was conducted for each specification. Testing also showed the presence of heteroscedasticity in all specifications of the model; therefore, all the standard errors of the regressor coefficients given below were obtained with a correction for heteroscedasticity.
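The published figure sources mention Stata; as an illustrative equivalent, the estimation strategy described above could be sketched in R with the plm package (data-frame, variable and object names below are placeholders, not the authors' code):

library(plm)     # panel data estimators
library(lmtest)  # coeftest()
library(car)     # vif()

# `panel` is a hypothetical data frame with one row per country-year
form <- cahti ~ log_wage + log_imports + openness +
  unemployment + inflation + tax_rate + hdi

# Fixed-effects ("within") estimator
fe <- plm(form, data = panel, index = c("country", "year"),
          model = "within")

# Coefficients with heteroscedasticity-robust standard errors
coeftest(fe, vcov = vcovHC(fe, type = "HC1"))

# Hausman test against the random-effects estimator
re <- plm(form, data = panel, index = c("country", "year"),
          model = "random")
phtest(fe, re)

# Multicollinearity check (VIF) on the pooled specification
vif(lm(form, data = panel))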
Results and Discussion

Table 5 presents the results of assessing various specifications of the model. The table shows that the within coefficient of determination varies in the range 0.26-0.36, indicating a sufficiently high quality of the model for panel regression.

Hypothesis 1, that an increase in resource prices in the economy stimulates the export of high-tech industries, is confirmed: the coefficients of the wage and gasoline-price variables have a positive sign at a high level of significance.

All indicators related to the country's foreign trade (the volume of imports and the openness indicator) have a statistically significant positive effect on the CAHTI index. At the same time, there is no statistically significant positive effect of FDI on the dynamics of the CAHTI index; these results indicate that FDI is directed to sectors not related to the export of high-tech products. Thus, the obtained data correspond to Hypothesis 2, that an increase in foreign trade openness stimulates high-tech exports.

Among the considered macroeconomic factors, the unemployment rate was found to be statistically significant: a negative relationship was discovered between the unemployment rate and the volume of exports of high-tech products. At the same time, there is no statistically significant effect of the inflation rate on the dependent variable. Thus, Hypothesis 3 was partially confirmed, in terms of the influence of the unemployment rate on the dependent variable. Regarding Hypothesis 4, the tax rate was found to affect the CAHTI index negatively: an increase in the tax burden corresponds to a decrease in the export of high-tech products.

Table 5 shows that the number of patents per capita and the share of R&D expenditures in GDP are statistically insignificant in the model, as is the share of urban population. At the same time, a higher level of human capital development is associated with a higher level of the CAHTI index. In this regard, we can partially confirm Hypothesis 5.

Based on the obtained results, we formulated the following recommendations for state policy to stimulate the export of high-tech products in the Russian economy.
1. Resource prices. An increase in the price of energy resources encourages national companies to invest in the modernisation of their production facilities, look for new market niches and offer products with high added value. An increase in the technological effectiveness of national industries results in an increase in exports of products with high added value. Russian state tariff policy should stimulate further liberalisation of energy markets and of the services of natural monopolies, as well as introduce market incentives for the implementation of energy-saving measures.

Even though wages are one of the cost items for companies, manufacturers of high-tech products are not under significant pressure from rising wages in the economy [38]. On the contrary, an increase in the population's income leads to a long-term change in the ratio of low-skilled to high-skilled jobs in the economy, a reduction in the share of upstream sectors and an increase in the share of high-tech sectors. In this regard, state policy aimed at increasing the income of the population will contribute to the growth of high-tech exports in the long term.

2. Foreign economic factors. The growth of the economy's foreign trade openness leads to an increase in its high-tech exports. Since 70% of world trade currently occurs within global value chains, the efforts of government authorities should be aimed at eliminating the one-sided participation of Russia in GVCs (as a supplier of raw materials and upstream products). The growth of a country's exports depends on the volume of imports: new technologies are imported into the country along with necessary components, stimulating the growth of the technological effectiveness of production [39].

Effective tools for integrating the country into global value chains include the involvement of MNCs in the economy, the development of export-related industries, and the development of platform solutions for business. At the level of individual companies, the following tools can be used: direct support of exporting firms, stimulation of the modernisation of national companies and of their receptiveness to innovation, and support for the creation of new industries focused on global demand. The support of fast-growing companies, as well as of small and medium-sized exporting companies, plays an important role in changing the structure of exports of a developing economy [40].

3. Tax incentives. A lower tax rate is associated with higher exports of high-tech products. The system of tax incentives in the Russian economy is quite diverse and includes the following: depreciation of capital investments; value-added tax (VAT) exemption when performing R&D at the expense of state budgets and special funds; VAT exemption on the import of technological equipment that has no equivalents in Russia; a multiplier for R&D expenditures, etc. At the same time, the share of enterprises using such support is small, and the support is often biased towards companies that are close to the public sector [41]. We consider it advisable to expand tax and other financial incentives among national exporters. The toxicity of state support measures is another well-known problem requiring a solution; it includes the increasing complexity of reporting, the attraction of supervisory authorities' attention, an increasing number of inspections, etc.
4. Human capital. The quality of human capital is the most important factor in the creation of products and technologies that are competitive on international markets. Moreover, an increase in the quality of human capital is a factor in the diversification of exports in resource-dependent countries [42]. Increasing the quality of human capital in Russia requires long-term investment, primarily by the state, in such sectors as education, healthcare and the social support system. Technological transformation and improvement of the quality of services in these sectors are the basis of the policy for increasing the quality of human capital in Russia. In addition, state policy should focus on increasing the mobility of the population and changing the existing labour-market model [43]. The role of enterprises in improving the qualifications of workers also appears significant; in this regard, it is expedient to create a system of tax and financial instruments that encourage enterprises to implement training programmes for their employees [44].

Conclusion

The study analyses the factors influencing the level of exports of high-tech industries in 27 countries of Eastern Europe and the CIS from 1995 to 2018. The Balassa index was used to measure the volume of exports at the industry level: it is calculated as the proportion of exports of a specific industry in the total exports of a country divided by the proportion of exports of this industry in the structure of exports of all countries in the world. The analysis uses data on 73 industries divided into the following groups: automotive, chemical, mechanical engineering, electronics, and electrical engineering. To conduct the regression analysis, the data on all industries in each country in a specific year were combined into one indicator of the comparative advantage of high-tech industries using the principal component method.

A descriptive analysis of the CAHTI indicator showed that three countries leading in terms of high-tech exports, the Czech Republic, Hungary and Slovenia, stood out at the beginning of the observation period in the considered group of countries. However, at the end of the observation period, all countries were clearly divided into two groups: the countries of Central and Eastern Europe, as well as the Baltic countries, showed a significant increase in the CAHTI index, while the other countries of the former USSR experienced a decrease or stagnation of this indicator.

The article analyses the impact of resource-related, foreign economic, macroeconomic and innovative factors on the change in the CAHTI index.
The panel regression method with fixed effects allowed us to obtain the following results. First, high resource prices in a country (in particular, the level of wages and gasoline prices) stimulate production in high-tech industries. Second, the growth of a country's foreign trade openness stimulates the export of goods with high added value. Third, there is no positive relationship between foreign direct investment and the CAHTI index, due to the fact that FDI in these countries is not directed to high-tech industries. Fourth, a statistically significant negative relationship is revealed between the unemployment rate and output in high-tech industries. Fifth, an increase in the tax burden puts significant pressure on the production volume of goods with high added value. Sixth, there is a statistically significant impact of the quality of human capital on the increase of the CAHTI index in the analysed countries.

We believe that if public authorities use the results obtained in this study to develop programmes and measures aimed at supporting high-tech exports, the effectiveness of state policy in this field will increase.

The results presented in this study can serve as a basis for further research. Firstly, alternative, complex indicators can be used as variables reflecting the volume of exports of high-tech industries in the economy. Secondly, a different sample of countries can be used to identify the determinants of export growth in high-tech industries, allowing conclusions to be drawn for countries with other similarities. Thirdly, other industries and other groups of factors can be used to examine the determinants of exports. Finally, an analysis of export factors at the industry level in the regions of a country or a group of countries can be conducted.

Fig. 1. Linear approximation of the values of wages, openness, tax rate and human development index in individual countries. Source: authors' calculations in the Stata package.

Table 3. Dynamics of the index of comparative advantage of high-tech industries in the CIS and Eastern Europe. Source: compiled by the authors.

Table 4. Key indicators (initial and final values) of the countries with the highest increase in the CAHTI index (Romania, Hungary, Poland) and the highest decrease (Azerbaijan, Armenia, Kazakhstan). Source: World Bank, UNCTAD.
Associations between neurovascular coupling and cerebral small vessel disease: A systematic review and meta-analysis

Purpose: The pathogenesis of cerebral small vessel disease (cSVD) remains elusive despite evidence of an association between white matter hyperintensities (WMH) and endothelial cerebrovascular dysfunction. Neurovascular coupling (NVC) may be a practical alternative measure of endothelial function. We performed a systematic review of reported associations between NVC and cSVD.

Methods: EMBASE and PubMed were searched for studies reporting an association between any STRIVE-defined marker of cSVD and a measure of NVC during functional magnetic resonance imaging, transcranial Doppler, positron emission tomography, near-infrared spectroscopy or single-photon emission computed tomography, from inception to November 3rd, 2022. Where quantitative data were available from studies using consistent tests and analyses, results were combined by inverse-variance weighted random-effects meta-analysis.

Findings: Of 29 studies (19 case-control; 10 cohort), 26 reported decreased NVC with increasing severity of cSVD, of which 18 were individually significant. Of 28 studies reporting associations with increasing WMH, 25 reported reduced NVC. Other markers of cSVD were associated with reduced NVC in eight of nine studies with cerebral microbleeds (six showing a significant effect) and three of five studies with lacunar stroke; no studies reported an association with enlarged perivascular spaces. Specific SVD diseases were particularly associated with reduced NVC, including six out of seven studies in cerebral amyloid angiopathy and all four studies in CADASIL. In limited meta-analyses, the %BOLD occipital change to a visual stimulus was consistently reduced with more severe WMH (seven studies, SMD −1.51, p < 0.01) and with increasing microbleeds (seven studies, SMD −1.31, p < 0.01).

Discussion and Conclusion: In multiple small studies, neurovascular coupling was reduced in patients with increasing severity of all markers of cSVD in sporadic disease, CAA and CADASIL. Cerebrovascular endothelial dysfunction, manifest as impaired NVC, may be a common marker of physiological dysfunction due to small vessel injury that can be easily measured in large studies and clinical practice.

Introduction

Cerebral small vessel disease (cSVD) accounts for 30% of ischaemic stroke, 80% of haemorrhagic stroke and 40% of dementia. 1 Structural features of cSVD on magnetic resonance imaging (MRI) or computed tomography (CT) are well characterized, including white matter hyperintensities (WMH), lacunes of presumed vascular origin (lacunes), cerebral microbleeds (CMB), enlarged perivascular spaces (EPVS) and cerebral atrophy, but the underlying pathophysiology and mechanisms of cognitive dysfunction are unclear. 2 However, both imaging markers of cSVD and clinical outcomes are strongly associated with impaired endothelial function, 3 which is a key target of current trials in cSVD. The LACunar Intervention Trial-2 (LACI-2) demonstrated potentially reduced cognitive decline with two endothelial-stabilizing drugs (cilostazol and isosorbide mononitrate) in cSVD patients, particularly with isosorbide mononitrate. 4,5 Similarly, the Oxford haemodynamic adaptation to reduce pulsatility (OxHARP) trial is testing the effect of sildenafil, a PDE5 inhibitor, on both cerebrovascular pulsatility and reactivity in patients with cSVD. 6
Endothelial function can be assessed through CO2 inhalation during MRI or transcranial ultrasound, but this is poorly tolerated by many patients and technically challenging. It is also uncertain whether this is principally a marker of established disease or a component of the causative pathway leading to worse cSVD, and therefore a target for treatment. To assess this in clinical populations and future trials, a more acceptable marker of endothelial dysfunction in cSVD is required, ideally one that is already available in very large population-based cohorts.

Neurovascular coupling (NVC) measures the change in cerebral blood flow (CBF) due to neural activity.7 It is altered in degenerative conditions such as Alzheimer's disease,8,9 vascular dementia9 and Parkinson's disease.10 The neurovascular unit (NVU) triggers vasodilation in response to neural activity to increase blood flow to meet the metabolic demands of active neurons (NVC).11 Vasodilatation is at least partially dependent upon endothelial factors, and therefore NVC reflects endothelial dysfunction.7 Furthermore, impaired NVC may be associated with a reduced capacity of the cerebrovasculature to respond to haemodynamic challenges, resulting in decreased oxygen and nutrient delivery to the brain, contributing to further neuronal dysfunction and potentially exacerbating cSVD progression.7 As such, NVC has the potential to provide a measure of endothelial dysfunction both as a marker of disease severity and as a possible targetable mechanism for treatment.

NVC can be assessed through non-invasive and well-tolerated methods, including blood-oxygen-level-dependent functional MRI (BOLD-fMRI),12 arterial spin labelling MR perfusion (ASL),13 near-infrared spectroscopy (NIRS),14 Oxygen-15-Water positron emission tomography (PET),15 single-photon emission computerized tomography (SPECT)16 and transcranial Doppler (TCD) ultrasound.17 NVC may therefore also provide an alternative measure of endothelial function in cSVD that is available in large population-based studies and easily applicable during clinical MRI imaging.

To evaluate the potential of neurovascular coupling as a reliable marker of cerebrovascular dysfunction, we conducted a systematic review and meta-analysis of the reported associations between neurovascular coupling and cSVD severity.

Eligibility criteria and search strategy

This systematic review and meta-analysis was registered on PROSPERO (CRD42022382637) and was reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist (Supplemental Table 1).18 The PubMed and EMBASE databases were searched for potential studies from inception to November 3rd, 2022. The search strategy is detailed in Supplemental List 1. No restrictions on species, language or study type were applied in the initial search. Both authors performed the search and screening of studies; disagreements were resolved by discussion. Reference lists were searched for eligible studies.
Data extraction and quality assessment

The following data were extracted from the included studies: first author, year of publication, study design, sample size, age, sex, cSVD markers, cSVD marker quantification, methods of neuronal stimulus, methods and outcomes for measuring NVC, and direction of effects. If data were inadequate or unclear, the corresponding authors of the studies were contacted for relevant information or clarification. If a study measured NVC at different timepoints without providing a single quantification across timepoints, the changes with greater statistical significance were extracted. The Newcastle-Ottawa Quality Assessment Scale (NOS) was used to assess the quality of the included studies.19

Statistical analysis

Studies reporting quantification of the burden of WMH, CMB or lacunes together with NVC measurements were assessed for eligibility for the meta-analysis. If a cSVD marker or an NVC assessment location was reported in only one study, it was not included in the meta-analyses. Furthermore, only studies with consistent tasks and consistent regions of interest could be combined.

Since multiple cSVD markers and NVC in different brain locations or cerebral vessels were evaluated, we grouped the extracted studies by type of cSVD marker (WMH, CMB, lacunes) and by the location where NVC was assessed.

Statistical analysis was carried out in R (version 4.2.2) with the following packages: database management by data.table (version 1.14.8); meta-analysis and plots by meta (version 6.2-0). The minimum number of studies for inclusion in a meta-analysis was three. Pooled effects were calculated as standardized mean differences (SMD) using the inverse-variance (IV) method. Due to high heterogeneity in preliminary analysis, random-effects models were used. Subgroup analysis based on cSVD aetiology was performed.

The inconsistency statistic (I²) was used to assess statistical heterogeneity across included studies, where I² values of 25%, 50% and 75% are considered low, medium and high heterogeneity, respectively.

If more than 10 studies were included in a meta-analysis, funnel plots were used to assess publication bias.20 For meta-analyses that included 10 or more studies, Egger's regression test was calculated. A p-value of <0.05 was interpreted as statistically significant.

Study selection

The search yielded 12,941 results. After removal of duplicates and title/abstract screening, 284 studies were retrieved for full-text review. They were evaluated by two reviewers independently, and the reference lists of the included papers were also screened for eligible studies (Figure 1).

Study characteristics

Twenty-nine studies were included in this systematic review, comprising 19 case-control studies and 10 cohort studies. The characteristics of the included studies are summarized in Table 1 and detailed in Supplemental Table 3. The mean age in the higher-cSVD-burden group, which refers to the group with presence or higher counts/volumes of cSVD markers, was higher at 64.44 years compared to 59.54 years. All included studies assessed cSVD using MRI, with field strengths ranging from 1.5 to 7 Tesla. WMH was the commonest cSVD marker reported, whilst associations between NVC and other markers were less commonly reported, with nine studies reporting associations with CMB, five with lacunes and none with EPVS (Table 2).

The majority of studies (22/29) used BOLD-fMRI to measure changes in BOLD signals in the respective brain regions. Among the two studies combining ASL-fMRI and BOLD-fMRI to detect changes in CBF and BOLD signal, Huneau et al.
compared changes in the primary motor and visual cortices in cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) patients, while Opstal et al. looked at changes in the occipital lobe in hereditary cerebral amyloid angiopathy (CAA) gene carriers. Two studies used NIRS to measure changes in the concentrations of haemoglobin to derive CBF data, with Tak et al. combining BOLD-fMRI with NIRS to look at NVC changes in the primary motor and somatosensory cortices. Five studies used TCD and one used PET (Oxygen-15).

Multiple methods of neuronal stimulus were used to activate NVC. The commonest method was visual stimulation, with 12 studies employing a flickering checkerboard-like pattern (8 or 10 Hz) and one the reading of a magazine. Motor stimuli by hand or finger movements were used in six studies and by ankle movement in one. Cognitive stimulation with an established fMRI paradigm was used in nine studies: the 'N-back' task was performed in three studies to activate working-memory regions; two studies used the Stroop test to excite regions involved in cognitive flexibility and attentional control, whilst other methods to stimulate regions involved in cognitive functions included episodic memory retrieval, verbal memory encoding, sample-matching working memory, Go/No-go and the Digit Symbol Substitution Test. Among these, eight reported correlations between cSVD severity and task performance. Overall, similar task performance was observed in the group with higher cSVD burden compared to healthy controls or the group with lower cSVD burden (Supplemental Table 4). Less commonly used neuronal stimuli focused on brain regions involved in affect: Aizenstein et al.21 elicited affective reactivity through face-matching and shape-matching, and Vasudev et al. through words with positive, neutral or negative affective valence.

Of the 26 of 29 studies reporting an association between increasingly severe cSVD and reduced NVC (p < 0.001), 18 were individually significant. Similarly, of 28 studies reporting associations with increasing WMH, 25 reported reduced NVC (19 showing a significant effect).

Other markers of cSVD were associated with reduced NVC in: eight of nine studies with CMB (six showing significance); and three of five studies with lacunes. However, no studies reported an association with EPVS (Table 2). CAA was the most-studied phenotypic cSVD subgroup, among which six out of seven studies reported reduced NVC with more severe disease on MRI and one study reported no such correlation. Monogenic CADASIL was the most-studied genetic cSVD subgroup, and all four studies in CADASIL reported reduced NVC with increased cSVD severity.

Quality assessment

The NOS scores of the included studies ranged from five to eight, with a median of 7 (Supplemental Table 3). Due to the small number of studies included, especially with a consistent test of NVC in a consistent population, it was not possible to reliably assess publication bias through funnel plots, although it remains possible that there was bias due to unpublished studies demonstrating no association between NVC and cSVD severity.
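Before turning to the individual markers, it may help to make the pooling procedure concrete. The sketch below is a minimal Python implementation of the inverse-variance random-effects approach described in the Methods, using a DerSimonian-Laird estimator for the between-study variance; this estimator is an assumption, since the meta package supports several tau-squared estimators and the review does not state which was used. The effect sizes in the example are hypothetical placeholders, not data from the review.

```python
# Minimal sketch of inverse-variance random-effects pooling of SMDs
# (DerSimonian-Laird tau^2). Inputs are per-study SMDs and their variances.
import numpy as np

def random_effects_pool(smd, var):
    """Pool SMDs with inverse-variance weights and a DL tau^2 estimate."""
    smd, var = np.asarray(smd, float), np.asarray(var, float)
    w = 1.0 / var                               # fixed-effect weights
    mu_fe = np.sum(w * smd) / np.sum(w)         # fixed-effect pooled SMD
    q = np.sum(w * (smd - mu_fe) ** 2)          # Cochran's Q
    df = len(smd) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_re = 1.0 / (var + tau2)                   # random-effects weights
    mu_re = np.sum(w_re * smd) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0  # inconsistency I^2 (%)
    return mu_re, (mu_re - 1.96 * se, mu_re + 1.96 * se), i2

# Hypothetical effect sizes (not values from the included studies):
pooled, ci, i2 = random_effects_pool([-1.2, -0.8, -2.0], [0.10, 0.15, 0.20])
print(f"SMD {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {i2:.0f}%")
```

Each study enters with weight 1/(variance + tau²), so in the presence of heterogeneity smaller, noisier studies are weighted relatively more than in a fixed-effect analysis.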
Associations between NVC and markers of cSVD

White matter hyperintensity. NVC was reported to be reduced in patients with WMH versus controls (17 of 18 studies, p < 0.001) and in patients with more severe versus less severe WMH (9 of 11 studies). This association was reported across different cognitive tasks (8 of 10), motor tasks (7 of 8) and visual tasks (12 of 12), with responses in the corresponding brain regions. Fourteen of 17 studies reported this association in sporadic cSVD.

Seven studies reported a consistent outcome measure of change in the percentage of BOLD amplitude in the primary visual cortex with WMH severity (Figure 2), during a flashing checkerboard task, and could therefore be meta-analysed. These studies found that NVC was more impaired in groups with higher WMH burden, with a significant effect on average in a random-effects meta-analysis (SMD = −1.51; 95% CI = −2.27 to −0.76, p < 0.01). In subgroup analysis, both CADASIL and CAA patients showed the same association. However, the heterogeneity between studies was high (I² = 86%), likely due to a greater effect in patients with CAA. No studies reported a quantitative result in patients with sporadic cSVD.

Overall, six of seven studies reported a reduced BOLD amplitude response in the primary motor cortex during a motor task. Of these, three reported consistent associations between change in percentage BOLD amplitude in the primary motor cortex and WMH severity during a motor task, and could be meta-analysed. Overall, impaired NVC was associated with higher WMH burden (SMD = −1.34; 95% CI = −3.07 to 0.39, p = 0.01), although there was significant between-study heterogeneity (I² = 76%) (Figure 3).

Severity of WMH and NVC assessed by posterior cerebral artery (PCA) TCD during a visual stimulus were reported in four studies. All four showed that more WMH were associated with more impaired NVC, but the results were not statistically significant (SMD = −1.15; 95% CI = −1.49 to −0.81, p = 0.87), with low heterogeneity between studies (I² = 0%) (Figure 4).

Cerebral microbleeds. Of nine studies on CMB, seven reported consistent quantitative associations between the severity of CMB and NVC measured by changes in the percentage of BOLD amplitude in the primary visual cortex. Overall, higher CMB burden was associated with more impaired NVC (SMD = −1.31; 95% CI = −2.24 to −0.38, p < 0.01), but the heterogeneity was high (I² = 92%). In subgroup analysis, CAA and CADASIL both showed the same direction of effects as the overall trend, although the single study reporting an association in sporadic cSVD showed a trend in the opposite direction (Figure 5).

Lacunes. Two case-control studies reported on the severity of lacunes and NVC measured by changes in the percentage of BOLD amplitude in the primary visual cortex. Both recruited CADASIL patients and healthy controls, and both showed that higher lacune burden was associated with worse NVC.
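For completeness, the per-study effect sizes that feed such pooled estimates can be derived from group-level summaries. The short sketch below computes a bias-corrected standardized mean difference (Hedges' g) from the mean %BOLD amplitude change in a higher-WMH-burden group versus controls; the formula is standard, but the means, SDs and sample sizes shown are hypothetical, not values from any included study.

```python
# Hedges' g from two independent groups' summary statistics (illustrative only).
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference for two independent groups."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d with pooled SD
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample bias correction
    return d * j

# Hypothetical %BOLD changes: higher-WMH group vs controls.
print(hedges_g(1.1, 0.4, 20, 1.9, 0.5, 22))  # negative g: reduced NVC with WMH
```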
Discussion

Across 29 studies, there was a highly consistent association between markers of cSVD and reduced NVC, including in meta-analyses of 13 studies with reasonably consistent methods of measurement and analysis. This association was most commonly reported in patients with increased WMH, but was consistent for all markers of sporadic cSVD where data were available, and was largely consistent for both sporadic disease and specific cSVD subtypes (CADASIL, CAA), implying a common association with endothelial dysfunction. Furthermore, where meta-analysis was possible, there was a consistently reduced response to visual stimuli with all markers of cSVD, despite preservation of vision in these populations, supporting a probable vascular rather than neuronal mechanism for reduced NVC.

Our findings are consistent with other studies investigating the relationship between cSVD and endothelial dysfunction with different methodologies. Systemic endothelial dysfunction is found in cSVD patients,11 particularly in studies demonstrating reduced flow-mediated vasodilation of the brachial artery with increasing cSVD severity.23,24 Similarly, in the brain, the haemodynamic response to a visual stimulus on BOLD was prolonged in hereditary and sporadic CAA versus healthy controls, and was correlated with increased cerebral atrophy in CAA patients.25,26 A similarly slowed response to neuronal stimulation was found in probable CAA patients compared to healthy controls, with increasing CMB counts.27 Quantitative cerebrovascular reactivity using inhaled CO2 is the commonest method of assessing cerebrovascular endothelial dysfunction,28 with reduced CBF responses in patients with cSVD.29 Lacunes and WMH are also associated with increased blood-brain barrier leakage, and with broader blood-based biomarkers of endothelial dysfunction.30 Furthermore, decreased BOLD signal responses in the occipital region upon visual stimuli were associated with more severe WMH, CMB and lacunes, in both sporadic and genetic cSVD. Despite the severity of their conditions, participants remained visually intact, suggesting not only common endothelial involvement in all forms of the disease but also a likely common vascular, rather than localized neuronal, cause for the reduced NVC. These direct measures of endothelial dysfunction in cSVD are consistent with the direction of impairment in NVC in this review, supporting a common underlying vascular mechanism.

The specificity of impaired NVC to cSVD is currently unclear and warrants further research, as it may also be present in other conditions. However, if impaired NVC is an early cSVD marker, preceding other structural markers (WMH, lacunes and microbleeds), it may enable timely intervention to prevent disease progression and provide a short-term measure of the effect of future treatments in clinical trials. NVC also reveals real-time, functional consequences of vascular damage, aiding a comprehensive understanding of cSVD pathology. If NVC is established as a reliable measure of endothelial dysfunction in cSVD, it would offer a method for assessing the clinical determinants of endothelial dysfunction and its prognostic value in large populations, and for investigating the underlying pathophysiological process, given our more detailed cellular understanding of the physiological basis of NVC.31
This study has several limitations. First, multiple sources of heterogeneity, such as varying MRI field strengths and NVC acquisition methods across studies, limited the ability to quantitatively compare and standardize results. In particular, variability in neuronal stimulus frequency with limited signal-to-noise ratio may lead to potential underestimation of associations, but this would result in a conservative underestimation of the overall effect size. Second, the small, selective study populations limit the generalizability of the results, reflected in the heterogeneity of the cSVD manifestations. However, subgroup analysis showed a stronger association between higher cSVD severity and impaired NVC in more defined populations. Third, possible publication bias from unpublished data may exist. Most included studies reported decreased NVC with increased cSVD severity, while only four reported the opposite direction of effects, although this may simply reflect the reliability of the pathogenic mechanism. Fourth, the lack of numerical data in the included studies prevented a comprehensive meta-analysis. Finally, more studies are needed to compare NVC impairment in sporadic cSVD with specific cSVD subtypes like CAA or CADASIL, which often present with more severe phenotypes.

Future research with larger populations is needed to establish a stronger association between cSVD and impaired NVC, especially in sporadic cases, and to determine the direction of causality. Large datasets like UK Biobank, with over 500,000 participants and 50,000 functional brain images, may provide adequate data. Further studies should stratify cSVD subgroups and adopt consistent methods for assessing NVC across different brain regions and neuronal stimuli. The potential advantages of NVC measurements, such as minimal invasiveness, easy setup, real-time recording and independence from patient compliance (e.g. a visual flashing checkerboard), imply great potential for research use and clinical practice, if predictive of disease progression or treatment response at an individual level.

In conclusion, this systematic review and meta-analysis demonstrates that NVC is more impaired in patients with more severe cSVD markers, including WMH, CMB and lacunes. WMH, the most-studied marker, is associated with decreased BOLD changes in the visual and motor cortices, as well as a poorer cerebral flow velocity response in the PCA. These findings suggest a central role of endothelial dysfunction in cSVD. Impaired NVC could serve as a potential biomarker for treatment trials and a practical method to assess physiological dysfunction in cSVD if it predicts disease progression or treatment response.

Figure 1. PRISMA flowchart of literature search and selection.
Figure 2. Comparisons of BOLD signal changes in amplitude (%) from baseline in the primary visual cortex between higher and lower WMH burden groups.
Figure 3. Comparisons of BOLD signal changes in amplitude (%) from baseline in the primary motor cortex between higher and lower WMH burden groups.
Figure 4. Comparisons of blood flow velocity changes (%) measured by TCD in the PCA (posterior circulation) between higher and lower WMH burden groups.
Figure 5. Comparisons of BOLD signal changes in amplitude (%) from baseline in the primary visual cortex between higher and lower CMB burden groups.
Table 1. Characteristics of the included studies.
Overview on the Development of Aquaculture and Aquafeed Production in Korea

Sung-Sam Kim and Jeong-Dae Kim. 2019. Overview on the Development of Aquaculture and Aquafeed Production in Korea. Aquacultura Indonesiana, 20 (1): 1-7. According to KOSIS (2018), total landings of capture and culture fisheries in Korea increased from 1,073,000 metric tons (MT) in 1971 to 3,743,000 MT in 2017, mainly due to the development of marine aquaculture practices. During the last four decades, marine aquaculture production in Korea showed an around 5-fold increase from 491,000 MT in 1977 to 2,310,000 MT in 2017, recording a value of 2.9 billion USD. Last year, the main production was derived from seaweed (1,755,630 MT), while aquatic animal production came from shellfish (428,160 MT), fish (86,400 MT), crustaceans (mainly shrimp, 5,100 MT) and others (34,530 MT). Either trash fish or moist pellet based on raw fish is still being fed to marine culture fish, which is the main obstacle to developing the farming. The present situation and development direction are suggested for mariculture development in Korea.

History of Aquaculture

South Korea is mostly surrounded by sea, with a 2,413 km coastline along three coasts (east, west and south), and its land mass is approximately 100,032 km². As capture production, mainly from inshore and offshore catches, has continuously decreased, great attention has been paid to aquaculture, which is now a very important sector in terms of food security, revenue and employment in the country (Yoon, 2008). Aquaculture is mainly divided into two categories, marine and inland culture. Mariculture production is composed of seaweed, molluscs, finfish, crustaceans and other animals, while inland aquaculture is mostly based on finfish production. According to KOSIS (2018), the total value of fishery production hit a record of 8,614 million dollars in 2017, of which the main portion came from the inshore catch, followed by mariculture and the offshore catch, while inland fishery contributed the least (Table 1).

Aquaculture Species and Production

In 2017, total fishery landings reached 3,743,000 MT, of which 61.7% (2,310,000 MT) was provided by mariculture, corresponding to 34.3% of total value. Inland fishery production (36,000 MT), basically originating from finfish culture, represented only 0.9% of total fishery landings, although it amounted to 5.3% in terms of value (Figs. 1, 2 and 3).

Seaweed aquaculture began in the 1960s, and sea mustard (Undaria pinnatifida), kelp (Laminaria spp.) and laver (Porphyra tenera) are the main cultured species in Korea. Seaweed production of 1,756,000 MT ranked first in total mariculture production in 2017. The main species of molluscs include oyster (Crassostrea gigas), mussel (Mytilus edulis) and ark shells (Scapharca broughtonii), which have been cultured since the 1970s. As the second most important group in mariculture, 428,156 MT of molluscs were produced in 2017. Abalone (Haliotis discus hannai) is now the most important species in terms of production value (Fig.
4). Finfish, the third most important production group, is dominated by olive flounder (Paralichthys olivaceus) and rockfish (Sebastes schlegeli), for which artificial seed production techniques were developed in 1990 and 1992, respectively. Since then, substantial culture practices have been initiated, and these two species now make up 80% of total mariculture finfish production. Even though marine finfish production of 86,400 MT is fairly low compared to those of seaweed and molluscs, it in fact represents the highest production value among the mariculture groups (Fig. 5). Whiteleg shrimp (Penaeus vannamei) has been the sole cultured shrimp species since 2004. Although its production of 5,100 MT is negligible, it ranks seventh in terms of production value (Fig. 4). In addition, sea squirts (Halocynthia roretzi and Styela clava) are also cultured and categorized as "others" among the current aquaculture species in Korea.

Finfish production in freshwater accounts for more than 80% of inland fishery production. The main species include Japanese eel (Anguilla japonica), catfish (Silurus asotus) and rainbow trout (Oncorhynchus mykiss). It should be noted that fish farming in Korea developed with cage culture of common carp (Cyprinus carpio) in artificial lakes from 1984, which, however, totally disappeared by 2000 with increased public concern about water pollution. In 2017, the main species production was 13,000 MT, 6,300 MT and 3,700 MT for eel, catfish and rainbow trout, respectively (KOSIS, 2018).

Aquafeed Production

Aquafeed development was initially undertaken for freshwater species including carp, rainbow trout and eel, along with advanced fish farming practices. The highest feed production for freshwater species (94,846 MT) was achieved in 1995, of which 61% (58,069 MT) was fed to carp. With increasing public concern that carp farming caused water pollution, most cage farms were removed from the artificial lakes, and freshwater feed production decreased significantly, to 47,948 MT in 1999, with carp diets accounting for only 6,907 MT. Feed production for mariculture fish and shrimp increased from 28,123 MT in 1995 to 52,948 MT in 1999. Since 1999, feed production for mariculture has exceeded that for freshwater fish culture. On the other hand, an unexpected increase in feed production for eel since 2014 was mainly due to a change in harvesting size from 4-5 fish to 1-2 fish/kg, reaching 28,899 MT in 2017. In 2018, however, it is anticipated that the production will decrease to 14,500 MT due to a recent sharp decrease in the elver catch. As a whole, total aquafeed production amounted to 151,150 MT in 2017, divided into 66.9% and 33.1% for marine and freshwater species, respectively (Table 2). On the other hand, aquafeed accounted for only 0.78% of total animal feed production (18,910,000 MT) in 2017.

All diets for cultured fish and shrimp are manufactured as extruded pellets (EP) using an extruder, except eel feed, which is made as a powder type mixed with pregelatinized starch. As shown in Table 3, total feed consumption by marine finfish species amounted to 582,776 MT in 2017, of which 85% was moist pellet (MP) based on raw fish. It should be noted that the MP was used to produce the 86,400 MT of marine finfish and corresponded to half of the inshore catch in 2017 (Fig.
1). On the other hand, 80% of the MP was used to produce two species, olive flounder and rockfish, suggesting that the mariculture of those species is still far from sustainable farming practice. In fact, frequent disease outbreaks, as well as water pollution by wasted feed, are the main problems raised by feeding MP (Kim, 2018). The use of MP also threatens to deplete marine fishery stocks. This phenomenon is, for example, much more severe in China: according to a recent report (GEA, 2017), approximately 4.95 million MT of trash fish was used in direct-feeding aquaculture, of which 66% (3.24 million MT) was used for mariculture finfish production in 2014. Such trash fish was mainly derived from the marine capture fishery (GEA, 2017).

Many fish farmers hold the preconception that a hard type of EP could cause severe digestive problems such as ascites after ingestion. Even though a number of experimental results have demonstrated that EP feeding is more advantageous than MP in terms of production cost (MOF, 2005; NIFS, 2009) as well as water pollution (Kim and Lee, 2000; Kim and Shin, 2006; Kim, 2009; Kim et al., 2011), the use of EP has not increased significantly to date (Tables 2 and 3). When the reported use of MP is converted to its EP equivalent (Table 4), it represents 152,400 MT. When dead fish and feeding wastage are taken into account, however, the actual amount of MP ingested would be less than 200,000 MT, corresponding to 61,000 MT of EP (a worked example of this conversion is sketched after the Conclusion). As MP feeding raises public concerns about water pollution, the introduction of pathogenic bacteria and the depletion of marine fish stocks, legal measures are now being taken to prohibit the use of MP in Korea. Recently, a national project has been conducted to develop EP that can replace MP for the whole growing period of fish. The research is devoted to decreasing dietary fish meal through alternative protein sources such as land-animal by-product meals and plant protein concentrates, and to increasing palatability and digestibility as well as production income (Kim, 2017).

Obstacles in Aquaculture Development

As mentioned above, marine finfish culture ranks first in terms of total production value of mariculture. However, finfish culture involves many problems that remain to be solved. First of all, MP should be switched to EP as soon as possible: the use of trash fish in fish farms accelerates disease outbreaks and water pollution and raises food-safety concerns. A continuous drop in farmgate prices is pushing farmers toward large-scale consolidation. The development of new candidate species is urgently needed. Recently, the growing period of flounder was significantly shortened through selective breeding technologies, while rockfish still requires more than 2 years to reach a harvest size of 500 g. Antibiotic use should also be strictly controlled for product safety, although antibiotics are still used at most farms in Korea. Although the standard stocking density should be strictly kept, many farmers do not follow the standard because they expect that 50% of the stocked seed will die.

Conclusion

As one of the fastest growing industry sectors, aquaculture has come to occupy a significant place in Korean fisheries, and marine finfish ranks first among mariculture species groups in terms of production value. However, sustainable development of the fish farming industry cannot be achieved without prohibiting the use of MP. When MP is totally replaced with EP, the aquafeed industry could also open a new market.
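As flagged above, the MP-to-EP conversion is a simple tonnage rescaling. The sketch below, in Python, illustrates it with a conversion factor inferred from the figures quoted in the text (200,000 MT of MP corresponding to roughly 61,000 MT of EP, i.e. about 3.3 MT of MP per MT of EP); the paper does not state its factor explicitly, so MP_PER_EP should be treated as a hypothetical placeholder.

```python
# Illustrative MP-to-EP conversion; the factor is inferred from the figures
# quoted in the text (200,000 MT MP ~ 61,000 MT EP), not stated by the paper.
MP_PER_EP = 3.3  # assumed tonnes of moist pellet per tonne of extruded pellet

def ep_equivalent(mp_tonnes: float, mp_per_ep: float = MP_PER_EP) -> float:
    """Convert a moist-pellet (MP) tonnage to its extruded-pellet (EP) equivalent."""
    return mp_tonnes / mp_per_ep

print(f"{ep_equivalent(200_000):,.0f} MT EP")  # ~60,600 MT, close to the quoted 61,000 MT
```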
Figure 4. Top 7 species in terms of production value (million USD) and their quantity (MT) in 2017 (KOSIS, 2018).
Table 3. Total feed consumption (MT) by marine fish species.
Crowning achievement: a case of dental aspiration

Aspiration of foreign bodies during dental procedures is a rare but potentially serious complication. We present a case of a 75-year-old man who aspirated a dental crown, requiring flexible bronchoscopic retrieval. We discuss the risk factors for aspiration, the radiographic features of diagnosis, and the techniques for management and retrieval.

A 75-year-old man aspirated a gold dental crown when he hiccupped during a dental procedure. He was evaluated in the emergency department immediately afterward, and focal right-sided wheezing was auscultated. A chest radiograph demonstrated a radio-opaque mass near the inferior right hilum (Fig. 1A and B). Flexible bronchoscopy was performed within hours, revealing a gold crown lodged in the bronchus intermedius (Fig. 1C), which was retrieved using a basket net retrieval device (Fig. 1D and E). The patient was discharged with a temporary crown, which was ultimately replaced by the recovered gold crown.

Discussion

Aspiration of foreign bodies during dental procedures is a rare complication, occurring much less often than accidental ingestion of dental foreign bodies [1,2]. The largest review of dental aspirations, a retrospective analysis of insurance records of 24,651 French dentists over 11 years, identified only 44 cases of foreign body aspiration after a dental procedure [1]. Aspiration occurs more frequently in patients with neurocognitive disability and in those at the extremes of age. The most frequently aspirated dental objects include teeth, fillings, crowns, bridges, and dental tools [2]. Prosthesis manipulation confers a higher aspiration risk, possibly because objects become slippery after cement glue application [1]. Outside the dental office, dental aspiration can also occur in the context of seizures, trauma or, rarely, after endotracheal intubation [3].

There are several preventative strategies to minimize dental aspiration, including the routine use of a rubber dam during dental work and the tying of suture or floss to the prosthesis or tools during placement to facilitate recovery. Despite this, the rate of adherence to these guidelines is reported to be less than 20% [4,5]. If aspiration does occur, patients can be instructed to cough forcefully to expel the object; however, the vast majority of dental aspirations require medical evaluation and intervention [4,6].

Detection of dental aspiration may be prompt, as in this case; however, significant delays in diagnosis have been reported and may be associated with greater morbidity [7]. It is prudent to assume that any object lost during dental manipulation has been aspirated and for the dentist to accompany the patient to a medical facility for prompt radiographic evaluation. Even asymptomatic patients should be evaluated, as an aspirated object may shift, causing airway obstruction. Radiographic findings include direct visualization of a radio-opaque foreign body or identification of its effects, such as atelectasis, lobar collapse, or distal hyperinflation. The most common site of tracheobronchial foreign body aspiration in adults is the bronchus intermedius because of its larger diameter and straighter course, although other airways may be affected depending on body position at the time of aspiration [8]. Computed tomography is often unnecessary, as many aspirated dental objects are radio-opaque and can be identified on a standard chest radiograph.
Importantly, however, the absence of a foreign body on radiograph does not reliably exclude aspiration, and further work-up may be necessary [9]. Pulmonary consultation should be sought, as bronchoscopy can confirm aspiration, and prompt retrieval may prevent complications such as atelectasis, postobstructive pneumonia, and hemoptysis [10]. Other uncommon complications of dental aspiration include airway obstruction, potentially leading to hypoxemia, and perforation, leading to potentially fatal infectious (eg, mediastinitis) or bleeding complications.

Flexible bronchoscopy can be performed rapidly and safely under local analgesia or moderate sedation and is typically the first intervention. Several bronchoscopic techniques exist for retrieval via flexible bronchoscopy, including forceps, baskets, cages, and Fogarty balloons. In the case of occult aspiration with a long latency before detection, granulation tissue can complicate object removal and potentially lead to persistent obstruction. If flexible bronchoscopy is unsuccessful, rigid bronchoscopy under general anesthesia may be required. Although the vast majority of aspirated foreign bodies can be retrieved bronchoscopically [11], occasionally a surgical approach may be necessary.

The prognosis for dental aspiration is typically excellent, as it was in this case. Morbidity, although infrequent, is attributed primarily to delays in diagnosis or to rare complications such as bleeding or perforation [7]. Occasionally, unrecognized dental aspiration may be misdiagnosed as asthma, pneumonia, bronchitis, or even cancer [7,12,13]. Dental aspiration, especially occurring during dental procedures, is a rare but important event requiring prompt recognition, diagnosis, and treatment. Recognition of the aspirated object on chest imaging and removal of the foreign body via bronchoscopy are essential to prevent long-term sequelae.

Fig. 1. Following a dental procedure, a radio-opaque foreign body was seen on both posterior-anterior (A) and lateral (B) chest radiographs (arrows). Bronchoscopy confirmed the presence of a foreign body (C) in the bronchus intermedius (arrow), which was successfully retrieved using a basket net retrieval device (D). The recovered dental crown (E) was later reimplanted.
Internal Trapping of an Acutely Ruptured Dissecting Aneurysm of a Dominant Vertebral Artery Following Balloon Test Occlusion: A Case Report

Abstract

Objective: To report a case of an acutely ruptured vertebral artery dissecting aneurysm (VADA) with a hypoplastic contralateral vertebral artery (VA) successfully treated with internal trapping following estimation of the collateral flow from the anterior circulation.

Case Presentation: A 46-year-old woman was diagnosed with subarachnoid hemorrhage and acute hydrocephalus. Ventriculostomy was performed under general anesthesia. CTA revealed a left VADA distal to the origin of the left posterior inferior cerebellar artery (PICA). The right VA was hypoplastic, and the right posterior communicating artery (Pcom) was fetal type. We performed balloon test occlusion (BTO) of the VA proximal to the origin of the left PICA and estimated sufficient collateral blood flow via the right Pcom and basilar artery (BA) to the anterior spinal artery (ASA) and the left PICA. Internal trapping of the left VADA was then performed. Angiograms after internal trapping revealed collateral flow from the right Pcom to the BA, and the hypoplastic right VA perfused the proximal BA and ASA. The patient recovered without any neurological deficits following antiplatelet therapy and vasospasm treatment, and was followed up for 6 years without any neurological events.

Conclusion: When BTO indicates sufficient collateral flow, internal trapping could be a useful treatment for acutely ruptured VADAs on the dominant side, given a complete understanding of the angioarchitecture and the risk of vasospasm due to subarachnoid hemorrhage.

Introduction

Vertebral artery dissecting aneurysms (VADAs) account for 4.5% of autopsy cases of spontaneous subarachnoid hemorrhage (SAH).1) The clinical course of ruptured VADAs is characterized by frequent early rebleeding and a highly fatal outcome.2) Treatment strategies for VADAs depend on the location of the aneurysm, the origin of the posterior inferior cerebellar artery (PICA), and the dominance of the contralateral vertebral artery (VA).3,4) Internal trapping is considered the first treatment choice for acutely ruptured VADAs; however, internal trapping with bypass surgery, or stent-assisted coiling (SAC) preserving the parent VA, is selected for ruptured VADAs of the dominant VA.3-6) The contralateral VA is the main consideration in treatment strategies for acutely ruptured VADAs.3) Collateral blood flow via the posterior communicating artery (Pcom) has previously been used in treating bilateral VADAs.7) Herein, we report a case of an acutely ruptured VADA with an apparently hypoplastic contralateral VA, treated with internal trapping of the VADA following balloon test occlusion (BTO) of the proximal ipsilateral VA, confirming collateral flow via the Pcom and basilar artery (BA).

Case Presentation

A 46-year-old woman who had experienced headache 3 days prior suddenly fell into a coma and was transferred to our hospital. On admission, she was drowsy, and CT revealed SAH and acute hydrocephalus. The distribution of cisternal clots was predominant in the posterior cranial cisterns (Fig. 1A). She had no history of disease except untreated hypertension, and her routine preoperative evaluation results were within normal ranges. We performed emergent ventriculostomy under general anesthesia and continued deep sedation. CT on the day after ventriculostomy indicated a decrease in the cisternal clot (Fig. 1B). CTA (Fig.
1C and 1D) on admission revealed a left VADA, showing a pearl-and-string sign, distal to the origin of the left PICA. The V4 segment of the right VA was apparently hypoplastic, and the right Pcom was fetal type. Subsequently, we performed cerebral angiography and BTO to confirm collateral flow. Left VA angiograms (Fig. 2A-2D) showed the left VADA between the origins of the left PICA and the anterior spinal artery (ASA). The right VA angiogram showed a hypoplastic V4 segment and anterograde laminar flow in the BA (Fig. 2E). Systemic heparinization was initiated until an adequate activated clotting time (ACT) was reached, and BTO of the left VA was performed (Fig. 2F-2I). We estimated the collateral blood flow via the right Pcom as sufficient to perfuse the BA, ASA, and the branches of these arteries. Therefore, we did not perform angiograms of the right VA and left internal carotid artery following BTO of the left VA. After balloon deflation, systemic heparin was reversed using protamine sulfate. The occlusion time of the left V2 segment was 1 min and 20 s. Figure 2J shows the collateral flow patterns following BTO.

After obtaining written informed consent from the legal representatives, we administered aspirin 200 mg, clopidogrel 150 mg, and ozagrel sodium 80 mg. We then performed internal trapping of the VADA, including the short proximal portion of the left VA (Fig. 3A and 3B), using Target coils (total length of 84 cm; Stryker, Fremont, CA, USA) under general anesthesia and systemic heparinization (ACT from 117 s to 254 s) on the day after BTO. Following internal trapping of the VADA, a right internal carotid angiogram revealed slow retrograde BA flow (Fig. 3C-3E). A left internal carotid angiogram revealed the left P1 segment but not the BA (not shown), and a right VA angiogram revealed the proximal BA and ASA (Fig. 3F-3H). Figure 3I shows the collateral flow patterns following internal trapping of the aneurysm.

Postoperatively, we administered intravenous heparin 15,000 U/day and ozagrel sodium 160 mg/day for 2 days, oral clopidogrel 75 mg/day for 20 days, and aspirin 100 mg/day for more than 6 years. For the treatment of vasospasm, fasudil hydrochloride hydrate 90 mg/day for 2 weeks, nicardipine 3-6 mg/hr for 3 weeks, dobutamine for 11 days, and low-molecular-weight dextran for 14 days were administered. Although diffusion-weighted MRI on the day after internal trapping indicated some ischemic spots in the cerebellar hemisphere and time-of-flight MRA did not show the BA, no ischemic lesions of the brainstem were detected (Fig. 4A and 4B). The MRA findings of the BA were unchanged up to 5 years later (Fig. 4C). Single-photon emission CT using 99mTc-ethyl cysteinate dimer performed 3 days, 24 days, and 6 years later (Fig. 4D-4F) indicated normal cerebral blood flow (CBF), including in the brainstem. The patient recovered without any neurological deficits and was followed up for 6 years without any neurological events. CTA 6 years later (Fig. 4G and 4H) indicated a narrowed BA and a slightly dilated right Pcom and P1 segment.

Discussion

In this case report, an acutely ruptured dissecting aneurysm of a dominant VA with an apparently hypoplastic contralateral VA was successfully treated with internal trapping of the aneurysm following BTO of the proximal dominant VA. The postoperative administration of anticoagulants and antiplatelets and the treatment of vasospasm resulted in no neurological deficits. Furthermore, long-term follow-up (more than 6 years) revealed complete ischemic tolerance of the internal trapping of the dominant VADA.
To the best of our knowledge, this is the first report of an acutely ruptured VADA treated by internal trapping after confirmation of sufficient collateral flow from the anterior circulation, and followed up for many years.

Reported endovascular treatment strategies for ruptured VADAs comprise deconstructive techniques, e.g., internal trapping with or without bypass surgery, and reconstructive techniques, e.g., SAC. The ruptured VADA in this case was located on the V4 segment of the dominant side, between the origins of the PICA and ASA. Therefore, reconstructive treatment would normally be selected, barring demonstrated ischemic tolerance.3,4) Although SAC is a treatment that preserves the flow of the parent artery, it is technically more demanding than internal trapping, especially in the period of acute rupture. In our case, the diameter of the string portion of the VADA was 2.1 mm. The risk of thromboembolic complications of SAC is the main reason for the off-label use of SAC for acutely ruptured aneurysms, and a small parent-artery diameter is not suitable for stenting because of the risk of in-stent thrombosis.8) Recently, flow diverter treatment of ruptured VADAs has been reported.9-11) Although the flow diverter, like SAC, is an off-label treatment for acutely ruptured aneurysms, it could become an alternative treatment for acutely ruptured VADAs in the future.10,11) Sönmez et al.12) reported a meta-analysis of long-term outcomes comparing internal trapping and SAC of VADAs. The long-term complete occlusion rate of deconstructive techniques (88%) was significantly higher than that of reconstructive techniques (81%). Furthermore, SAC has been reported to be inferior to internal trapping in preventing aneurysmal rebleeding. Madaelil et al.4) reviewed case series of ruptured VADAs from the literature with a comparison of treatment types, internal trapping (197 cases), SAC (31 cases), and proximal occlusion (26 cases), and reported the frequency of recurrent hemorrhage as 3.1%, 6.4%, and 19%, respectively.

Preoperative BTO of the VA has been reported for treating unruptured giant aneurysms of the BA.7,13,14) A complete BTO protocol is reported to combine BTO with induced hypotension, CBF measurement, and neurophysiologic monitoring.14,15) Because the patient was in an emergent state and under deep sedation, we evaluated the BTO mainly with angiographic findings. However, combining angiography with neurophysiologic monitoring, such as the auditory brainstem response, motor evoked potentials, or somatosensory evoked potentials, could help estimate ischemic tolerance in more detail.

In this case, because the VADA was located distal to the origin of the left PICA, BTO was performed proximal to the PICA origin. Therefore, the demanded flow from the Pcom decreased following internal trapping of the VADA, and the right hypoplastic VA started to perfuse the ASA. The mechanism of this diversion of the roles of collateral flow is not fully understood. We judged the collateral flow via the right Pcom and the BA to be sufficient; therefore, we did not perform angiograms of the right VA and the left internal carotid artery following BTO. To estimate the collateral flows completely, these angiograms should be performed following BTO. The precise balloon location for estimating the collateral flows following aneurysmal internal trapping is considered to be the left V4 segment between the proximal end of the aneurysm and the origin of the left PICA.
However, we hesitated to inflate a micro balloon to occlude this segment close to the dissecting aneurysm because of the risk of aneurysmal rebleeding. Because the diameters of the bilateral anterior inferior cerebellar arteries and the perforators of the BA were apparently smaller than the diameters of the bilateral PICAs and superior cerebellar arteries (SCAs), the blood flow perfusion of the posterior fossa depended mainly on the bilateral PICAs and SCAs. This angioarchitecture of the blood flow could be essential for estimating ischemic tolerance following internal trapping of VADAs.

In performing internal trapping of a dominant VA to treat an acutely ruptured VADA, vasospasm following SAH must be considered. In this case, ventriculostomy on admission rapidly drained the cisternal clot; other medical treatments using fasudil and nicardipine are also considered effective.16,17)

An important ischemic complication of internal trapping of VADAs is medullary infarction. In this case, the aneurysmal morphology enabled tight internal trapping inside the aneurysmal pearl portion and the proximal short VA segment. The origins of the perforators of the V4 segment are reported to lie mainly within 14 mm below the vertebrobasilar junction.18) Internal trapping of a short segment of the VA could be a strategy to prevent medullary infarction, because internal trapping of a long segment has been reported to be a risk factor for medullary infarction.19,20) To avoid medullary infarction due to direct occlusion of the perforating arteries of the VA, proximal occlusion could be one of the treatment options. However, according to a literature review, the frequencies of symptomatic ischemic complications and favorable outcomes for internal trapping vs. proximal occlusion were reported as 5.3% vs. 17% and 75% vs. 50%, respectively, indicating the poorer results of proximal occlusion.4)

In this case, postoperative diffusion-weighted MRI indicated ischemic lesions of the left cerebellar hemisphere. We performed both procedures, the preoperative BTO and the internal trapping, under systemic heparinization. Dual antiplatelet loading was also started before internal trapping. The anticoagulant therapy was continued for 2 days and antiplatelet therapy for years, to prevent occlusion of the branches and perforators of the BA and the VA following internal trapping and flow reduction. However, the asymptomatic cerebellar ischemic lesions could not be avoided.

Conclusion

When BTO indicates sufficient collateral flow, internal trapping could be a useful treatment for acutely ruptured VADAs on the dominant side with a hypoplastic contralateral VA, given a complete understanding of the angioarchitecture of the anterior and posterior circulation and the risk of vasospasm due to SAH.
Bright solitary waves in a Bose-Einstein condensate and their interactions

We examine the dynamics of two bright solitary waves with a negative nonlinear term. The observed repulsion between two solitary waves, when these are in an antisymmetric combination, is attributed to conservation laws. Slight breaking of parity, in combination with weak relaxation of energy, leads the two solitary waves to merge. The effective repulsion between solitary waves requires certain nearly ideal conditions and is thus fragile.

I. INTRODUCTION

One of the many interesting features of Bose-Einstein condensed atoms is that they can support solitary waves, in particular when they are confined in elongated traps. Under typical conditions, these gases are very dilute and are described by the familiar Gross-Pitaevskii equation, a nonlinear Schrödinger equation with an additional term to describe the external trapping potential. It is well known that the nonlinear Schrödinger equation (with no external potential) supports solitonic solutions through the interplay between the nonlinear term and dispersion. In the presence of an external trapping potential, the Gross-Pitaevskii equation becomes nonintegrable. In elongated quasi-one-dimensional traps, it is reasonable to approximate the three-dimensional solution of the Gross-Pitaevskii equation by separating longitudinal and transverse degrees of freedom [1]. The resulting effective one-dimensional nonlinear equation has a nonlinear term that is not necessarily quadratic [1,2]. Still, such nonlinear equations support solitary-wave solutions, which must be found numerically.

Solitary waves have been created and observed in trapped gases of atoms [3,4,5,6]. In the initial experiments [3,4] the effective interaction between the atoms was repulsive. In this case, the solitary waves are localized depressions in the density, which are known as "grey" solitary waves. These waves move with a velocity less than the speed of sound. When the minimum of the density (at the center of the wave) becomes zero, they do not move at all and thus become "dark". More recently, the two experiments of Refs. [5,6] considered the case of an effective attraction between the atoms and observed "bright" solitary waves, i.e., blobs of atoms which preserve their shape and distinct identity. Strecker et al. [5] created an initial state of many separate solitary waves. While these independent waves were seen to oscillate in the weak harmonic potential in the longitudinal direction, they did not merge to form one solitary wave. In other words, they behaved as if the effective interaction between two of these waves were repulsive.

Numerous theoretical studies have been motivated by the experiments of Refs. [5,6], see e.g., Refs. [7,8,9]. Reference [7] offered an explanation for the observed effective repulsion between solitary waves. As argued there, the experiments had been performed in a manner that gave rise to a phase difference of π between adjacent solitary waves. According to an older study [10], solitary waves with a phase difference equal to π indeed repel each other.

In the present study we use a toroidal trap [11] as a model for examining the time evolution of a system that initially has two solitary waves, using numerical solutions of the corresponding time-dependent one-dimensional Gross-Pitaevskii equation. Remarkably, such toroidal traps have been designed [12], and very recently persistent currents have been created and observed in such traps [13].
The basic conclusion of our study is that the effective repulsion between solitary waves is due to conservation laws and is thus fragile. In what follows, we first present our model in Sec. II. In Sec. III we examine the dynamics of the gas in the case of weak dissipation, starting with perfectly symmetric/antisymmetric initial conditions and with no external potential along the torus. We observe that the symmetric configuration of two blobs merges on a short time scale; the blobs in the initially antisymmetric configuration remain distinct and separated. Using these results as "reference" plots, we examine in Sec. IV the effect of a weak random potential on perfectly symmetric/antisymmetric initial conditions. We also examine in Sec. V the time evolution of states that deviate slightly from perfect symmetry/antisymmetry in the absence of any random potential. In both cases, the symmetric (or nearly symmetric) initial configuration shows essentially the same behavior as the reference symmetric system. On the other hand, the antisymmetric configuration with the addition of an extra weak random potential and the nearly antisymmetric configuration with no external potential both lead to a merger of the two blobs after a moderate transient time. In the antisymmetric case, the final state is strongly influenced by weak deviations from the "ideal" case. Finally, in Sec. VI we discuss our results.

II. MODEL

We consider a tight toroidal trap and use the mean-field approximation. Tight confinement along the cross section of the torus allows us to assume that the transverse degrees of freedom are frozen, and thus the corresponding time-dependent order parameter Ψ(θ, t) satisfies the (one-dimensional) equation

  i ∂Ψ/∂t = −∂²Ψ/∂θ² + V(θ)Ψ + g|Ψ|²Ψ,

where g = 8πNaR/S, and V(θ) is the external potential measured in units of E0 = ħ²/(2MR²). Here, M is the atomic mass, R is the radius of the torus, N is the atom number, a is the scattering length (which is taken to be negative), and S is the cross section of the torus. The total length of the torus is chosen to be 16π in our simulations. As shown in Refs. [14,15], below a critical (negative) value of the parameter g, there is an instability from a state of homogeneous density to a state with localized density that breaks the rotational invariance of the Hamiltonian. This localized state corresponds to a solitary wave, and the critical value of g is g_c = −π for the parameters chosen here. We adopt a value of g below this critical value.

We add an extra term on the left side of the above equation to model dissipation and write

  (i − γ) ∂Ψ/∂t = −∂²Ψ/∂θ² + V(θ)Ψ + g|Ψ|²Ψ.

The real positive dimensionless parameter γ describes the "strength" of dissipation. Since we solve an initial value problem, we also need to specify the initial condition. This is

  Ψ(θ, t = 0) ∝ ψ(θ − θ0) + α ψ(θ + θ0).

Here ψ(θ) = λ/cosh(λθ), with λ = 3/2, is a static, well-localized blob. We choose θ0 = 2π/5 so that the two blobs are reasonably distinct but still have a small overlap, as shown in the graphs of Fig. 1 for α = ±1.

FIG. 2: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)|e^{iφ(θ, t)}, for the symmetric initial configuration, α = 1, for t/t0 = 0, 10, 50, 100, and 150. The axes are the same as in Fig. 1. In all the above graphs there is no external potential, V = 0.

FIG. 4: Snapshots of |Ψ(θ, t)| and φ(θ, t) for the symmetric initial condition, for t/t0 = 0, 10, 50, 100, and 150, with a weak random potential (shown in Fig. 14), α = 1. The axes are the same as in Fig. 1.

FIG. 5: The corresponding energy of the gas as a function of time.
III. TIME EVOLUTION OF THE "IDEAL" SITUATION To understand the effects of a weak random potential and of slight asymmetries in the initial condition (to be considered in Secs. IV and V), it is instructive to start with the situation where there is no external potential, V(θ) = 0, and an initial configuration which is either perfectly symmetric (α = 1) or perfectly antisymmetric (α = −1), i.e., Ψ(θ, t = 0) = C [ψ(θ − θ₀) ± ψ(θ + θ₀)]. We also fix the value of the dissipative parameter at γ = 0.05. Figures 2 and 8 show snapshots of |Ψ(θ, t)| as well as the phase φ(θ, t) of the order parameter Ψ(θ, t) for the symmetric and the antisymmetric case, respectively. The snapshots shown in Fig. 2 correspond to t/t₀ = 0, 10, 50, 100, and 150, and those in Fig. 8 to t/t₀ = 0, 10, 100, 300, and 400. Here t₀ = ℏ/E₀ = [ℏ/(2MR²)]⁻¹ is the unit of time. Figures 3 and 9 show the energy of the system as a function of time for 0 ≤ t/t₀ ≤ 150 and for 0 ≤ t/t₀ ≤ 400, respectively. As seen in these graphs, the symmetric configuration (Fig. 2) merges quickly into one soliton and, as time increases, eventually approaches the equilibrium solution. On the other hand, the two blobs do not merge in the antisymmetric case (Fig. 8). This is a direct consequence of the fact that the initial configuration has a node at θ = 0. Because of the symmetry between θ and −θ, Ψ(θ = 0, t) must be zero for all times t > 0. As a result, the two blobs never merge, as a simple consequence of parity conservation. The parity operator commutes with the Hamiltonian, and parity is therefore a conserved quantity. Only numerical errors could eventually lead to a single-soliton profile (with lower energy); that this does not happen provides a check on the accuracy of our numerics. FIG. 8: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel), for the antisymmetric initial configuration, α = −1, for t/t₀ = 0, 10, 100, 300, and 400; the axes are the same as in Fig. 1, and there is no external potential, V = 0. IV. EFFECT OF THE RANDOM POTENTIAL ON THE TIME EVOLUTION Using Figs. 2 and 8 as "reference plots", we may now examine the effect of a weak, symmetry-breaking random potential V(θ). This potential is chosen to consist of ten steps of equal width, whose heights are (uniformly distributed) random numbers varying between 0 and 0.01. Figure 14 shows the specific random potential chosen. In this case we start with perfectly symmetric/antisymmetric configurations, α = ±1. The time evolution of the symmetric configuration shown in Figs. 4 and 5 is almost identical to that of Figs. 2 and 3, i.e., the case considered in the previous section with V = 0: the two blobs merge rather rapidly. The antisymmetric case shown in Figs. 10 and 11 is of greater interest. Here, after a relatively short time, the system passes through a "quasi-equilibrium" configuration, seen as the plateau in the plot of energy versus time in Fig. 11. During this time interval there are two localized blobs. However, parity is no longer a conserved quantity in this case: there is no symmetry in the system to preserve the node that was built into the initial conditions. As a result, the two blobs eventually merge into one, in contrast to the results of Fig. 8. FIG. 10: Snapshots of |Ψ(θ, t)| (higher curve in each panel) and of φ(θ, t) (lower curve in each panel) of the order parameter Ψ = |Ψ(θ, t)| e^{iφ(θ,t)}, for the antisymmetric initial condition, α = −1, for t/t₀ = 0, 10, 100, 300, and 400, for the weak random potential shown in Fig. 14; the axes are the same as in Fig. 1.
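Before summarizing this result, note that the step-like random potential described above is simple to construct numerically; a short sketch consistent with the description (ten equal-width steps with heights drawn uniformly from [0, 0.01]), reusing the grid of the earlier sketch, could look as follows. The seed is an assumption added for reproducibility.

import numpy as np

rng = np.random.default_rng(seed=0)          # seed assumed, for reproducibility
n, n_steps, v_max = 2048, 10, 0.01           # ten equal-width steps, heights in [0, 0.01]
heights = rng.uniform(0.0, v_max, n_steps)
V = heights[(np.arange(n) * n_steps) // n]   # piecewise-constant V(theta) on the grid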
In other words, the apparent repulsion of the two solitary waves is not present at sufficiently large t. V. EFFECT OF SLIGHT ASYMMETRIES IN THE INITIAL CONFIGURATION In another set of runs, we set the random potential to zero and select a slightly asymmetric initial configuration, with α = ±1.01. Our initial condition is thus not a parity eigenstate, and our calculations show that again the two initially distinct blobs merge after a characteristic timescale. The qualitative features of this calculation are the same as in the case of the random potential described in the previous section. In the case of an almost symmetric initial configuration, α = 1.01, shown in Fig. 6, the two separate blobs merge rapidly, very much as in Figs. 2 and 4. On the other hand, the almost antisymmetric case, α = −1.01, shown in Figs. 12 and 13, exhibits a plateau in the energy and a period of "quasi-equilibrium" during which the two blobs have a relatively well-determined shape and location. Eventually, however, the two blobs again merge into a single solitary wave, much as in Fig. 10 but unlike Fig. 8. VI. DISCUSSION AND CONCLUSIONS According to the results of our study, the observed repulsion between bright solitary waves in the experiment of Ref. [5] implies that the conservation laws were not substantially violated during the time interval investigated. More precisely, it suggests that deviations from axial symmetry in the trapping potential must have been small, that the initial configuration was very close to a (negative) parity eigenstate, and that dissipation must have been weak. It is instructive to estimate the timescale, t₀, for our study. If one considers a value of R equal to the longitudinal size of an elongated trap, R ∼ 0.1 mm, then t₀ ∼ 10 sec, which is a rather long timescale for these experiments. Therefore, it seems likely that the characteristic timescale over which the experiment of Ref. [5] was performed was significantly smaller than the timescale required to see the separate blobs merge. Higher temperatures would enhance the dissipation in the gas and would decrease the characteristic time required for the blobs to merge. To the extent that the deviations from axial symmetry in the trapping potential and the antisymmetry in the initial configuration considered here are representative of the actual experimental situation, our results support the explanation offered in Ref. [7]. Direct experimental determinations of these quantities and of the strength of dissipation would thus be welcome. It would also be of interest to investigate the long-time stability (or instability) of the configurations observed in Ref. [5]. The questions examined here may also have important consequences for possible technological applications. For example, the propagation of such solitary waves in waveguides may serve as signals that transfer energy or information. Therefore, understanding and possibly controlling the way such waves interact with each other may be important. Recent experimental progress in building quasi-one-dimensional and toroidal traps should make such experiments easier to perform and worth investigating.
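As a quick arithmetic check of the timescale estimate above (a sketch; the atomic species, taken here as ⁷Li following the experiment of Ref. [5], is our assumption):

hbar = 1.054571817e-34        # J s
M = 7 * 1.66053906660e-27     # kg, mass of 7Li (assumed species, as in Ref. [5])
R = 1e-4                      # m, longitudinal trap size ~ 0.1 mm
t0 = 2 * M * R**2 / hbar      # t0 = hbar/E0, with E0 = hbar^2/(2 M R^2)
print(f"t0 = {t0:.1f} s")     # prints t0 = 2.2 s, the same order of magnitude
                              # as the ~10 sec quoted in the text

The result is a few seconds, i.e., consistent at the order-of-magnitude level with the estimate quoted above.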
2008-01-15T19:08:43.000Z
2008-01-15T00:00:00.000
{ "year": 2008, "sha1": "c313d4ce3594321e2e993d25f9785e05b663b198", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0801.2364", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c313d4ce3594321e2e993d25f9785e05b663b198", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
241743908
pes2o/s2orc
v3-fos-license
Comparative analysis of candidate vaccines to prevent the COVID-19 pandemic COVID-19, caused by SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2), is a highly contagious emerging disease that has produced a new worldwide pandemic. The lack of a specific antiviral treatment led to an enormous loss of life from this virus. During this time, impressive efforts were put into creating safe medications and immunizations against SARS-CoV-2. Of the more than 80 clinical trials that had started, only 56 vaccines reached the various clinical stages, among which 23 vaccine candidates were reviewed and approved for use. Various types of platforms are used for the production of such vaccines to initiate the immune reaction that produces antibodies able to neutralize the virus. The countries at the top of the race for producing vaccines are Russia, India, the U.S.A., China and the U.K. Among the vaccines produced by these countries are Covaxin (India), Sputnik V (Russia), CoronaVac (China), AZD1222 (United Kingdom), BNT162b2 (Germany) and mRNA-1273 (United States of America). We present certain potential factors that must be considered when creating vaccines, a comparative analysis of data obtained from SARS-CoV-2 vaccine trials for the different vaccines, and the environmental impacts of vaccine preparation. Introduction The rapid spread of coronavirus disease throughout the world has become a severe concern [1]. The development of an effective vaccine against the virus was urgently demanded because of the rapidly spreading infection and the rising number of deaths. Many vaccines successfully entered clinical trials, including vector vaccines, nucleic acid-based vaccines and inactivated vaccines. Numerous research companies and specialists all over the world began attempts to prepare effective vaccines that could give fast and durable protection against this virus. The disease emerged in December 2019 in Wuhan, a city in China, and the World Health Organization (WHO) proclaimed a pandemic caused by the severe acute respiratory syndrome coronavirus on March 11, 2020 [2]. Since then, coronavirus-related mortality has been increasing in a vast part of the world. D. Smrž et al. note that widespread safety precautions, including mandatory mask use, social distancing and handwashing, can have an impact on this illness. Unfortunately, the epidemic would not be over until an effective coronavirus vaccine was produced [3,4]. COVID-19 is the third unique beta-coronavirus of the past two decades that is extremely contagious and easily disseminated from one person to another. COVID-19 has so far been diagnosed in 96,981,033 persons throughout the globe [5]. According to the WHO [6], the outbreak had resulted in 120,428,199 reported cases and 2,665,548 fatalities globally as of March 20, 2021. This virus has a genome nearly identical to those of the RaTG13 and RmYN02 viruses discovered in Rhinolophus affinis and Rhinolophus malayanus, respectively. It is widely agreed that the virus was transmitted by bats [7]. The disease caused by the severe acute respiratory syndrome coronavirus, known as COVID-19, can cause gastrointestinal infection, hyperinflammation, cardiovascular pathology, coagulopathy and cardiac failure.
The most common causes of death include co-morbid illnesses such as hypertension, obesity and diabetes, as well as the patients' weakened immune systems after being infected with the virus [8]. Y. Dong et al. report that various studies have been undertaken to better understand the illness, and approaches have been established in the hope of halting the spread of the virus while developing efficient yet safe treatments or vaccinations [9]. The virus infects host cells through binding of its homotrimeric transmembrane spike (S) glycoprotein to ACE2 (angiotensin-converting enzyme 2) receptors. Once the genome was sequenced, extensive efforts began to create vaccines mostly aimed at the viral spike protein. Various methods were introduced for delivering the viral S protein into the human body to elicit antibodies. These vaccine platforms include viral vectors, inactivated virions, and lipid-nanoparticle mRNA and DNA approaches. Various vaccines were clinically tested, and their effectiveness as well as the immune responses to vaccination were reported [8]. According to the WHO, 56 potential vaccines are currently in clinical testing, while 23 vaccines have been authorized/approved. Overview of COVID-19 SARS-CoV-2, MERS-CoV and SARS-CoV-1 are cytoplasmic single-stranded, positive-sense RNA viruses with structural proteins (especially the spike, envelope and membrane proteins, together with the nucleocapsid) [10]. To proliferate, the virus must first get into cells of the human body. It does so through receptors for ACE2 (angiotensin-converting enzyme 2), an enzyme generally employed to control blood flow through capillaries [11]. The S protein plays an important role in generating immunological responses during the progression of the disease [12]. The S protein is needed for the virus to access host cells via the virus receptor, angiotensin-converting enzyme 2 (ACE2), and for cell entry [13][14][15]. The novel coronavirus uses the same receptor but binds even more strongly; it attaches via the spike or 'S' protein on its surface, one of its four primary structural proteins [11]. The trimeric S protein contains two subunits, S1 and S2, which mediate receptor binding and membrane fusion, respectively. The S1 subunit contains a region termed the receptor-binding domain (RBD), which can bind ACE2 [16,17]. Binding of the S protein to the ACE2 receptor leads to intricate conformational changes and drives the S protein into a post-fusion configuration. Decoration of the post-fusion conformation with N-linked glycans has been proposed as a possible mechanism affecting the host immune response to the virus [18]. Past investigations have shown that SARS-CoV S-protein vaccines generate robust cellular and humoral immune responses in animals, including mouse challenge studies [19][20][21]. Likewise, the S gene is considered a crucial target for vaccines [22]. The coronavirus S protein, in particular the RBD, can trigger neutralizing antibodies (NAbs) as well as T-cell immune responses [23][24][25][26]. In one experimental study, RBD-specific IgG was found to represent half of the S-protein antibody response [27]. RBD-specific T cells have also been discovered in afflicted patients [28]. Furthermore, NAb titers are significantly related to anti-RBD IgG levels, so RBD-specific IgG titers may serve as a surrogate for neutralization [26,28]. B. D.
Quinlan et al. showed, in addition, that RBD immunization was initially effective, with NAbs produced in mice without mediation of antibody-dependent enhancement [29]. The earlier RBD-based vaccine development for both SARS-CoV and MERS-CoV suggests the RBD as a promising viral target for the future. Proteins other than the S protein may also act as antigens, such as the N and M proteins, non-structural proteins (nsps) and accessory proteins. However, an imbalance of the host IFN-I and IFN-III responses, together with elevated pro-inflammatory cytokines, has been linked to viral proteins and their interactions with host factors [30,31]. Various coronaviruses found in humans The various types of coronavirus, and the illnesses they cause in the human body, are SARS-CoV-2 (COVID-19), SARS-CoV (severe acute respiratory syndrome), MERS-CoV (Middle East respiratory syndrome) and HCoV-NL63, HCoV-229E, HCoV-OC43 and HKU1 (minor respiratory symptoms). Nobody has any immunity to this virus, as the infection is new; this indicates that a very large proportion of people may be infected. Although really serious cases make up a very low proportion, a tiny proportion of a very big number is still many persons with acute sickness. All seven human coronaviruses are known to have been transferred from other animals to people. These viruses had been present in animals for years, but illness in people was exceedingly unusual, though it could be highly severe. MERS, SARS and COVID-19 most likely originated in bats. According to UK Research & Innovation, it may be possible that another animal species, such as the pangolin, acted as a temporary host and transferred the new coronavirus from its original host species to people [32]. Difficulties in vaccine production Hamsters develop lung alterations after SARS infection but do not seem to get ill. Ferrets and various monkey species develop lung disease with the SARS coronavirus, but not consistently [33]. W. Liu et al. stated that there are difficulties both in generating a protective immune response and in obtaining evidence from testing SARS vaccines in animal models: if the protection from infection is insufficient, an improper immune response might produce adverse effects. In early SARS testing, various vaccines generated antibodies against the spike protein in ferrets and monkeys, but only partially protected them against lung illness [32]. Some vaccines were also associated with lung inflammation in mice upon subsequent exposure to the virus [34]. These examples provide some understanding of the problems of vaccine production. They demonstrate the importance of stimulating the correct immunological responses, as well as why safety testing is essential. Types of vaccine Studies and data from the United Kingdom indicate that the human immune response is extremely carefully tuned: viruses or other pathogenic agents must be identified and killed, while the body's healthy tissues must not be damaged by an over-active immune reaction. The immune system has evolved to combat and protect against a wide range of illnesses. For instance, lymphocytes called helper T cells exist in several kinds that aid immune responses. The first, Th1, targets bacteria and viruses that infect cells; the second, Th2, regulates larger parasites such as helminths (worms). Enabling the incorrect pathway might raise inflammation and make the condition worse. Bukreyev, Alexander et al.
reported that, as part of normal immune responses, lymphocytes and other cells create immune signaling substances known as cytokines, and this coordination amplifies immunological reactions. Therefore, if these responses go wrong, inflammation may in extreme situations lead to the failure of important organs, such as the lungs, heart and kidneys. In the case of this virus, several late consequences of the disease may be attributable to this. In a very small fraction of patients, an excessive immune reaction may contribute to considerably prolonged COVID-19; this is as yet unknown territory and an essential field for research. Vaccines against the coronavirus are therefore needed that generate the proper balance of immune responses and safeguard against infection, and extensive clinical studies are essential to guarantee that new vaccines are highly safe and effective [35]. Vaccine design follows three key approaches. They differ in whether they use a complete virus or bacterium, only portions of the germ that stimulate the immune system, or only the genes for certain proteins rather than the full virus [36]. Inactivated Inactivated vaccines work by exposing the immune system to a virus that has lost its ability to cause illness. The virus is grown in cell lines, which serve as a medium for producing enormous amounts of viral antigen; virus replication is usually followed by purification and concentration before the vaccine is inactivated [37]. To inactivate the virus, formaldehyde or beta-propiolactone is used in most approved human antiviral vaccines [38]. Inactivated vaccines need multiple doses or adjuvants to obtain adequate effectiveness [39]. Examples are Sinopharm (BBIBP), CoronaVac, Covaxin, Sinopharm (WIBP) and CoviVac. Viral Vector Viral vector (VV) vaccines are produced by genetically engineering a carrier virus to transport a coronavirus gene and to multiply slowly in infected cells. Multiplication results in synthesis of a coronavirus protein, with subsequent activation of the immune system. Such viral vectors are designed to be either replicating or non-replicating [40]. The host's innate immunity may have a major impact on viral-vector vaccine effectiveness; non-human or unusual serotype vectors are used to circumvent this [41,42]. Examples are Sputnik V, Oxford-AstraZeneca, Johnson & Johnson and Convidecia. Live-attenuated P. D. Minor indicates that live attenuated vaccines have been successful against illnesses like smallpox and poliomyelitis [43]. Three live-attenuated SARS-CoV-2 vaccines, which use a weakened virus, are being assessed preclinically. These vaccines can nonetheless, in rare situations, revert to virulence. While the use of such a vaccine is possible, there have been worries about the presence of epitopes that do not elicit NAbs, or about the protective immune response being slowed down [44]. Subunit Vaccines Subunit vaccines are made up of purified antigens rather than complete microbes, and various carriers act as transporters for these antigens. The antigens of anti-SARS-CoV-2 subunit vaccines are viral proteins, peptides and nanoparticles. Since subunit vaccines alone are generally low in immunogenicity, adjuvants are necessary to elicit a greater immune reaction [45]. With adjuvants, however, such vaccine platforms are quite effective. Expression systems are the most often utilized technique for obtaining highly expressed recombinant proteins.
Therefore, for antigens that need post-translational modification, mammalian or insect cells can be used [46]. Bacterial, insect or mammalian cell-based expression systems can also be used as recombinant technologies to produce virus-like particles (VLPs). Anti-coronavirus vaccines built on VLPs are now being evaluated in phase I/II clinical studies. Examples are EpiVacCorona and RBD-Dimer. DNA DNA vaccines transmit genes of the coronavirus to living organisms. The concept of immunization is founded upon the translocation of DNA into the cell nucleus, followed by transcription and antigen production. Plasmids are often used as vectors in DNA vaccines, and routes of delivery targeting myocytes and keratinocytes have been considered (intramuscular, intradermal and subcutaneous). At the site of injection, DNA vaccines can also potentially transfect antigen-presenting cells efficiently, and several delivery mechanisms are utilized to elicit a strong immune response [47,48]. mRNA J. Ross notes that mRNA vaccines were originally explored in the 1990s, but their usage was limited due to the instability of mRNA [49]. Because mRNA contains the genetic information needed to make an antigen, RNA vaccines result in the creation of coronavirus proteins in vivo. An RNA vaccine is produced by in vitro transcription of a DNA plasmid template with a recombinant RNA polymerase. In order to achieve a stable RNA molecule, a synthetic cap analog and a poly(A) tail are added. Several transport systems (including cationic peptides, nano-emulsions and lipid nanoparticles) and other techniques permitting easier delivery (electroporation and the gene gun) are used [50]. Examples are Pfizer-BioNTech and Moderna. Impact of new variants on the COVID-19 vaccine Information on new coronavirus variants is continuously collected and assessed. The WHO collaborates with academics, health officials and scientists to investigate how different variants alter the behavior of the virus, particularly their impact on vaccine efficacy. As understanding grows, we must do our utmost to limit viral spread and so prevent changes that can impair the efficiency of current vaccines. Furthermore, producers and vaccination programs might have to adapt to the evolution of the coronavirus: for instance, vaccines might need to incorporate more than one strain during development. Trials should also be established and sustained at sufficient scale and with enough diversity to permit a clear evaluation of the results, and the findings should be carefully assessed. Impact studies of the vaccines deployed are equally crucial for understanding their effects. It remains vital to stop the spread at the source: reducing viral transmission, and consequently viral mutation, continues to counteract emerging variants [69]. Vaccine manufacture, transport and waste disposal use resources and can contaminate the environment. The creation of a vaccine is nevertheless a very efficient output of biotechnology.
The energy consumption is low, which makes it possible to considerably minimize waste, while the resources themselves are only modestly utilized. Ecological concerns of immunization primarily address two facets: treatment of liquid waste (spent culture liquid) and destruction of solid waste (producer biomass). Liquid waste is the culture liquid after it has been separated from the mycelium and removed from the final product. Spent-liquid management is a system of several consecutive units: first a suction tank, then an aeration tank with an air supply, a reinforced-concrete bottom pool connected through pipes. Air passes through the whole thickness of the fluid and saturates it with oxygen, contributing to an intense oxidative process. A further possible route is the use of mycelial waste, after preliminary treatment of the microorganisms with enzymes, as part of the nutritional medium. For the disposal of solid (and particularly hazardous) wastes, the most rudimentary approach, after the mycelium has dried, is to bring it to urban landfills. Another approach is to lay the mycelium on the ground, mix it with soil and leave it for decades (composting is not a cost-effective method). Two further potential solid-waste disposal routes should be available: one is sterilization of the mycelium, which can then be used as a supplement in materials for livestock; the other consists of extracting different mycelium fractions, such as lipids, and using them in detergents in place of scarcer ingredients (wheat oil) or synthetic substitutes [70]. Everything needed to keep the vaccines cold, from enormous freezers to the aircraft needed to get the jabs out, down to the millions of discarded vials and syringes, is potentially problematic. Organizations are using HFC gases to freeze vaccines to very low temperatures, below −70 °C, that enable storage and delivery over extended distances [71]. Large numbers of the public are unaware of the dangers of irresponsible waste disposal, which calls for education. There are several strategies to tackle this problem: packaging, adjusted production, recycling, delivery and manufacturing processes are all extremely promising areas to focus on [72]. Conclusion Vaccine efficacy - The vaccines made with mRNA technology were the definite winners on effectiveness, in the 95 percent range across age groups, followed by the protein subunit platform at just under 90 percent in the United Kingdom. On the currently available, variable data, the inactivated virus platform is the lowest. Dose regimen - Among the different platforms studied here, all of the 12 vaccines except 2, CanSino and Janssen (also known as Johnson & Johnson), had a two-dose regimen. Reactogenicity/safety - The best pick of all is actually the inactivated virus platform. Due to some persisting worries over paused trials with negative outcomes, the viral vector platform rates below the other three. Target price/accessibility - mRNA vaccine manufacturing can be reasonably scaled up; however, these are already among the costliest vaccines, while the viral vector vaccines are the most inexpensive to produce. Inactivated virus vaccines are reasonably simple and cost-effective, as shown by COVAXIN, though there is nonetheless some evidence that the prices of vaccines imported from China are relatively high. Logistics - mRNA vaccines, with their heavy cold-chain demands, are the least favorable.
A vaccine is generally developed from zero over a period of several years; nevertheless, vaccines for combating coronavirus propagation have already been licensed. This reflects the efforts of various states to respond to the pandemic, achieved through pre-clinical studies overlapping with early Phase 1 trials, strategic risk measures and adaptive trial designs. I would like to thank and show my gratitude towards Dr. Lalita Chopra and respected HOD Dr. Renu for their guidance and support throughout the process of writing this paper. I am also grateful for the help of my friends and family, without whom I would not have been able to complete this paper in the given time. Vaccine type: Live Attenuated Vaccine (LAV). Advantages: 1. It stimulates the innate immune system via TLR 3, TLR 7/8 and TLR 9, engaging B cells and CD4 and CD8 T cells in immunological activation. 2. It can be produced from "cold-adapted" virus strains or reassortants through genetic reversion. Disadvantages: 1. To verify efficacy as well as safety, an LAV requires comprehensive supplementary testing. 2. Nucleotide substitution is possible during viral replication, which can result in the development of post-vaccination recombinants.
2021-10-15T16:19:07.643Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "fcb902591ad268531a0c9957fe6d731c21c4c37f", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/85/e3sconf_icmed2021_01038.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "de22601ba582872b49b2b6d6f51327559c23bbaf", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
150241403
pes2o/s2orc
v3-fos-license
Destiny or Free Will Decision? A Life Overview from the Perspective of an Informational Modeling of Consciousness. Part II: Attitude and Decision Criteria, Free Will and Destiny As shown in Part I of this work, the driving of our life is determined by series of YES/NO-type elemental decisions, YES/NO being actually the information unit (bit), so we operate in an informational mode. The informational analysis and modeling of consciousness reveals seven informational systems, reflected at the conscious level by cognitive informational centers suggestively called Iknow (Ik, memory), Iwant (Iw, decision center), Ilove (Il, emotions), Iam (Ia, body status), Icreate (Ic, informational genetic transmitter and educator), Icreated (Icd, genetic generator inherited from the parents) and Ibelieve (Ib, connection to the informational field of the universe). As the mind can operate in a bipolar mode, we dispose of two alternatives for using our free will: a positive (YES) or a negative (NO) mode. These modes are applied as a function of personal criteria, especially those inherited from the family, which are the fundamental ones, but also those acquired during life. The persistence in operating in a positive or a negative mode is fundamental for the design of our personal trajectory and therefore for our destiny. The destiny and the role of our free will in deciding the course of our own life are therefore analyzed with respect to our YES/NO attitude, which is an integrated informational output composed of the contributions of all the cognitive centers; when this attitude is favorable to the rules of life, the deterioration produced by a permanently negative attitude and thinking mode is avoided. It is shown that our intervention should be directed both to the operating mode and to the decision criteria, when it is ascertained that they do not correspond to reality, allowing a favorable adaptation. As parents, and therefore as formers and educators of the next generation, an activity specific to the center Ic, it is necessary to observe, discover and encourage the talents and predispositions of the children, to put them on a privileged path of destiny, because they will benefit from an advanced informational "budget" with respect to their generation and a certain head start in preparation. A comparative analysis between the personal operating mode and reality, for the necessities of adaptation, should permanently guide the selected trajectory, in order to include new criteria, change older ones, or reprogram the operating mode, with the participation of all centers. Stereotyped thinking chains triggered by repetitive negative thoughts act on the informational system as stressing agents, leading to chronic, sometimes serious disorders. Such thinking patterns should therefore be detected and substituted or eliminated without delay, according to the suggestions recommended here. Furthermore, it is shown that a specific self-control of thoughts, as positive or negative information, and of the lifestyle helps maintain health and prolong life under high-quality conditions.
Introduction In Part I of this work we presented the specific physical and informational arguments [1][2][3] showing that our informational system operates in a bipolar YES/NO mode and can be described by seven informational subsystems, defined as: the Center of Acquisition and Storing of Information (CASI), reflected at the conscious level as a center suggestively called Iknow (abbreviated Ik), consisting practically of memory and the associated external and internal sensors; the Center of Decision and Command (CDC), detected at the conscious level as Iwant (Iw), expressing practically the attitude as an adaptive informational output; the Info-Emotional System (IES), corresponding to Ilove (Il); the Maintenance Informational System (MIS), sustaining the automatic matter-related processing, which corresponds to Iam (Ia); the Genetic Transmission System (GTS), reflected in Icreate (Ic); and the Info-Genetic Generator (IGG), corresponding to Icreated (Icd), carrying the info-genetic information from the parents. The Anti-entropic Connection (AC) represents the info-relation with dark matter, which is believed to have properties antisymmetric to those of matter, specifically antigravity [4], anti-entropy [1,5] and a time arrow reversely oriented, from future to present [6]. This connection is detected at the conscious level as the center Ibelieve (Ib), expressing trust and life confidence. The AC gate allowed an explanation [2] of the phenomena associated with Near-Death Experiences (NDE) [7] and a discussion of a possible "after-life" existence of consciousness [8] and immortality [9]. On the basis of the informational modeling of consciousness [10] presented in Part I of this work and the results obtained concerning the cognitive centers, in this Part II the consequences of the bipolar YES/NO operating mode of our decisional system on our life trajectory and destiny are discussed. It is shown that by positioning our free will on the positive (YES) side, according to the requirements of the life prerogatives, we have more chances to benefit from a satisfactory life and destiny. The mechanisms of a repetitive negative operating mode are analyzed, pointing out the emerging processes that sometimes induce serious and chronic disorders, not only of the informational system itself (the brain and nervous cells), but also of the other vital organs and systems of the body, like the heart and the immune, digestive and sexual systems, leading to premature aging and a shorter life duration. The definition and understanding of the functions of the informational centers allow observations and conclusions on how we can live our lives optimally, according to their laws and not outside of them, contributing in such a manner, through our decisions and through self- and lifestyle control, to a harmonious collaboration with our body and with the environment and the society we live in, for a successful life, aging and destiny. The attitude as a bipolar informational operator and the decision criteria As shown in Part I of this article, our life is driven by YES/NO-type decisions. The CDC informational system, reflected at the conscious level by Iw, acts from this point of view mainly as an informational operator, continuously influencing or even determining the generation of our trajectory in life. However, in order to decide, it is necessary to dispose of decision criteria allowing a distinction between "Good" and "Bad". The selection is thus a binary, informational process.
We get the first notions of Good and Bad during our first years of life, when we learn how to behave in the environment and in our relationships with family and society, so the criteria learned in that period become the most powerful and durable throughout life, constituting the fundamental structure of our decision system. From some life experiences we extract positive conclusions, if the application of the final decision corresponded to our expectations, or negative conclusions, indicating that the theoretical decision did not match the practical situation. From each experience a conclusion is inferred, which can serve as a decision criterion for future experiences. To complete the discussion of the role of decision criteria, let us remember again that the education received during the first years of life (the so-called "seven years from home") becomes defining for the personality of the later adult. The concepts of culture, faith and religion, everything included in the personal mentality, overlapping with the predispositions inherited from the parents, fundamentally determine the thinking mode and characteristic behavior of the future adult, translated from the informational point of view into the formation and consolidation of the center Icd. If we also refer to the characteristic impulses received from GTS and MIS, reflected in the centers Ic and Ia respectively, we can understand that these can also play an important role in defining a decision. MIS provides the body's power through feeding and its absorption/desorption processes, and therefore the energy for its operation. GTS plays the role of a species-continuity factor, mobilizing the body's resources not only for reproduction but also for ensuring the survival and harmonious development of the next generation. Similarly, the center Ib can play a defining role, not only through affiliation to a particular belief, but also through its stabilizing role for the body's health, to which we often instinctively appeal, especially in difficult moments of life. We will therefore define the Attitude as an informational output of the OIS (Operative Informational System), elaborated with the contribution of the entire information system of the organism [11]. Another important remark about the Iw center: in the light of the previous comments, it follows that Iw can also be considered as a component of the Attitude. Indeed, depending on the concrete goal, what we truly want through Iw is not always what we actually express through the Attitude. That is because, as pointed out above, all the other centers participate in the elaboration of the Attitude, through inputs of information and decision criteria. So Iw remains one of the components, often the main one, but not the unique component. We are therefore a complex information system, defined by seven informational centers, each of them with a distinctive but interrelated functional role, working for short- and medium-term adaptation by means of the OIS and for the (long-term) survival of the species by reproduction. We can therefore refer to the informational system of the organism as an integrated information system, resulting from the structural and functional co-participation of all its components: informational processors (systems), sensors, execution elements and informed matter in general, as the informational "hardware". The attitude with respect to a certain criterion can therefore be positive (YES) or negative (NO).
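As a purely illustrative reading of this description, and not part of the source model, the integrated attitude can be pictured as a signed, weighted vote over the criteria contributed by the centers. A speculative toy sketch follows; the center names come from the text, while all weights, votes and the threshold are invented for the example.

from dataclasses import dataclass

# Speculative toy rendering of "attitude as an integrated informational output":
# each center contributes a signed criterion (+1 = YES, -1 = NO) with a weight.
@dataclass
class Criterion:
    center: str    # one of Ik, Iw, Il, Ia, Ic, Icd, Ib (names from the text)
    vote: int      # +1 (YES) or -1 (NO)
    weight: float  # relative influence (assumed, not from the source)

def attitude(criteria: list[Criterion]) -> str:
    """Integrated YES/NO output: sign of the weighted sum of all centers' votes."""
    s = sum(c.vote * c.weight for c in criteria)
    return "YES" if s >= 0 else "NO"

print(attitude([
    Criterion("Iw", +1, 0.4),   # the decision center, often the main component
    Criterion("Il", -1, 0.3),   # the emotional input may pull the other way
    Criterion("Ik", +1, 0.2),
    Criterion("Ib", +1, 0.1),
]))  # prints "YES": Iw dominates here, but it does not decide alone

The point of the sketch is only the structural one made in the text: Iw is a component, often the main one, yet the output is shaped by all the centers together.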
If our decisions with respect to the same criteria lead to a coincidence between expectation and result, then we will say that the selection criteria are "Good". However, if after repeated attempts the result does not coincide, then we should conclude that the applied criteria are "Bad". A change is thus necessary within our thinking system. Any change in our thinking system, or any change in our living conditions, must be subjected to a fundamental rule: not to contravene the natural laws of life, taking into account thereby one's own life and that of others. In order to consciously decide our life path and therefore our destiny, we must clarify toward which side we actually want to go: the one where we defend the prerogatives of life and health, keeping ourselves on its positive side by conserving the joy of living, observing the gains and not the losses, the positive aspects and not the negative ones, or the contrary side. For this, we have to start by observing the sort of information we may accept or not. The selection of the information we operate with is like that of food: "Good" or "Bad". Proper nutrition, ensuring an adequate functioning of the body, is selected from the vast existing variety so as to be best suited to the current state of the body, reflected by Ia. The proper selection of information, which is the "food" of the informational system, is therefore equally important, providing individual mental and bodily health. We have the possibility to choose by our free will: if we choose YES, positive information, we will always find ourselves on the side that supports life. The choice between YES and NO determines not only our long-term life trajectory, our destiny, but also our own health, which eventually and definitively marks a destiny as happy or not. The regular connection to Ib is always beneficial, bringing to the body the signals it needs: encouragement, peace, confidence and optimism. However, changing the thinking system is not an easy task, requiring a prolonged effort of self-observation and control of the received and decisional information. As the attitude is a consequence of a sum of decision criteria coming from all the informational centers, it should be carefully analyzed. Moreover, as will be pointed out below, only repetitive cycles of experience stabilize a new informational chain. This change is therefore a time- and effort-consuming mechanism, integrated into a reprogramming process of the thinking system. We also have to make a distinction between a controlled, conscious and progressive change process and a spontaneous attitude change from YES to NO and back. When this passes from an innocent, capricious game to a chronic, uncontrolled mode of living, leading to a bipolar, unstable personality, then serious consequences for personal health can be registered, as will be discussed below. A stable and reliable balance between YES and NO should therefore be permanently maintained. Free will and the destiny The life cycle described in Part I is the destiny of the species, the major coordinate of the life course, and there is, at least for the moment, no possibility of modifying it, only of improving it. This destiny applies naturally to every member of the species, and also to all the creatures on Earth.
The amount of information contained in the original egg is sufficient to trigger the growth and development of the fetus and then of the child, subsequently guiding the life of each individual until his disappearance. But the genetic chains can intervene in our lives also in another manner, namely by manifesting in Icd the person's specific predispositions, talents and skills. This is a gain that has to be detected and stimulated from a very early age, as it is a "lesson" learned and transmitted by the previous generation or generations. By following the inner "call", once the aptitudes and distinct intrinsic attributes are discovered, it is much easier to traverse the steps of professional training, because of the amount of initial information inherited from the parents by genetic transmission, which is a privileged debut with respect to the competing partners in the same competition. The start "in advance" therefore provides an easier and more successful trajectory. Inhibiting the skills of children for the sake of compelling them to follow what the parents insistently and authoritatively want for their offspring is, from this point of view, a wrong attitude. The Ic center of the parents, focused on the education and training of the children, must also be adjusted in agreement with reality. The discovery, stimulation and support of children's predispositions, skills and talents are not only recommended but also necessary, to put the children on their own successful life path, because in this way they benefit from the informational "dowry" gained by the previous generation. Following such a trajectory, the destiny is predictable and life is easier to steer. A more difficult situation is when life does not seem to smile on us, or at least not as much as we would expect, so that destiny seems not to help us. Two possibilities can be distinguished in this case, as shown below. 1. We are too exigent, we want too much; that is, the Iw center is super-active and unfitted to reality. We have two alternatives to choose from in this case: to conform to reality, or to exchange the environment we live in for another one corresponding to our aspirations. 2. We always follow the same path, already used for a long time, the same way that made our parents unhappy or dissatisfied, our decisions being stereotyped, always or most often the same, while we declare ourselves the victims of an unhappy destiny. However, observing carefully what actually happens, we will need a major decision: to change our mentality, be it inherited or acquired through parental education. This decision is not easy to apply, but it becomes absolutely necessary, involving the substitution of a series of habits and thinking stereotypes with new ones, able to lead us on a different path from that of failure. It is therefore a matter of deciding the actual change of our system of values, criteria and mechanisms of thought. We can call this process a conscious remodeling or a reprogramming process, as will be explained in detail in a later dedicated book. Indeed, we cannot leave our own life in the hands of random events, or of destiny; our intelligent intervention in our own system of values through our free will is necessary and justified, to remedy the "defects" and to be able to benefit, by adaptation, from a new system of thinking oriented towards a successful life and destiny.
The maintenance of the emotional system Il on the positive side is fundamental for the quality of life, because it actually represents the reaction of one's own body to the received information, so it affects or can affect the functioning of all the organs of the body. A sure way to poison our own existence is to "feed" ourselves with harmful, negative information. Any information that arouses or could arouse negative emotions is by definition negative. We will live as captives in the life of our own choosing: if our choice is always negative and we are prone to receive and operate with such information, then our destiny will always be unsuccessful, and we will blame "fate" as guilty. Worse than that, indulgent with ourselves but not with others, we will always see others as responsible for our own failures or dissatisfactions, and we will never find the right way out of this vicious circle. The form in which thinking, and thus the associated negative emotions, acts on the body is studied and reported as stress. By stress it is generally understood the pressure on the nervous system, deviating the organism from its normal, equilibrated functioning, but in a reversible way. However, if the stress is repetitive and of long duration, it can irreversibly affect the normal functioning of the organism, and this is translated in the body by reactions both of the brain itself and of the body organs [12][13][14] (Figure 1). The dysfunctions of the normal mode of operation of the body organs are manifested starting with the simplest forms, such as lack of appetite, apathy toward any kind of activity, lack of energy and lack of sexual appetite, up to extreme forms. One of the most powerful effects is manifested on the brain and the nervous system, through the installation and maintenance of depression, manic forms, anxiety and even schizophrenia [12,14]. Therefore, in order to assess whether fate will prove favorable to us or not, it is necessary to analyze the way in which we regard life, by means of the two alternatives, YES and NO, available to free will. So, it is good to be very careful with the information we acquire and memorize in CASI, which thereby becomes the information source (Ik) for Iw. As we have commented, nature, as an informational system, both by its structural properties and by its laws, offers at the macro level two fundamental possibilities to decide, YES and NO. The YES part is the positive one, which protects life, expressing its associative character, the one which helps it, through the integration of information into matter, to form and support living structures [1]. Love (Il) is an associative property, manifesting itself as a union force between a person, by means of his own informational system, and the object or subject of love. From this point of view, love has an anti-entropic, constructive character, helping life and its prerogatives. Hate is the contrary of love, having an entropic, disordering and destructive role, so it does not help the prerogatives of life.
Like hate, envy and jealousy are destructive emotions, negatively affecting one's own organism and that of others, inducing long-term chronic diseases especially of the heart and the circulatory system (palpitations, arrhythmias, coronary diseases, even infarction and sudden death [15]), of the stomach, pancreas and intestines (the digestive system in general (Ia)), of the muscles and joints (EE), and of the reproductive system (Ic) [13,14], as indicated in Figure 1. The respiratory system can also be affected, by asthma [13,14]. Thus, stress leads to the release of histamine, which can trigger severe bronchoconstriction in asthmatics and respiratory disorder. Stress increases the risk of diabetes, particularly in overweight people, because psychological stress changes insulin requirements [14]. Stress also alters the acid concentration in the stomach, which can lead to peptic ulcer, ulcers or ulcerative colitis, and may increase the risk of prostate disease and the risk of severe reproductive system malfunctions in both women and men [13,14]. Chronic disorders can also be contracted indirectly through effects on the immune system. The long-term effect of stress on the body's immune system is translated in two ways: (a) its weakening, due to the brain's reaction of ordering the production of cortisone, whose anti-inflammatory effect reduces the action of the fighting agents and thus the efficiency of defense against infections and colds; (b) the generation of autoimmune diseases such as rheumatoid arthritis, lupus or other autoimmune diseases that people recognize as inflammation, due to the excess of hormones transmitted as the brain's response to the stress factors [12]. The mechanism by which intoxication can occur, the poisoning of our own organism with negative information, is easy to understand. Indeed, since the perception and storage of information in the brain is done by associating new information with similar, older information already consolidated in memory [1,16], it follows that once we have "contaminated" ourselves with some negative obsessive thoughts, we will begin to absorb, more and more easily, increasing amounts of information similar to that which has already "furnished" our mental "scene", namely the negative kind. Through repetition, this growing mass of thoughts is assimilated and integrated into the automatic thinking system [3], which condemns us to be the prisoners of our own way of thinking. We become so dependent on the accumulated toxic information, converted into our own way of living, that we no longer want (Iw) to detach ourselves from this whirlpool that pulls us ever deeper toward an inferior level of life. Acquiring and integrating negative information into our informational system comes to control our lives like a drug, and an effort of observation and will by means of Ia, Il and Iw is necessary to defeat this negative vortex into which we are attracted, harmful to the body and to life. We will say that life does not smile on us, that destiny is not favorable to us and that we have no luck in life, but in fact we are nothing more than the victims of our own negative thinking system. An absolute means of negative nervous poisoning is also the use of comparative examples, although this would apparently be harmless. Among the laws of memory is the fact that we memorize primarily the beginning and especially the end of a message.
The vocal formulation of a message, whatever it may be, strengthens the chances of memorizing it, while repetition reinforces it. If we resort to the method of contrast, trying to explain how good things are today in relation to what was bad, for example, in the past, we will unfortunately retain in memory, with priority, the last part of this message, the negative one. Therefore, instead of ignoring the negative part, increasing the chances of it being forgotten, we activate it and strengthen it in consciousness, keeping it as a toxic source of information. In this way, we will never actually be content with the present, and we will see destiny as an adversary, through the perpetual evocation of the negative past, even if the present is completely different from it. And as the present creates the foundations and perspective for the future, we will always remain tributary to the past, from which we cannot separate ourselves, even if we try (but in a wrong way) to reject it permanently. Another method of informational poisoning is the imagining of catastrophic scenarios, which can be evoked, for instance, as a contrasting alternative even when everything has gone well, or of the personal image as a victim, as an ex officio "subscriber" to an unfair destiny. This manner of living prevents any manifestation of joy at one's own successes or those of others. It seems strange, yet this style is often practiced, especially by people who have probably lived through dramatic or traumatic situations in their childhood or youth and have become extremely "cautious" not to let themselves be the "prey" of a sincere and natural joy. Certain people confuse wisdom with the prohibition of enjoying their life, defying the benefits of this wonderful state of spirit and taking on a severely "serious" role in order to create an image of authority and prepotency, especially with the passage of years and aging. This style is also inadvisable, because it subjects the body to permanent stress, forbidding it to truly relax and recover its consumed forces. And as information is acquired through association with the existing one, the followers of such a style become the captives of their own thinking mode, seeking, selecting and accepting with absolute priority the accumulation of only that information which matches, in a similar way, the information already acquired, i.e., the negative one. Therefore, they become powerful "magnets" for the attraction of dramatic scenes, imagined or borrowed from others through today's abundant audio-visual means (especially TV), and they become their prisoners, willingly designing (but unfortunately in a wrong way) the negative destiny of their life. This kind of thinking leads to anxiety and depression. Joy is the most beautiful reward of fulfillment, however small this may be, and it fills and enriches life, elevating its quality and prolonging its duration. It must in no case be overshadowed, neither by traumas borrowed or taken from the destinies of others, nor by the exacerbation of imaginary dangers of any kind. The alternation between YES and NO works in this case too: after a period of dedication to achieving a goal, however small, the body must be permitted to enjoy its fulfillment, conveying to it gratitude for what it has done and confidence in its forces for the future.
Aggression, as a form of behavior, is also triggered and sustained by obsessive negative thoughts, frequently operated and accepted as a form of problem solving in life. These thoughts generate impulses to attack violently, to hit someone, to harm or even kill a person, to abuse someone, to punish someone violently, or to say something rude, inappropriate and ugly to somebody [17]. An aggressive thought is a primary form of a future obsessive-compulsive disorder, because if at the beginning it seems to be only an innocent "imp" [18], but actually a young "devil", it can become, by amplification and repetition, a stereotyped, chronic and dangerous form of thinking, not only for oneself but especially for those around. Aggressive thoughts that become chronic are psychotic and depressive forms of approaching life, finally requiring proper medication [18]. "Solving" differences with others by aggression is strictly forbidden, not only because it is a repugnant form of approaching inter-human relations, with grave and sometimes irreparable consequences, but also because it can only lead to mutual destruction. Indeed, according to the law of action and reaction, applicable also in this case, the injury/wounding of a person returns by reaction against one's own person sooner or later. This is a natural compensation law. The human species has reached its present level of performance not by cultivating and exercising brute force, repelled, rejected and proved inadequate by the history of humanity, but by the improvement of the informational system, developing it through experimentation and continuous adaptation, seeking and applying optimal solutions to each of life's problems in an intelligent way. If inherited, as is the case, for instance, with the induced-depression risk [19], this thinking system needs to be modified by reprogramming. If it is a system acquired over the course of life, it must also be modified by the intervention of free will, acting through the Iw center. Otherwise, the effects of such thinking can be manifested as mental affections [14] and, if the effect of alcohol consumption is added, as severe disorders, including obsessive and manic behavior, deep depression, bipolar personality disorder and aggressive behavior. The unbalancing of the nervous system, which otherwise functions normally and consistently in permanent and harmonious correlation with the environment and with itself, can therefore take extreme forms, becoming chronic in the long term, and these can be difficult to treat even by specialized medical intervention. Schizophrenia, a widespread disease in our modern society, manifested, among other things, by an uncontrolled mental reception of "voices", has recently been described as a form of nervous disorder by spontaneous "connection" to the information systems of others [20]. From the point of view of the informational model described here, this connection is easy to understand, because it can occur through the info-creational field of consciousness [2,10]. A bipolar attitude can also be reflected in an uncontrolled swing between the extreme limits of YES and NO and back.
Besides persistent negative thinking, such an outcome can follow from low control of the mind and of personal impulses, the use of unreliable criteria and information, uncompensated periods of stress, disproportionate emotional involvement in life events, some of the reasons already discussed above, or a combination of some or all of these. Therefore, a combined intervention plan should be applied to prevent the advance toward a chronic disorder, including a reprogramming of the negative thinking operating system into a positive one. The ways we can control negative thinking, and thus stress, whether as prevention or as immediate action, include the following: (i) avoiding and disconnecting from negative information sources; (ii) substituting positive information for negative information; (iii) mandatory observance of rest and recovery through sleep during the night, because the nervous system also works according to YES (activated) and NO (disabled) alternations; (iv) analysis of the motives that cause negative thinking and their remediation [11]; (v) participation in support groups [12,13], or other alternatives [11], as a form of therapy; (vi) regular compensation of stress with physical and hobby activities, walks in nature and sports [12,13], thus disengaging the psyche from the focus on negative thoughts; (vii) stimulation of the positive variant of life [21], through positive memories and positive reinterpretation of older negative conclusions formulated in the past; (viii) mental reprogramming by methods which will be presented in a future dedicated book; (ix) meditation and mental relaxation [12,18]; (x) turning to professional counselors and applying specialized medical treatment [12,13].

Besides the preventive antiaging methods recommended above, which are valid at any age, special attention should be paid to the amount of sleep, which appears to be an independent risk factor for mortality. A specific study dedicated to this issue [22] reports that women who sleep 6-7 hours per night show a minimal mortality risk, while women who sleep less than 5 hours or more than 9 hours exhibit a significantly increased risk. So, besides caring for the alternation between periods of sleep and of activity, necessary for the equilibration of the nervous system, it is necessary to apply an optimal rhythm and an optimal duration of the sleeping periods.

In relation to the recommendations specified above, a special additional remark can be made concerning successful aging and life. A flexible adaptation and an equilibrated balance between YES and NO is a helpful and wise option for older adults. Wisdom is, or should be, a specific quality at this stage, because the experience accumulated over an entire life allows an appropriate selection between YES and NO. However, due to the rapid advance of information technologies in our informatics era, quick adaptation is necessary even at such seigniorial ages. Positive thinking is strongly recommended not only for the prevention of and recovery from some mild dysfunctions and for the prolongation of life, but also because it can consistently improve the quality of life and life expectancy, even in chronic terminal illnesses such as cancer [14].
Therefore, for their own benefit, it is strongly recommended that older adults continue to learn and to be present and active in their day-to-day life, applying and promoting a positive and optimistic way of thinking and feeling in relation to themselves and to others. This is a favorable way to maintain reliable communication with the younger generations, the family and society in general, stimulating also longevity and personal wellbeing.

Conclusion

We operate between two alternatives, YES and NO, and we have the possibility to choose one of them by our free will. To operate in a decisional system, it is necessary to dispose of decision criteria, the distinction between the "Good" (YES) and the "Bad" (NO). The foundations of the decision criteria are acquired by the child during the first years of life within the family, the parents transmitting to him their basic value system, characteristic of their culture, beliefs and social relations. As an extension of the genetics-related information received from the parents through IGG (Icd), the formation of the Icd center of the child is thus practically continued by the direct info-transmission of specific parental life experience, which is integrated into his own system of criteria. The attitude, as an info-reactive adaptation output of the informational system, is mainly a result of the decisional operating system CDC (Iw), but all the other informational centers (Ib, Ik, Il, Ia, Ic, Icd) can contribute specific criteria to the final decision, especially the info-emotional one (Il).

According to our free will, it is possible to choose one of the two alternatives, YES or NO. By choosing the operating side (YES), preserving the basic requirements of life and their stimulating characteristics like love, joy and especially a positive mode of thinking, we contribute to a positive destiny. A negative (NO) operating mode can induce dysfunctions and sometimes serious disorders, not only of the informational system itself, specifically of the brain and the nervous cells, but also of the other organs and systems of the organism, like the digestive, immune, circulatory and sexual systems. Operating in a combined YES/NO bipolar mode is also undesirable, so the control of the info-emotional center (Il) and the practice of a positive thinking and an equilibrated lifestyle are, from this point of view, a key to avoiding the advance toward a chronic bipolar disorder.

Some mechanisms contributing to the fall into a negative vortex were presented, based on the specific properties of the process of information acquisition, which consists in the priority memorizing, through repetition, of information of the same type as that already integrated in Ik. The use of comparative analysis with the negative past, the catastrophic view of life, victimization, and the practice of negative emotions and thoughts like envy, jealousy and hate are only a few of the starting platforms for the further large-scale proliferation of a negative thinking operational mode. Aggressive thoughts are dangerous even when they appear as "innocent" newcomers, because by repetition they can be assimilated and integrated by the informational system. If a decision and/or some decision criteria prove, over a satisfactory testing cycle, not to correspond any longer, a reprogramming of the thinking operational mode has to be applied.
Parents' responses to children's math performance in early elementary school: Links with parents' math beliefs and children's math adjustment

Abstract

A new parent-report measure was used to examine parents' person and process responses to children's math performance. Twice over a year from 2017 to 2020, American parents (N = 546; 80% mothers, 20% other caregivers; 62% white, 21% Black, 17% other) reported their responses and math beliefs; their children's (M age = 7.48 years; 50% girls, 50% boys) math adjustment was also assessed. Factor analyses indicated parents' person and process responses to children's math success and failure represent four distinct, albeit related, responses. Person (vs. process) responses were less common and less likely to accompany views of math ability as malleable and failure as constructive (|r|s = .16–.23). The more parents used person responses, the poorer children's later math adjustment (|β|s = .06–.16).

Parents appear to play a significant role in fostering children's motivation and learning in the academic context via a variety of practices (for reviews, see Barger et al., 2019; Pomerantz et al., 2012). Among these practices are parents' responses to children's performance (e.g., Gunderson et al., 2013; Pomerantz & Kempner, 2013). Some parents frequently use person responses linking children's performance to stable, personal attributes, namely intelligence (e.g., "You are so smart" and "Math just isn't your thing"). Other parents predominantly use process responses linking children's actions, such as effort or strategy use, to their performance (e.g., "You worked hard" and "What might be useful to do next time you have a math test?"). Notably, parents' person responses to children's success predict dampened motivation in school among children over time (Pomerantz & Kempner, 2013), whereas their process responses predict enhanced motivation and achievement (Gunderson, Sorhagen, et al., 2018). Although parents' person and process responses have received attention in the context of children's success, they have received almost no attention in the context of children's failure. Failure, however, can be important to motivation and learning (e.g., Brunstein & Gollwitzer, 1996; Diener & Dweck, 1978; Taylor, 1991). Moreover, experimental research manipulating whether children receive person or process criticism indicates that such criticism influences children's adjustment in the face of failure (Kamins & Dweck, 1999). One reason there has been so little research on parents' person and process responses to children's failure is that the research to date has generally used naturalistic observations (e.g., Gunderson et al., 2013) or daily reports (Pomerantz & Kempner, 2013) in the home, which may not reliably capture children's failure given it is relatively infrequent. Thus, the current research used a new parent-report measure to assess parents' person and process responses to children's success and failure in math in the early years of elementary school. We examined the links of parents' responses to not only children's math adjustment (e.g., motivation and achievement), but also parents' math beliefs and goals to elucidate how parents' responses align with their beliefs and goals.
The role of parents' responses in children's adjustment

Parents' responses to children's performance can provide an attributional framework (see Graham & Taylor, 2016) for children to understand what caused their performance, which children may use to make judgments about their ability as well as how they should approach learning. Parents' person responses largely focus on children's ability, which may convey to children that their performance reflects a stable, internal attribute (e.g., Mueller & Dweck, 1998; Pomerantz & Kempner, 2013). Although internal, stable attributions for success can enhance children's feelings of competence (e.g., Marsh et al., 1984), they can be debilitating in the face of challenge, as children may view failure as reflecting a lack of ability. Children may thus feel anxious about challenge and try to avoid it (e.g., Dweck & Leggett, 1988; Mueller & Dweck, 1998). Parents' person responses to failure may be particularly detrimental as they explicitly indicate children lack ability. Parents' process responses to children's performance, in contrast, focus on internal, controllable causes, such as effort and strategy use. This may lead children to see performance as reflecting behavior under their control, thereby allowing them to focus on mastery (Mueller & Dweck, 1998). Thus, they may see challenge as a learning opportunity.

The existing research on parents' person and process responses to children's performance generally yields findings consistent with this analysis. As in experimental studies (Kamins & Dweck, 1999; Mueller & Dweck, 1998), using mothers' daily reports of their praise in the academic context with elementary school children, Pomerantz and Kempner (2013) found that the more mothers used person praise, the less children held a growth mindset about ability and the more they avoided challenge 6 months later, adjusting for children's earlier mindsets and challenge avoidance. Naturalistic observations of parents' praise in the home during daily activities (e.g., playtime, meals, and cleaning up) with their toddlers indicate that parents' process praise is predictive of children's beliefs (e.g., about trait stability) and motivation (e.g., preference for learning over performance) as well as achievement in elementary school (Gunderson, Sorhagen, et al., 2018).

Unfortunately, there has been almost no examination of parents' person and process responses to children's failure. This is likely due in part to the fact that the daily-report and naturalistic observational approaches to assessing parents' person and process responses make it difficult to capture children's failure, as it occurs relatively infrequently. For example, mothers' daily reports of their middle school children's academic successes and failures revealed that whereas children experienced a success on average on 30% of the 12 days they completed the reports, they experienced a failure on average on only 11% of the days, with many children experiencing no failure. Despite its relative infrequency, failure presents an important opportunity for parents to scaffold the development of children's motivation and learning. Failure may be a particularly salient experience for children, leading them to direct substantial resources toward trying to understand why it occurred (see Taylor, 1991). In addition, adverse reactions to failure can undermine future learning (Eskreis-Winkler & Fishbach, 2019). As a consequence, parents' responses to failure may be quite meaningful for children.
In research with kindergarten children, experimentally manipulated person and process criticism had similar effects to person and process praise (Kamins & Dweck, 1999). In concurrent research using one-item child-report measures of parents' person and process responses to children's academic performance during elementary and middle school, the more parents used person responses in the context of children's failure, the less children held growth mindsets (Gunderson, Donnellan, et al., 2018). Taken together, the theory and research to date suggest that elucidating parents' person and process responses to both success and failure among children is important.

The role of parents' beliefs and goals in their responses

Given the role parents' person and process responses to children's performance appear to play in children's academic adjustment, a key question is what contributes to parents' responses. Cognitive constructs such as parents' beliefs and goals have been argued to be important drivers of parenting behaviors (e.g., Bornstein, 2015; Darling & Steinberg, 1993), with both correlational and experimental evidence supporting this notion (e.g., Grolnick et al., 2002; Muenks et al., 2015). In the current research, drawing from prior theory and research (e.g., Haimovitz & Dweck, 2016; Moorman & Pomerantz, 2010; Pomerantz & Kempner, 2013), we focused on a set of interrelated beliefs and goals held by parents that are likely to undergird their person and process responses to children's performance: (1) growth mindsets (i.e., the view that ability is malleable and dynamic), (2) failure-is-constructive mindsets (i.e., the view that failure can be beneficial for learning), (3) mastery goals (i.e., a focus on children developing their competence), and (4) performance goals (i.e., a focus on children demonstrating their competence). Parents with growth mindsets are more likely to hold failure-is-constructive mindsets, and these two mindsets are associated with more of a mastery orientation and less of a performance orientation among parents (e.g., Haimovitz & Dweck, 2016; Muenks et al., 2015).

These beliefs and goals among parents likely align with how they respond to children's performance. When parents see ability as malleable, view failure as beneficial, and place emphasis on mastery, they may use process responses to cultivate behavior (e.g., effort and strategy use) among children that develops their competence. In contrast, when parents see ability as fixed, view failure as debilitating, and place emphasis on performance, they may use more person responses, praising what they view as children's immutable gift in the event of success (e.g., "You are so smart!") and downplaying the significance of personal attributes in the face of failure (e.g., "You are just not a math person."). Parents' growth mindsets have been consistently linked to more constructive parenting practices such as heightened autonomy support and dampened control with children (e.g., Matthes & Stoeger, 2018; Moorman & Pomerantz, 2010; Muenks et al., 2015). In the one study examining the link with parents' praise, however, parents who held more of a growth mindset used more person praise. Parents' mastery (vs. performance) goals have also been linked to more constructive parenting practices (e.g., Gonida & Cortina, 2014; Grolnick et al., 2002; Renshaw & Gardner, 1990), with one study indicating that the more parents hold mastery (vs. performance) goals for children, the less they use person praise compared to other forms and the more they use process praise (Pomerantz & Kempner, 2013).
The present study

The overarching goal of the current research was to enhance understanding of parents' person and process responses to children's success and failure in math. To this end, we developed a new parent-report measure of parents' person and process responses to children's success and failure in math. Given that the measure asked parents about their responses if children were to do well or poorly, it overcame the challenge faced by prior research using daily reports and naturalistic observations in the home (e.g., Gunderson et al., 2013; Pomerantz & Kempner, 2013) of capturing parents' responses to children's failure, which is relatively infrequent. The parent-report measure is also more efficient in terms of time and cost, allowing for larger and less selective samples, which is important given that small samples can lead to spurious findings (Button et al., 2013) and often lack generalizability. Gunderson, Donnellan, et al. (2018) navigated these issues by having children report on their parents' responses, but they used only one item for each type of response, which poses issues of reliability as well as breadth in capturing each type of response. Moreover, such an approach may be useful with older children but not younger children, who may have difficulty reporting on parents' responses.

The self-report measure assessed parents' responses to children's success and failure in math. Math is an important area of learning in which children may often encounter difficulty or feelings of anxiety (e.g., Boaler, 2015; Gunderson, Park, et al., 2018; Ramirez et al., 2013, 2018), such that constructive responses to failure may be of much importance. In addition, parents may be particularly likely to use person responses to children's performance in math as it is seen as requiring more innate talent than other areas (e.g., Heyder et al., 2020; Leslie et al., 2015). Given these issues, parents' responses to children's math performance may be a key target for interventions aimed at parents. The research to date, however, has examined parents' responses to children's performance in the general context of daily activities such as meals and cleaning up (e.g., Gunderson et al., 2013) or in the academic context without attention to specific subjects (e.g., Pomerantz & Kempner, 2013). Although it is likely that parents' responses to children's performance in math operate in a similar manner to their responses to children's performance in other areas, it is possible that there are differences given the more fixed mindsets around math as well as the fact that 20% of adults suffer from math anxiety (e.g., Ashcraft & Ridley, 2005).

We studied parents' responses to children's math performance, as well as the beliefs and goals expected to accompany them, when children were in the first and second grades of elementary school. During these early years of formal schooling, parents may not only be forming their response styles to children's math performance, but also setting the foundation for children's beliefs, motivation, and skills important for children's math learning in the later years of schooling (Gunderson, Park, et al., 2018). Indeed, children are first exposed to formal math learning in the early years of elementary school and thus may be just developing their beliefs and motivation in the domain.
As a consequence, they may possess rudimentary math beliefs (Levine & Pantoja, 2021), as well as motivation and skills, which are particularly open to the messages conveyed by parents, as well as others such as teachers and peers. We assessed four distinct, albeit related, dimensions of children's math adjustment to provide insight into the nature and breadth of the effects of parents' responses during the early elementary school years. Drawing from conceptual perspectives on how person and process responses shape children's beliefs, motivation, and achievement (e.g., Kamins & Dweck, 1999; Mueller & Dweck, 1998), as well as the dimensions of children's math adjustment assessed in prior research on parents' person and process responses (e.g., Gunderson et al., 2013; Gunderson, Sorhagen, et al., 2018; Pomerantz & Kempner, 2013), we measured children's growth mindsets about math ability, preference for challenge in math, and math achievement. Adding to prior research, we also examined children's math anxiety, which may be heightened by parents' person responses to math as children become anxious about failure and is associated with children's math achievement (for a review, see Barroso et al., 2021). The four dimensions of math adjustment we assessed are not only all likely to be shaped by parents' person and process responses, but also form mutually reinforcing feedback loops over time (e.g., for a review, see Levine & Pantoja, 2021) such that they are at least modestly associated.

The current research was guided by three specific aims. The first was to establish that the self-report measure distinguishes between parents' person and process responses, with attention to whether such responses vary across success and failure. The second aim was to examine parents' beliefs and goals that may accompany their person and process responses to children's performance. Although there has been some prior research on such links between parents' beliefs and parenting (e.g., Haimovitz & Dweck, 2016; Moorman & Pomerantz, 2010; Pomerantz & Kempner, 2013), it has not been comprehensive; only some of the links have been examined, and not in the math domain. We expected that the more parents view math ability as malleable, believe math failure can be constructive, and hold mastery (vs. performance) goals for children in math, the less they use person responses and the more they use process responses. The third aim was to evaluate the implications of parents' person and process responses for the four dimensions of children's math adjustment. Such adjustment was assessed both at the same time as parents' responses as well as a year later, thereby permitting a window into the direction of effects as we were able to control for each dimension of children's earlier math adjustment in predicting that dimension a year later. This longitudinal approach was also important as parents' responses may need to accumulate over time to impact children's math adjustment. We hypothesized that parents' person responses would predict dampened growth mindsets about math ability, less of a preference for math challenge, greater math anxiety, and poorer math achievement among children over time, whereas the inverse would be evident for parents' process responses. The research aims fit on a continuum from exploratory to confirmatory analyses.
For the first research aim, we expected a distinction between person and process responses, but it was unclear whether there would be a distinction between parents' responses to success and failure; thus, we used exploratory factor analysis. The second and third aims were more confirmatory. Prior theory and some evidence allowed us to make directional hypotheses for how person and process responses would relate to parents' beliefs and children's adjustment, but again the distinction between success and failure was more exploratory.

Participants

Participants were 561 parents and their children (50% girls) who took part in the Early Math Learning Project, which was carried out between 2017 and 2020 in the Midwestern United States in a small urban area and surrounding areas, as well as a mid-sized urban area. Eighty percent of participating caregivers (M age = 37.74, SD = 6.81) were mothers, 17% were fathers, and 3% were other caregivers (e.g., grandmothers). The majority (62%) of parents identified as European American or white; 21% were African American or Black, 7% were Asian American or Pacific Islander, 5% were Latino/a, and 5% identified as multiracial or another race or ethnicity. Of the 99% of parents reporting on their highest level of educational attainment, 35% had a high school diploma or less, 30% had a bachelor's degree, and 35% had a more advanced degree (e.g., MA or PhD). At the start of the project, children (M age = 7.48 years, SD = 0.65) were in either first (55%) or second (45%) grade.

The sample on which this report is based is part of a larger sample of 614 parent-child dyads who began the project approximately 3 months prior to what is described here as Wave 1. At this time, parents completed an online survey at home and then made an initial visit to the lab a week or two later with their children; parents and children completed most of the measures (e.g., growth mindsets about ability) described in this report for the first time at home or the lab. During the initial visit, half of the parents received math growth mindset information and half received math Common Core information. In each of these conditions, parents were either given math or non-math activities (e.g., games, worksheets, and story completion tasks) to take home to do with their children. Analyses including the experimental conditions as covariates yielded findings practically identical in size and significance to those reported here. Attrition from the initial visit to the visits described here as Wave 1 and 2 was 9%. Families who did not return, and are thus not included in the current report, differed from those who returned in that parents were less educated, t(607) = 4.28, p < .001, and less likely to identify as white, χ2(4, N = 615) = 27.20, p < .001; children scored less well on the math achievement test administered at the initial visit, t(612) = 3.58, p < .001.

Procedure

Parents and children visited the lab in the spring when children were in first or second grade (Wave 1) and a year later when children were in second or third grade (Wave 2). At both visits, parents completed a set of surveys assessing their math mindsets and goals, along with the measure of their responses to children's math performance. Children's math adjustment was assessed at each visit by a trained research assistant. As a token of appreciation for their time and energy, parents received a total of US$150 across the two visits. At the end of each visit, children received a small prize (e.g., rubber animal).
The vast majority of participants (87%) took part in both Wave 1 and 2. Compared to parents taking part in only one visit, parents taking part in both were more likely to identify as white, χ2(4, N = 561) = 42.45, p < .001, and were more educated, t(553) = 4.41, p < .001. Comparisons on the Wave 1 variables included in this report indicated that parents taking part in both visits also used fewer person responses to success, t(544) = 2.87, p = .002, and failure, t(544) = 2.76, p = .003, along with more process responses to failure, t(544) = 2.11, p = .018. Children taking part in both visits were less math anxious, t(545) = 2.31, p = .011, and higher achieving, t(543) = 3.99, p < .001. The procedures were approved by the University of Illinois at Urbana-Champaign Institutional Review Board (Protocol: The Early Math Learning Project, #16575).

Parent measures

The means, standard deviations, and internal reliabilities of each measure are presented in Table 1; the correlations between the measures are presented in Table 2.

Responses to children's math performance

To assess the frequency of parents' person and process responses to children's success and failure in math, parents were asked to think about a time their child had a success in math and a time their child struggled in math. Immediately after each, parents rated how often (1 = never, 5 = very often) they would use six person responses and six process responses in such a situation (see Table 3). The items were based on conceptualizations and operationalizations of person and process responses in prior theory and research (Kamins & Dweck, 1999; Mueller & Dweck, 1998; Pomerantz & Kempner, 2013). Person response items focused on the child's smartness or innate talent, as well as whether the child is a math person. Process response items focused on effort, strategy use, progress, and parent assistance in the case of failure. Some of the items came from a pool of items developed with Carol Dweck to examine parents' responses to performance in the school context in general, rather than in math specifically. After examining the item-total correlations, some of these items were combined with new items to create a parent response measure pilot tested with a sample of 255 parents (83% mothers; 66% white, 19% Black; 57% with a college degree or higher) of children in first, second, and third grade residing in similar geographic areas (i.e., small urban Midwestern areas) to the large majority of families in the current sample. Items with relatively low item-total correlations for the person or process scales, rated as occurring relatively infrequently, or with relatively low standard deviations were revised or replaced. The scale was then adapted to assess parents' responses to performance in math specifically and administered to a sample of 128 mothers (84% white, 3% Black; 84% with a college degree or higher) of first and second graders residing in the same geographic area (i.e., a small urban Midwestern area) as the large majority of the families in the current sample. Again, items were revised or replaced based on their item-total correlations as well as frequency ratings and their standard deviations, yielding the current measure. The mean of the six items comprising each type of response (i.e., person and process) to each type of performance (i.e., success and failure) was taken, with higher numbers reflecting more frequent use of the response.
Math mindsets

Parents' growth mindsets about math ability were assessed with six items about the extent to which math ability is malleable (e.g., "No matter how good people are at math, it's always possible to change their math ability quite a bit") adapted from Dweck's (1999) measure of such mindsets about intelligence in general. Parents' math failure-is-constructive mindsets were assessed with six items about the extent to which math failure can be beneficial for children (e.g., "The effects of failure in math are positive and should be utilized") adapted from Haimovitz and Dweck's (2016) measure of beliefs about failure in general. For both measures, parents rated their agreement with each item (1 = strongly disagree, 10 = strongly agree). After reverse scoring items when relevant, the mean was taken for each of the two mindsets, with higher numbers reflecting more growth and failure-is-constructive mindsets.

Math goals for children

Parents' mastery goals for children in math were assessed with six items about the importance parents place on children developing their math competence (e.g., "Even if it is difficult, I like my child to have math work that makes him/her think hard"). The development of these items was based on Grant and Dweck's (2003) conceptualization and operationalization of mastery goals as reflecting a concern with learning and challenge. Parents' performance goals for children in math were assessed with six items focused on children demonstrating their math competence (e.g., "It is important to me that my child show that he/she is smart in math"). Grant and Dweck's conceptualization and operationalization of performance goals as reflecting a concern with confirming, validating, or showing the possession of ability guided the development of these items. For both measures, parents rated how true each item is of them (1 = not at all true, 10 = very true). After reverse scoring when necessary, the mean was taken for each set, with higher numbers reflecting greater mastery and performance goals.

Math growth mindsets

Children's growth mindsets about math ability were assessed with four items based on items used with older children and adults to assess growth mindsets about intelligence in general (Dweck, 1999). They were adapted to math and modified so younger children could more easily understand them. Given that growth mindset measures used with young elementary school children have not yielded internal reliabilities above the standard threshold (Gunderson, Sorhagen, et al., 2018), we conducted two pilot studies (Ns = 128 and 63) with children (73% white; 10% Black) in first and second grade; these children resided in the same geographic area (i.e., a small urban Midwestern area) as the large majority of the families in the current sample. Examination of the inter-item correlations indicated that only items about fixed concepts of ability held together reliably. Thus, we limited our items to such concepts (e.g., "Your smartness in math is something that stays pretty much the same"). Children rated how true they thought each item is (1 = a lot false, 4 = a lot true). To aid children in making these ratings, each point on the scale was illustrated with circles: (1) a large orange circle for a lot false, (2) a small orange circle for a little false, (3) a small blue circle for a little true, and (4) a large blue circle for a lot true.
The research assistant explained each circle to children who then used it on an example question (i.e., "Vanilla ice cream is better than chocolate ice cream"); the research assistant described children's answers back to them to ensure they understood the scale (e.g., "So you think vanilla ice cream is really better than chocolate ice cream because you chose a lot true"). After reverse scoring the items, the mean was taken, with higher numbers reflecting more growth mindsets.

Math anxiety

Children's math anxiety was assessed with 12 of the 16 items from Maloney et al.'s (2015) Revised Child Math Anxiety Questionnaire, which is an adapted version of the Child Math Anxiety Questionnaire (Ramirez et al., 2013; Suinn et al., 1988) suitable for first and second graders. Hypothetical scenarios involving math (e.g., "How do you feel when you are in math class and your teacher is about to teach something new?") were presented to children. For each, children responded by pointing to one of five faces with facial expressions of varying degrees of nervousness (1 = not nervous at all, 5 = very, very nervous), which were explained by the research assistant. To ensure children understood the facial expression scale, they used it with practice items (e.g., "How nervous would you be if you were standing on top of a tall building and you looked down?"). The mean was taken, with higher numbers reflecting greater math anxiety.

Preference for math challenge

Yeager et al.'s (2016) Make-a-Math-Worksheet measure of challenge preference for college students was adapted for the young children in the current study. Children were told they would be working on a math worksheet they would make themselves by choosing the problems to be included on it. There were four types of math problems (i.e., addition, subtraction, time, and coins). For each type, children chose three problems from a set of three easy and three hard problems. The worksheet had three empty boxes for each type of math problem. In each box, children placed a laminated square, which was labeled with words and colors (i.e., "easy" squares were blue and "hard" squares were yellow) as well as verbally by the research assistant. For each type of problem, children could choose from zero to three hard problems, with the remaining being easy problems. A preference for challenge index was created by calculating the proportion of hard problems out of the total of 12 problems, with higher numbers indicating greater preference for difficult (vs. easy) math.

Math achievement

Children's math achievement was assessed with the Applied Problems subtest of the Woodcock-Johnson III Tests of Achievement (Woodcock et al., 2001). This test assesses the application of math knowledge, calculation skills, and quantitative reasoning. The raw scores were transformed into Rasch-scaled scores with equal intervals, yielding W scores, which are recommended as they account for children's grade in school and are suitable for examining individual growth over time (Woodcock et al., 2001).

RESULTS

We conducted three major sets of analyses. In the first set, exploratory factor analyses (EFA) of the items comprising the measure of parents' responses to children's math performance were conducted to identify the structure of parents' responses; a repeated measures analysis of variance (ANOVA) was then used to compare parents' responses across the four categories (person and process responses to success and failure).
The second set of analyses investigated whether parents' math mindsets and goals are linked to their responses to children's math performance using partial correlations to take potential confounds into account. In the third set of analyses, multiple regression analyses were used to evaluate the predictive significance over time of parents' responses for children's math adjustment (i.e., math growth mindsets, math anxiety, preference for math challenge, and math achievement), taking into account children's earlier math adjustment and other potential confounds.

Structure

To examine the factor structure of parents' responses to children's performance, we submitted the 24 items comprising the parent person-process response measure at Wave 1 to EFA. To determine the number of factors, we used the Kaiser-Guttman Criterion (eigenvalues greater than 1) and parallel analysis (PA) results, given eigenvalues tend to overestimate the number of factors (Lance et al., 2006). PA compares the eigenvalues from random samples based on uncorrelated variables. The "parallel" function in the "nFactors" R package (Raiche, 2010) was used to calculate the mean and the 95th percentile for the eigenvalues of 100 randomly generated datasets. The number of factors was determined by the real-data eigenvalues that exceeded the random-data eigenvalues. There were three eigenvalues greater than one, whereas PA suggested four factors. Thus, we proceeded to test both three- and four-factor EFAs. The three-factor model seemed to differentiate between person responses to success and failure but grouped process responses to success and failure in a single factor; the four-factor model further split process responses into success and failure. We compared the three-factor and four-factor models using the "fa" function in the "psych" R package (Revelle, 2018). Given that the four-factor model fit better than the three-factor model, and responses to success and failure may provide different information to children, we adopted the four-factor model and conducted another EFA with it on the Wave 2 data. Again, we found that the PA yielded four factors at Wave 2. As shown in Table 3, at Wave 1, for 21 of the 24 items, the factor loadings on the expected factor were 0.30 or above; at Wave 2, this was the case for 23 of the 24 items. Only one item consistently loaded on an unexpected factor: "Tell my child what matters is that I know he/she is smart at math" was anticipated to load on the person response to failure scale, but instead loaded on the person response to success scale. This may be due to the item being more positive than the others on the person response to failure scale. We decided to leave the item on the person response to failure scale because it was rated in the context of children's failure. Removing the item from the scale did not affect the results from the analyses reported below in terms of the size or significance of the effects. However, in future efforts to refine or shorten the scale, it may be most useful to include only the items that most clearly load on each of the conceptual factors. The four responses were fairly stable, with correlations greater than .50 for each response over the course of a year (see Table 2). The four were also positively associated with one another, suggesting that some parents may simply be more responsive to children's math performance than other parents.
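To make this factor-enumeration workflow concrete, a minimal R sketch along the lines described above is given below. It assumes the 24 items are the columns of a data frame named items (a placeholder name); the rotation and other settings are not reported by the authors, so the defaults here are assumptions.

    library(nFactors)  # Raiche (2010): parallel analysis
    library(psych)     # Revelle (2018): exploratory factor analysis

    # Eigenvalues of the observed item correlation matrix
    ev <- eigen(cor(items, use = "pairwise.complete.obs"))$values

    # Eigenvalues from 100 randomly generated datasets of the same size;
    # cent = .05 requests the 95th-percentile criterion alongside the mean
    ap <- parallel(subject = nrow(items), var = ncol(items), rep = 100, cent = .05)

    # Kaiser-Guttman count vs. parallel-analysis count
    sum(ev > 1)                  # eigenvalues greater than one
    sum(ev > ap$eigen$qevpea)    # real eigenvalues exceeding the random-data ones

    # Competing EFA solutions (oblique rotation assumed, not reported)
    fa3 <- fa(items, nfactors = 3, rotate = "oblimin")
    fa4 <- fa(items, nfactors = 4, rotate = "oblimin")
    print(fa4$loadings, cutoff = 0.30)  # the 0.30 threshold used for Table 3

Rerunning the same sketch on the Wave 2 items would correspond to the replication step described above.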
Dependent-correlation comparisons, however, indicated that, embedded within this general tendency, process responses to success and failure were more strongly associated with one another (rs = .63 at Wave 1 and .54 at Wave 2, ps < .001) than with person responses to either success or failure (rs = .21 to .42 at Wave 1 and .20 to .38 at Wave 2, ps < .001), zs > 2.75, ps < .01. Similarly, person responses to failure and success were more strongly associated with one another (rs = .52 at Wave 1 and .51 at Wave 2, ps < .001) than with process responses, zs > 2.07, ps < .05.

Frequency

To examine the relative frequency of the four responses, we conducted a repeated measures ANOVA with type of response (i.e., person vs. process), children's performance (i.e., success vs. failure), and time (i.e., Wave 1 vs. Wave 2) as within-participant variables. As shown in Table 1, parents reported using process responses far more frequently than person responses, F(1, 489) = 3021.0, p < .001. They also reported responding to math success more frequently than math failure, F(1, 489) = 587.0, p < .001. These main effects, however, were qualified by a Type × Performance interaction, F(1, 489) = 182.7, p < .001, such that person responses to failure were less frequent than would be expected by the two main effects alone. Time of assessment did not have an effect on its own, F(1, 489) = 0.48, p = .49, or in interaction with the type of response or children's performance, Fs < 1.9, ps > .17.

Aim 2: Examine the association between parents' mindsets and goals and their responses

The next set of analyses examined whether parents' growth and failure-is-constructive mindsets, along with their mastery and performance goals, are associated with their person and process responses to children's success and failure. At each wave, we ran partial correlations for parents' mindsets or goals with their responses, controlling for parents' education (−1 = high school diploma or less, 0 = bachelor's degree, 1 = advanced degree) and gender (−1 = father, 1 = mother), given that they were both associated with parents' responses (see Table 2). Partial correlations can be compared to each other using the 95% CIs, which were computed using bootstrapping (n = 1000). As shown in Table 4, at both waves of the study, the more parents endorsed a growth mindset about math ability, the more they used process responses for both math success and failure and the less they used person responses for math failure, but not necessarily success, with all the associations falling in the small range. As indicated by non-overlapping confidence intervals, parents' growth mindset was more positively associated with their process than person responses for both success and failure. The more parents held a math failure-is-constructive mindset, the more they refrained from using person responses to children's math success and failure. Despite the effects being small in size, this association was stronger than that for process responses, which were not associated with parents' failure-is-constructive mindset. (Note to Table 4: partial correlations adjust for parents' education and gender; Wave 1 and Wave 2 coefficients are from concurrent analyses at each wave.) The more parents held mastery goals for children in math, the more they used process responses to children's math success and failure.
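As an illustration of the bootstrapped comparison intervals, the following R sketch computes a percentile 95% CI for one such partial correlation. This is a sketch only: the data frame d and its column names are placeholders, and the ppcor package is an assumed choice, since the authors do not report their implementation.

    library(ppcor)  # pcor.test() for partial correlations (assumed package choice)
    library(boot)

    # Partial correlation of one parent mindset with one response type,
    # controlling for parent education and gender
    pc_stat <- function(data, idx) {
      s <- data[idx, ]
      pcor.test(s$growth_mindset, s$process_success,
                s[, c("education", "gender")])$estimate
    }

    set.seed(1)
    b <- boot(d, pc_stat, R = 1000)  # n = 1000 resamples, as reported
    boot.ci(b, type = "perc")        # percentile 95% CI

Non-overlap of two such intervals corresponds to the comparisons reported for Table 4.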
Surprisingly, although parents with mastery goals for children in math were less likely to use person responses in the context of children's math failure, they were also more likely to use person responses in the context of children's math success. The effect size of all the coefficients fell in the small range. The more parents held performance goals for children in math, the more they used both person and process responses to both failure and success. Interestingly, based on non-overlapping confidence intervals, at both waves, the association between parents' performance goals and their person responses to success was significantly stronger than all the other associations. Moreover, unlike the other associations, which were all small effect sizes, the coefficients were moderate to large in size. To identify whether the association between mastery goals and person responses to success was due to the association between mastery and performance goals, both goals, along with the covariates, were included in regressions predicting person responses to success at Waves 1 and 2. Mastery goals were no longer related to person responses to success, βs < .04, zs < 0.9, ps > .38, but performance goals remained significant predictors, βs > .44, zs > 11.6, ps < .001.

Aim 3: Examine whether parents' responses predict children's math adjustment over time

To examine the contribution of parents' person and process responses to children's math adjustment over time, we conducted multiple regression analyses using the lavaan package in R (Rosseel, 2012) to handle missing data with the full information maximum likelihood method to reduce response bias (Duncan et al., 2006). We predicted children's math adjustment (i.e., math growth mindset, math anxiety, preference for math challenge, and math achievement) at Wave 2 from their math adjustment at Wave 1 along with parents' educational attainment and gender at Step 1; parents' responses to children's performance were entered at Step 2. A separate regression was conducted for each of the four parent responses because they may share overlapping variance with the dependent variables. Given American stereotypes about differences in girls' and boys' math ability present among children as early as elementary school (e.g., Cvencek et al., 2011), we examined the possibility that parents' responses to performance differentially impact girls' and boys' math adjustment over time. To this end, we added children's gender on its own and in interaction with parents' responses to the regression analyses. There was no evidence that children's gender moderated the relations between parents' responses and children's later adjustment, as the interaction term was never significant, zs < 0.91, ps > .36. For the sake of brevity, a summary of the key results from Step 2 is presented in Table 5; the complete results from each step can be found in the Supporting Information (Tables S1). Parents' person, but not process, responses to success and failure were predictive of children's math adjustment over time. Parents' person responses to children's math failure predicted heightened math anxiety, z = 3.28, p = .001, dampened preference for math challenge, z = −3.46, p < .001, and dampened math achievement a year later, z = −2.38, p = .018, adjusting for children's earlier math adjustment as well as parents' educational attainment and gender, with all the effects being small in size.
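A minimal sketch of one such Step 2 regression in lavaan is shown below, using placeholder variable names in a data frame d; the authors' exact model syntax is not reported, so the specifics are assumptions.

    library(lavaan)  # Rosseel (2012)

    # Wave 2 math anxiety predicted from its Wave 1 level, the covariates,
    # and one parent response (here, person responses to failure)
    model <- '
      anxiety_w2 ~ anxiety_w1 + parent_educ + parent_gender + person_failure
    '

    # missing = "fiml" invokes full information maximum likelihood;
    # fixed.x = FALSE lets FIML also cover missingness in the predictors
    fit <- sem(model, data = d, missing = "fiml", fixed.x = FALSE)
    summary(fit, standardized = TRUE)  # standardized coefficients as in Table 5

Fitting the analogous model for each of the four outcomes and each of the four parent responses yields the set of regressions summarized in Table 5.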
A similar pattern was observed for the relations between parents' person responses to success and children's math anxiety, z = 2.24, p = .025, and achievement, z = −2.28, p = .023. Parents' person responses were not predictive of children's growth mindsets over time once parents' education and gender along with children's earlier growth mindsets were taken into account. (Note to Table 5: each type of parent response was entered in a separate regression; the type of prior adjustment being predicted, parent education, and parent gender were included as covariates; for these results, see Tables S1.) We conducted supplemental analyses to directly compare whether person responses to success or failure were better predictors of math anxiety and achievement when both responses were included as simultaneous predictors in Step 2 (see above). For math anxiety, parents' person responses to failure remained a significant predictor, β = .12, z = 2.52, p = .012, but their person responses to success were no longer significant, β = .04, z = 0.84, p = .403. For math achievement, including both responses in the same model reduced both to non-significance, βs < .05, |z|s < 1.51, ps > .13 (although a combination of the two was significant in a linear regression model, t = −2.60, p = .010), suggesting that for children's math achievement, parents' person responses to success and failure had overlapping predictive significance.

DISCUSSION

Parents' person and process responses to children's success appear to play a role in children's motivation and achievement (e.g., Gunderson, Sorhagen, et al., 2018; Pomerantz & Kempner, 2013). Little is known, however, regarding whether parents' person and process responses to children's failure matter, in large part because the daily and observational measures used to date have made it difficult to assess children's failure: although likely to be important to children's motivation and learning (e.g., Brunstein & Gollwitzer, 1996; Taylor, 1991), failure occurs infrequently. The current research used a new parent-report measure to examine parents' person and process responses to children's success and failure in math, an important domain of learning for which parents' responses have not been examined, during early elementary school. The measure reliably distinguishes parents' person and process responses, with EFAs indicating that parents do not always adopt similar responses for children's success and failure in math. Regardless of performance, however, person responses were less common than process responses and less likely to be accompanied by views of math ability as malleable and math failure as constructive. Importantly, parents' person, but not process, responses were predictive over time of children's math adjustment. The more parents used person responses to children's math performance, the more children were math anxious, avoided challenging math, and had poor math achievement a year later, with responses to failure being somewhat more predictive than responses to success.

The structure and frequency of parents' responses to children's math performance

The current research used a new parent-report measure of parents' person and process responses to both success and failure in math.
The design of the measure (i.e., six items assessing each type of response to success and six items assessing each type of response to failure) along with the relatively large sample of parents permitted EFAs to identify the structure of parents' responses. These analyses are important as prior research has been unable to examine whether person and process responses represent distinct response styles. The two types of responses were positively associated with one another for both success and failure, suggesting that some parents use both process and person responses to both children's success and failure. Consistent with the notion that parents' person and process responses are distinct styles of responding, however, EFAs indicated that the two comprised distinct factors. In addition, parents' use of the two depended on whether they were in the context of children's success or failure. In total, with a few complex loadings, there were four factors: (1) person responses to success, (2) person responses to failure, (3) process responses to success, and (4) process responses to failure.

Notably, person responses, particularly to failure, were less common than process responses despite research indicating that math is often viewed as requiring more innate talent than other areas (e.g., Leslie et al., 2015). It may be that person responses have become generally less common than process responses among parents, given the substantial attention to person and process responses in the media (e.g., Camarta, 2015; Hamblin, 2015; Underwood, 2020). Interestingly, more educated parents, who may be the largest consumers of such media, were most likely to report dampened person responses and heightened process responses to children's math performance. Social desirability may drive parents' reports as they over-report what the media has conveyed as beneficial for children. Process responses may be more frequent across all school subjects, but the difference between person and process responses may be smaller in math than in domains in which innate talent is viewed as less important, such as literacy. Research comparing parents' person and process responses in the math domain to other domains is needed to identify if this is the case. It may also be that parents view young children's performance in math as driven largely by hard work, but as children develop, parents see innate talent as more important. Parents may have been reluctant to endorse some of the harshest of the person responses to failure. Indeed, prior research using daily and observational methods yields more similarity in the rates of person and process responses to children's success (Pomerantz & Kempner, 2013), but that research was conducted before person and process responses became common in the media, so whether it is the method or time of assessment that accounts for the imbalance in our sample is unclear.

Links of parents' mindsets and goals with their responses

Parents' person and process responses to both success and failure in math appear to be embedded in a system of interrelated beliefs and goals about math. The more parents saw math ability as malleable, the less they used person responses to failure, but not success, and the more they used process responses to success and failure. Parents' views that failure is constructive were linked to less frequent person, but not process, responses to success and failure.
At both waves of the research, these patterns were evident adjusting for parents' educational attainment and gender. Although it is unclear why the two mindsets, which were substantially associated, were linked to somewhat different patterns of responses among parents, it appears that parents hold mindsets conceptually aligned with their responses. The associations generally, however, fell in the small range, suggesting that other factors may be important in how parents respond. Parents' goals had a more complex relation to their responses. As anticipated, the more parents held mastery goals, the more they used process responses to both success and failure and the less they used person responses to failure, although this link was weaker. It was also the case, however, that the more parents held mastery goals, the more they used person responses to success. It may be that parents with mastery goals believe that if children have confidence in their abilities, they will want to continue learning. Mastery and performance goals often co-occur among parents (e.g., Ablard & Parker, 1997; Curelaru et al., 2020), including in the current sample, which may also have caused the association between mastery goals and person responses to success. Indeed, this association was no longer evident once performance goals were included as a covariate. The more parents held performance goals, the more they used all four types of responses, perhaps because they see a variety of methods as instrumental in motivating children to perform. This makes some intuitive sense; if parents want children to demonstrate their math ability, they may try to do everything in their power to foster success, including giving lots of feedback. Research finding that parents with performance goals are more controlling with their children is consistent with this idea (e.g., Gonida & Cortina, 2014). The tendency for performance goals to be associated with all four types of responses suggests that instead of viewing parents' performance goals as opposite to mastery goals as well as growth and failure-is-constructive mindsets, it may be more accurate to treat them as a separate dimension for understanding parents' responses. Such a framework is in line with the modern goal theory approach that treats performance goals as part of a complex system in which people endorse combinations of goals simultaneously (e.g., Wormington & Linnenbrink-Garcia, 2017). Nevertheless, it is of note that parents' performance goals were more strongly linked to person responses to success (with a moderate to large association) than to any other type of response, suggesting that emphasizing children's natural skills is uniquely aligned with parents' aims for their children to appear competent.

The predictive significance of parents' responses for children's math adjustment

The current research found that, although infrequent, parents' person responses to children's performance predicted poorer math adjustment among children over time, controlling for children's earlier math adjustment and parents' educational attainment and gender. These findings are consistent with those of Pomerantz and Kempner's (2013) study using mothers' daily reports of their praise in the academic context with elementary school children, as well as experimental research manipulating the type of praise or criticism children receive (Kamins & Dweck, 1999; Mueller & Dweck, 1998).
It may be that even rare instances of person praise and even rarer person criticism accumulate over time to exert an influence on children because they stand out from the normative process responses. Interestingly, the distinction between person and process praise seemed to be more important than whether the response was to success or failure, except for the unique relation between parents' person responses to failure and children's challenge preference. Thus, facilitating children's math adjustment may be more about how (i.e., person vs. process) parents respond rather than to what (i.e., success vs. failure) they respond. The current research also broadened the types of adjustment among children to which parents' person responses may contribute by examining children's math anxiety, which can interfere with children's math achievement (e.g., Ramirez et al., 2018). Thus, parents' person responses appear to contribute to a variety of dimensions of children's adjustment, including their behavior (i.e., challenge seeking), achievement, and emotional experience in the math context.

Although the effects of parents' person responses on children's math adjustment fall in what is considered the small range, they are still meaningful. First, children's math adjustment is multiply determined by complex influences ranging from multiple dimensions of the social context (e.g., teachers' and parents' practices) to individual attributes (e.g., genetics; Oliver et al., 2004). As a consequence, no single indicator is likely to explain a large amount of variability in children's math adjustment. Second, we controlled for prior math adjustment, which was fairly stable over the course of a year (see Table 2), leaving less variability to explain. It is also possible that by controlling for prior math adjustment we are controlling for the influence of parents' prior responses. Indeed, Gunderson, Sorhagen, et al. (2018) found that parents' responses before children entered school predicted children's math achievement once they were in school. Third, the effects of parents' responses are likely to accumulate beyond the early elementary school years we studied here. They may also initiate a developmental cascade in children; for example, parents' responses may lead children to be anxious about math, which increases their tendency to avoid challenging math, thereby disrupting their math learning, which itself can have further consequences, including reinforcing the initial math anxiety.

Parents' process responses to children's math performance did not predict children's adjustment over time in the current research, which is consistent with Pomerantz and Kempner's (2013) findings, but not with the findings of Gunderson et al. (2013) and Gunderson, Sorhagen, et al. (2018) or those of experimental research (Kamins & Dweck, 1999; Mueller & Dweck, 1998). It is possible that American parents now use process praise for young children so frequently that it does not impact children's math adjustment. It also may be that process responses have some unintended consequences (see Amemiya & Wang, 2018), which cancel out the benefits of directing children's attention to the process of learning. For example, children may interpret process responses such as "you worked so hard" or "you could have tried harder" as inauthentic if they do not match their own perceptions (Henderlong & Lepper, 2002; Pomerantz & Kempner, 2013).
Parents' process responses were also positively associated with their person responses; the latter responses may be more salient than process responses, thereby overriding the constructive messages conveyed by such responses. Contrary to prior research on parents' responses (Pomerantz & Kempner, 2013), as well as research manipulating responses (Mueller & Dweck, 1998), parents' responses did not predict children's growth mindsets about math ability. Notably, the children in the current research were younger than those in prior research when their mindsets were assessed. The younger children in our study may not have developed coherent growth mindsets yet or may not be skilled at reporting on them, negating the potential for children to interpret parents' responses in ways that shape their beliefs. As evidence of this, the reliability of the child mindset measure improved from Wave 1 to Wave 2. Alternatively, the lack of significant findings in this study could be related to the focus on math rather than academics in general (Pomerantz & Kempner, 2013) or a combination of the academic and social domains. Although assessing mindsets in specific domains may be useful for older students (Costa & Faria, 2018), children may not have clearly differentiated beliefs about whether math ability, specifically, can change. There is also some evidence that early difficulties in math precede the formation of a fixed math mindset (Levine & Pantoja, 2021), suggesting parents' early person responses may undermine achievement, which in turn later manifests in children as a fixed mindset about math ability. Interestingly, the concurrent associations between parents' person responses and children's mindsets were in the expected direction (rs = −.23 to −.21, ps < .01) before controlling for covariates. It may be that parents' responses to success and failure are only weakly associated with children's developing mindsets during a developmental phase in which such beliefs are still forming or children struggle to respond to abstract item wording (Dweck, 2002).

Limitations and future directions

Several limitations of the current research require interpreting the findings with caution and point to important directions for future research. First, guided by the idea that parents' beliefs and goals drive their parenting (e.g., Bornstein, 2015; Darling & Steinberg, 1993), our assumption in examining the links of parents' mindsets and goals with their responses was that parents' mindsets and goals shape their responses. Parents' mindsets likely form a stable system with their responses, rather than leading to changes in them (for a similar argument in regard to the role of parents' goals in their parenting, see Ng, Xiong, et al., 2019). It was for this reason we examined the concurrent, rather than longitudinal, associations between parents' mindsets and responses. Unfortunately, this approach does not provide insight into the direction of effects. It is possible, for example, that because parents' person responses to failure undermine children's math adjustment, parents' use of such responses leads them to hold mindsets that failure is unconstructive rather than constructive. It will be important for future research to manipulate parents' mindsets and goals as has been done successfully in prior research (e.g., Haimovitz & Dweck, 2016; Moorman & Pomerantz, 2010) to identify the causal role of parents' mindsets and goals in their responses.
Second, the new measure used parents' retrospective reports of their responses to capture them in the context of failure, which occurs infrequently; this self-report approach also allowed for a large sample of families. Despite these strengths, parents' retrospective reports also have weaknesses (for a review, see Pomerantz & Monti, 2015). For example, parents' responses may be influenced by self-presentational concerns or memory lapses that are less of an issue with observational approaches. Indeed, although there is an association between parents' reports of parenting and observations of parenting, quantitative synthesis indicates it falls in the small range (Hendriks et al., 2018). Investigators have speculated that these different methods of assessment may capture different slices of the socialization process (e.g., Cheung et al., 2016).

The new measure also focused specifically on children's math performance. Although math is an important area of children's learning which poses unique challenges (e.g., Boaler, 2015), it is not clear if the findings yielded by the new measure generalize to other domains, such as literacy. Given that prior research identified effects of domain-general parent responses similar to those identified in the current research (Pomerantz & Kempner, 2013), it is likely that the patterns are similar in other domains. Promising recent work along this line in the domain of science suggests that process language, such as "doing science," instead of person language, such as "be a scientist," enhances children's motivation in science (Lei et al., 2019; Rhodes et al., 2020). Future research directly comparing parents' responses to children's performance in different domains will be fruitful.

Third, the representativeness of the sample was limited along several dimensions. Of particular note, although parents varied in their educational attainment as well as race and ethnicity, they were largely white and well educated. It is possible that the structure and frequency of parents' responses as well as their associations with parents' mindsets and goals and children's adjustment may be different in families from different cultural and educational backgrounds. In addition, mothers comprised the majority of the sample used in the current research, making it difficult to generalize these findings to other caregivers, such as fathers. Much of the research on parents' involvement in children's learning has focused on mothers, but fathers are also important, with their involvement often appearing to have a similar effect on children (for a review, see Kim & Hill, 2015). In the current research, there was a tendency for mothers to respond more frequently than fathers to children's math performance. Whether this translates into differences in the role that mothers' and fathers' responses play in the socialization process is an important issue for future research.

CONCLUSIONS

As children experience their first successes and failures in math in the formal education setting of school, parents' responses to their math performance appear to be of importance to children's math adjustment. Parents' person responses to children's math performance predict heightened avoidance of challenging math and math anxiety, as well as dampened math achievement among children, with person responses to failure being the most consistent predictor.
Given that parents' person responses predict poor math adjustment among children over time, recommendations to parents to limit their person responses are likely to be constructive. However, such recommendations need to be made in light of the tendency for parents' responses to be anchored in an aligned system of beliefs and goals for children. Notably, the less parents believe that math ability is changeable and math failure can be constructive, the more they use person responses. Thus, simply telling parents to refrain from person responses may not be enough to support them in doing so. Parents' growth mindsets and failure-is-constructive mindsets should be facilitated alongside their responses to success and failure in math to foster children's math adjustment.

FUNDING INFORMATION

This research was made possible by NSF grant HRD 1561723 (PIs: Eva Pomerantz and Andrei Cimpian).

CONFLICT OF INTEREST

We have no known conflicts of interest to disclose.
Study of B0 → π0π0, B± → π±π0, and B± → K±π0 Decays, and Isospin Analysis of B → ππ Decays

We present updated measurements of the branching fractions and CP asymmetries for B0 → π0π0, B+ → π+π0, and B+ → K+π0. Based on a sample of 383 × 10⁶ Υ(4S) → BB̄ decays collected by the BABAR detector at the PEP-II asymmetric-energy B factory at SLAC, we measure B(B0 → π0π0) = (1.47 ± 0.25 ± 0.12) × 10⁻⁶, B(B+ → π+π0) = (5.02 ± 0.46 ± 0.29) × 10⁻⁶, and B(B+ → K+π0) = (13.6 ± 0.6 ± 0.7) × 10⁻⁶. We also measure the CP asymmetries C(π0π0) = −0.49 ± 0.35 ± 0.05, A(π+π0) = 0.03 ± 0.08 ± 0.01, and A(K+π0) = 0.030 ± 0.039 ± 0.010. Finally, we present bounds on the CKM angle α using isospin relations. PACS numbers: 13.25.Hw, 12.15.Hh, 11.30.Er

In the Standard Model (SM) of particle physics, the charged-current couplings of the quark sector are described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements V_qq′ [1]. The consistency of multiple measurements of the sides and angles of the CKM Unitarity Triangle provides a stringent test of the SM, and also provides constraints on non-SM physics. The CKM angle α ≡ arg[−(V_td V*_tb)/(V_ud V*_ub)] can be measured from the interference between b → u quark decays with and without B0 ↔ B̄0 mixing. In the limit of one (tree) amplitude, sin 2α can be extracted from the CP asymmetries in B0 → π+π− decays [2]. However, the size of the branching fraction of B0 → π0π0, relative to B± → π±π0 and B0 → π+π−, indicates that there is another significant (penguin) amplitude, with a different CP-violating (weak) phase, contributing to the decay. The deviation of the asymmetry obtained from B → ππ decays, sin 2α_eff, from sin 2α can be measured using the isospin-related decays B± → π±π0 and B0 → π0π0 [3,4,5]. In the SM, the charge asymmetry is expected to be very small in the decay B± → π±π0 since penguin diagrams cannot contribute to the I = 2 final state. However, a non-zero time-integrated CP asymmetry in the decay B0 → π0π0 is expected if penguin and tree amplitudes have different weak and CP-conserving (strong) phases.

The B → Kπ system also exhibits interesting CP-violating features, including direct CP violation in B0 → K+π− decays [6,7]. Sum rules derived from U-spin symmetry and parameters from the B → ππ system relate the branching fraction and charge asymmetry of B± → K±π0 decays to other decays in the Kπ system [8,9]. The CP asymmetry in B± → K±π0 is expected to have the same sign and roughly the same magnitude as the CP asymmetry in B0 → K+π− in the absence of color-suppressed tree and electroweak-penguin amplitudes.

Based on a sample of 383 × 10⁶ Υ(4S) → BB̄ decays, we report updated measurements of the branching fraction for B0 → π0π0 and the time-integrated CP asymmetry, defined as

C_π0π0 = (|A00|² − |Ā00|²) / (|A00|² + |Ā00|²),

where A00 (Ā00) is the B0 (B̄0) → π0π0 decay amplitude. We also measure the branching fractions for B± → h±π0 (h± = π±, K±) and the corresponding charge asymmetries, A_h±π0 = [Γ(B− → h−π0) − Γ(B+ → h+π0)] / [Γ(B− → h−π0) + Γ(B+ → h+π0)].

The BABAR detector is described in Ref. [10]. Charged particle momenta are measured with a tracking system consisting of a five-layer silicon vertex tracker (SVT) and a 40-layer drift chamber (DCH) surrounded by a 1.5-T solenoidal magnet. An electromagnetic calorimeter (EMC) comprising 6580 CsI(Tl) crystals is used to measure photon energies and positions.
The photon energy resolution in the EMC is σ_E/E = 2.3%/(E/GeV)^(1/4) ⊕ 1.9%, and the angular resolution from the interaction point is σ_θ = 3.9 mrad/√(E/GeV). Charged hadrons are identified with a detector of internally reflected Cherenkov light (DIRC) and ionization measurements in the tracking detectors. The average Kπ separation in the DIRC varies from 12σ at a laboratory momentum of 1.5 GeV/c to 2σ at 4.5 GeV/c.

For the reconstruction of B± → h±π0 events, we require the track from the B candidate to have at least 12 hits in the DCH and be associated with at least 5 photons in the DIRC. The measured Cherenkov opening angle θ_C must be within 4σ of the expectation for the pion or kaon hypothesis, and θ_C must be greater than 10 mrad from the proton hypothesis. Electrons are removed from the sample by vetoing candidates based on their energy loss in the SVT and DCH and a comparison of the track momentum and deposited energy in the EMC.

While π0 meson candidates are mostly formed from two EMC clusters, we increase our π0 efficiency compared to Ref. [4] by ∼10% by including π0 candidates consisting of two overlapping photon clusters ("merged" π0) and candidates with one photon cluster and two tracks consistent with being a photon conversion inside the detector. Photon conversions are selected from pairs of oppositely charged tracks with an invariant mass less than 30 MeV/c², a vertex that lies within the detector, and a total momentum vector that points back to the beamspot. EMC clusters are required to have energies greater than 0.03 GeV and a transverse shower shape consistent with a photon. To reduce the background from random photon combinations, the cosine of the angle between the direction of the decay photons in the center-of-mass system of the parent π0 and the π0 flight direction in the lab frame must be less than 0.95. For candidates consisting of two EMC clusters or one cluster and a converted photon, the reconstructed π0 mass is required to be between 110 and 160 MeV/c², and the candidates are then kinematically fit with their mass constrained to the π0 mass. We distinguish merged π0 candidates from single photons and other neutral hadrons using the second transverse moment, S = Σ_i E_i × (Δα_i)²/E, where E_i is the energy deposited in each CsI(Tl) crystal, and Δα_i is the angle between the cluster centroid and the crystal. Because merged π0s are caused by two overlapping photon clusters, they have a larger S than solitary photons. We use a large sample of π0s from τ± → ρ±ν decays to validate that our Monte Carlo simulation (MC) accurately simulates merged π0s and photon conversions, as well as our overall π0 efficiency.

We use two kinematic variables to isolate B0 → π0π0 and B± → h±π0 candidates from the large background of e+e− → qq̄ (q = u, d, s, c) continuum events: the beam-energy-substituted mass m_ES and the energy difference ΔE (their explicit forms are given below). We define the main signal region in the B0 → π0π0 analysis as m_ES > 5.20 GeV/c² and |ΔE| < 0.20 GeV. To further discriminate the signal from qq̄ backgrounds, we exploit the event topology variable θ_S: the angle in the CM frame between the sphericity axis of the B candidate's decay products and that of the remaining neutral clusters and charged tracks in the rest of the event. Since the distribution of |cos θ_S| peaks at 1 for qq̄ events, we require |cos θ_S| < 0.8 (0.7) for events with a B± → h±π0 (B0 → π0π0) candidate.
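The explicit definitions of the two kinematic variables were lost in extraction. The standard BABAR forms, which this analysis presumably uses, are given below; (E_i, p⃗_i) is the initial-state four-momentum and p⃗_B the B-candidate momentum in the lab frame, √s the total CM energy, and the starred quantity is evaluated in the e+e− CM frame.

```latex
m_{\mathrm{ES}} = \sqrt{\left(s/2 + \vec{p}_i \cdot \vec{p}_B\right)^2 / E_i^2 \; - \; p_B^2},
\qquad
\Delta E = E_B^{*} - \sqrt{s}/2
```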
To further improve background separation, we construct a Fisher discriminant F from the sums Σ_i p_i and Σ_i p_i cos²θ_i, where p_i is the CM momentum and θ_i is the angle with respect to the thrust axis of the B candidate's daughters, in the CM frame, of all tracks and clusters not used to reconstruct the B meson.

We use an extended, unbinned maximum likelihood (ML) fit to determine the number of signal events and the associated asymmetries. The probability density function (PDF) P_i(x_j; α_i) for event j and signal or background hypothesis i is the product of PDFs for the variables x_j, given the set of parameters α_i. The likelihood function is

L = exp(−Σ_{i=1}^{M} n_i) ∏_{j=1}^{N} [ Σ_{i=1}^{M} n_i P_i(x_j; α_i) ],

where N is the number of events, n_i is the PDF coefficient for hypothesis i, and M is the total number of signal and background hypotheses. In the B0 → π0π0 fit, the variables x_j are m_ES, ΔE, and F. In addition to the signal and qq̄ background, we expect background events from the charmless decays B± → ρ±π0 and B0 → K0_S π0 (K0_S → π0π0) to contribute 61 ± 7 events in the signal region, as determined from MC, so we include an additional component in the fit to account for this BB̄ background. For the B0 → π0π0 signal and the BB̄ background, we observe a correlation coefficient between m_ES and ΔE of ∼0.2, so a two-dimensional PDF, derived from MC simulation, is used to parameterize these distributions. The qq̄ background PDF is described by an ARGUS threshold function [11] in m_ES and a polynomial in ΔE. We divide the F distribution from signal MC into ten equally-populated bins, and use a parametric step function to describe the distribution for all of the signal and background hypotheses. We fix the relative size of the F bins for the signal and BB̄ background to values taken from MC. These values are verified with a sample of fully reconstructed B meson decays. Continuum F parameters are free in the fit.

In order to measure the time-integrated CP asymmetry C_π0π0, we use the remaining tracks and clusters in a multivariate technique [12] to determine the flavor (B0 or B̄0) of the other B meson in the event (B_tag). Events are assigned to one of seven mutually exclusive categories k (including untagged events with no flavor information) based on the estimated mistag probability w_k and on the source of the tagging information. The PDF coefficient for B0 → π0π0 is given by

n_k = (f_k N_π0π0 / 2) [1 − s_j (1 − 2χ_d)(1 − 2w_k) C_π0π0],

where N_π0π0 is the total number of B0 → π0π0 decays, χ_d = 0.188 ± 0.004 [13] is the time-integrated mixing probability, and s_j = +1 (−1) when the B_tag is a B0 (B̄0). The fraction of events in each category, f_k, and the mistag rate are determined from a large sample of B0 → D(*)(nπ)π decays.

For the B± → h±π0 fit, along with m_ES, ΔE, and F, we include the Cherenkov angle θ_C to measure the B± → π±π0 and B± → K±π0 yields and asymmetries simultaneously. The difference between the expected and measured Cherenkov angle, divided by the uncertainty, is described by two Gaussian distributions. The values for m_ES and ΔE are calculated assuming the track is a pion, so a B± → K±π0 event will have ΔE shifted by a value dependent on the track momentum, typically −45 MeV. For the signal, the m_ES and ΔE distributions are modeled as Gaussian functions with low-side power-law tails. The means of these distributions and the m_ES width are determined in the fit, while the ΔE width is determined by MC simulation.
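To make the fitting machinery concrete, here is a minimal, self-contained toy of an extended unbinned ML fit in a single variable (m_ES) with a Gaussian signal and an ARGUS threshold background. The shapes, yields, and starting values are illustrative only, not the paper's actual multi-dimensional parameterization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

M0, LO = 5.29, 5.20           # endpoint and lower edge of the fit range (GeV/c^2)
GRID = np.linspace(LO, M0, 2001)

def argus_pdf(m, xi):
    """ARGUS threshold shape, normalized numerically on [LO, M0]."""
    def shape(x):
        z = np.clip(1.0 - (x / M0) ** 2, 0.0, None)
        return x * np.sqrt(z) * np.exp(-xi * z)
    return shape(m) / np.trapz(shape(GRID), GRID)

def nll(params, m):
    """Extended negative log-likelihood: Poisson yield term + per-event sum."""
    n_sig, n_bkg, mu, sigma, xi = params
    pdf = n_sig * norm.pdf(m, mu, sigma) + n_bkg * argus_pdf(m, xi)
    return (n_sig + n_bkg) - np.sum(np.log(np.clip(pdf, 1e-300, None)))

# Generate a toy dataset: 150 signal events over a smooth background.
rng = np.random.default_rng(2)
sig = rng.normal(5.279, 0.003, 150)
bkg = rng.triangular(LO, M0 - 0.02, M0 - 0.02, 2000)   # crude ARGUS stand-in
m = np.concatenate([sig[(sig > LO) & (sig < M0)], bkg])

res = minimize(nll, x0=[100.0, 2000.0, 5.279, 0.003, 20.0],
               args=(m,), method="Nelder-Mead")
print("fitted yields (signal, background):", res.x[:2])
```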
We expect 69 ± 3 background events in the B± → π±π0 signal region from other B meson decays, mainly from the same B decays as in the B0 → π0π0 case. For the B± → K±π0 signal region we expect 9 ± 2 events from B → X_s γ and B0 → ρ+K−. The PDFs for the BB̄ backgrounds, the qq̄ background, and the signal F are all treated the same as in the B0 → π0π0 case. The PDF coefficient for B± → h±π0 is given by

n_j = (N_i/2)(1 − q_j A_i),

where N_i is the total number of events for hypothesis i, A_i is the charge asymmetry, and q_j = ±1 is the charge of the B candidate.

The results from the B0 → π0π0 and B± → h±π0 ML fits are summarized in Table I. In a total of 17,881 events we find 154 ± 27 B0 → π0π0 decays and an asymmetry C_π0π0 = −0.49 ± 0.35. For the B± → h±π0 fit, we find 627 ± 58 B± → π±π0 and 1364 ± 57 B± → K±π0 events in a total of 85,895 events. All of the correlations among the signal variables are less than 5%. In Fig. 1 we use the event weighting and background subtraction method described in Ref. [14] to show signal and background distributions for B0 → π0π0 events. Signal and background distributions for B± → h±π0 events are shown in Fig. 2 using the same method.

In order to account for a small bias in the B± → h±π0 asymmetries arising from the difference in the π+ and π− reconstruction efficiencies and the K+ and K− hadronic interaction cross-sections in the BABAR detector, the B± → π±π0 asymmetry is corrected by +0.005 ± 0.004 and the B± → K±π0 asymmetry is corrected by +0.008 ± 0.008. We determine the π±π0 bias from a study of τ± → ρ±ν decays and verify it using the continuum background in data. For the B± → K±π0 charge asymmetry bias, we use the continuum background and combine the results of the π±π0 asymmetry study and the K±π∓ asymmetry study in Ref. [6]. After the bias correction we find A_π±π0 = 0.03 ± 0.08 and A_K±π0 = 0.030 ± 0.039.

We evaluate the systematic errors on the branching fractions and asymmetries either using data control samples or by varying fixed parameters and refitting. The systematic uncertainties on the branching fraction and asymmetry measurements are summarized in Tables II and III, respectively. The largest systematic errors for the B0 → π0π0 and B± → h±π0 branching fractions are from uncertainties in the π0 reconstruction efficiency, signal selection efficiencies, F parameters, and BB̄ background yields. We simulate radiative effects using the PHOTOS simulation package [15] and assign a systematic error equal to the difference between PHOTOS and the scalar QED calculation in Ref. [16]. For the B± → h±π0 analysis, we also include as a systematic a small (<2%) fit bias due to correlation among fit variables. The largest systematic uncertainties in the measurement of C_π0π0 are from the uncertainty on the B background CP content, tag-side interference, and the tagging fractions and asymmetry of B_tag. The major contributions to the systematic error on A_h±π0 are from the detector charge asymmetry and the ΔE and F PDF parameterization.

We extract information on Δα ≡ α_eff − α and α using isospin relations [3] that relate the decay amplitudes of B → ππ decays and measurements of the branching fraction and time-dependent CP asymmetries in the decay B0 → π+π− from BABAR [6].
For each of the six observable quantities required to calculate α [B(B0 → π+π−), B(B± → π±π0), B(B0 → π0π0), S_π+π−, C_π+π−, and C_π0π0], we generate an ensemble of simulated experiments with uncorrelated Gaussian distributions where the width on each distribution is the sum in quadrature of the statistical and systematic errors of that measurement. Sets of generated experiments that result in an unphysical asymmetry or violate isospin are removed from the sample. Using the resulting distributions for Δα and α, we calculate a confidence level (C.L.) for each solution and plot the maximum value of 1 − C.L. of the various solutions in Fig. 3. One can further constrain α by using the fact that the penguin amplitude contribution to B → ππ decays must be very large if α is near 0 or π. We obtain a bound on the magnitude of the penguin amplitude from the branching fraction of the penguin-dominated decay B_s → K+K− [17] by making the conservative assumption of SU(3) breaking at less than ∼100% [18]. In Fig. 3 we also show bounds on α when the size of the penguin amplitude is constrained by this assumption.

In summary, we measure the branching fractions and CP asymmetries in B0 → π0π0, B± → π±π0, and B± → K±π0 decays.

TABLE I. The results for the B0 → π0π0 and B± → h±π0 decays. For each mode we show the number of signal events, N_S, number of continuum events, N_cont, number of B-background events, N_Bbkg, total detection efficiency ε, branching fraction B, and asymmetry A_h±π0 or C_π0π0. Uncertainties are statistical for N_S and N_cont, while for the branching fractions and asymmetries they are statistical and systematic, respectively.

We are grateful for the excellent luminosity and machine conditions provided by our PEP-II colleagues, and for the substantial dedicated effort from the computing organizations that support BABAR. The collaborating institutions wish to thank SLAC for its support and kind hospitality.

FIG. 3. Δα (a) and α (b) expressed as one minus the confidence level as a function of angle. We find an upper bound on Δα of 39° at the 90% confidence level. In (b) the curve shows the bounds on α using the isospin method alone, while the shaded region shows the result with the SU(3) requirement as discussed in the text.
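As a concrete illustration of the Gaussian-ensemble procedure described above: the full isospin construction for α involves discrete ambiguities and is not reproduced here, but the following sketch runs the same toy-experiment machinery on the ratio B(π0π0)/B(π±π0), which enters the isospin bounds. The input values are taken from the abstract; the "unphysical toy" veto shown is just the trivial positivity requirement.

```python
# Toy-experiment propagation: each observable is drawn from a Gaussian whose
# width is the statistical and systematic errors summed in quadrature.
import numpy as np

rng = np.random.default_rng(3)
n_toys = 200_000

B00 = rng.normal(1.47e-6, np.hypot(0.25e-6, 0.12e-6), n_toys)  # B(B0 -> pi0 pi0)
Bp0 = rng.normal(5.02e-6, np.hypot(0.46e-6, 0.29e-6), n_toys)  # B(B+- -> pi+- pi0)

# Discard unphysical toys (negative branching fractions), as in the paper.
ok = (B00 > 0) & (Bp0 > 0)
ratio = B00[ok] / Bp0[ok]

# A 90% C.L. interval from the toy ensemble:
lo, hi = np.percentile(ratio, [5, 95])
print(f"B00/B+0 = {np.median(ratio):.3f}, 90% C.L. interval [{lo:.3f}, {hi:.3f}]")
```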
Hydrophobic Pairwise Interactions Stabilize α-Conotoxin MI in the Muscle Acetylcholine Receptor Binding Site

The present work delineates pairwise interactions underlying the nanomolar affinity of α-conotoxin MI (CTx MI) for the α-δ site of the muscle acetylcholine receptor (AChR). We mutated all non-cysteine residues in CTx MI, expressed the α2βδ2 pentameric form of the AChR in 293 human embryonic kidney cells, and measured binding of the mutant toxins by competition against the initial rate of 125I-α-bungarotoxin binding. The CTx MI mutations P6G, A7V, G9S, and Y12T all decrease affinity for α2βδ2 pentamers by 10,000-fold. Side chains at these four positions localize to a restricted region of the known three-dimensional structure of CTx MI. Mutations of the AChR reveal major contributions to CTx MI affinity by Tyr-198 in the α subunit and by the selectivity determinants Ser-36, Tyr-113, and Ile-178 in the δ subunit. By using double mutant cycles analysis, we find that Tyr-12 of CTx MI interacts strongly with all three selectivity determinants in the δ subunit and that δSer-36 and δIle-178 are interdependent in stabilizing Tyr-12. We find additional strong interactions between Gly-9 and Pro-6 in CTx MI and selectivity determinants in the δ subunit, and between Ala-7 and Pro-6 and Tyr-198 in the α subunit. The overall results reveal the orientation of CTx MI when bound to the α-δ interface and show that primarily hydrophobic interactions stabilize the complex.

Recent studies have used protein toxins to probe active sites of ligand- and voltage-gated ion channels (1-6). By identifying multiple pairwise interactions, these studies define dimensions of the active site according to the known structure of the toxin. The studies also establish the underlying basis for molecular recognition in high affinity protein complexes. Here we probe the muscle AChR with the peptide toxin α-conotoxin MI and use double mutant cycles analysis to identify pairs of residues that confer the nanomolar affinity of the complex.

Mutagenesis and site-directed labeling studies establish that the ligand binding sites of the muscle AChR are formed at interfaces between α1 and either δ, ε, or γ subunits (7,8). Residues on the α1 face of the binding site are found in three well separated regions of the primary sequence, termed loops A, B, and C. Using the numbering system for the mouse α1 subunit, key residues in these loops include Tyr-93 in loop A, Trp-149 in loop B, and Tyr-190 and Tyr-198 in loop C. Similarly, residues on the non-α face of the binding site are found in four well separated regions of the primary sequence, termed loops I through IV. Using the numbering system for the mouse δ subunit, key residues in these loops include Ser-36 in loop I, Trp-57 in loop II, Tyr-113 in loop III, and Ile-178 in loop IV. The observation that these seven loops converge to form a localized binding site has led to a multi-loop model of the major extracellular domain of the AChR (8).

α-Conotoxins are small, disulfide-rich peptides that competitively inhibit muscle and neuronal nicotinic AChRs (9). All α-conotoxins have a conformationally constrained two-loop structure formed by two disulfide bridges. However, the various α-conotoxins differ by the number and type of residues in each loop, allowing specific targeting of receptor subtypes.
α-Conotoxins specific for muscle AChRs include MI, GI, and SI, and contain three residues in the first loop and five in the second (Fig. 1). Muscle-specific α-conotoxins can be further subdivided according to their ability to select between the two AChR binding sites; CTx MI and GI select between the two binding sites by 10,000-fold, whereas CTx SI selects between the sites by 100-fold (10-12). Moreover, CTx MI binds to the α-δ site of the muscle AChR with nanomolar affinity and stays bound for more than 6 h (13). Their site selectivity and exceedingly high affinity make CTx MI and GI powerful probes of the structure of the muscle AChR binding site.

Residues from both α and non-α faces of the AChR binding site stabilize bound CTx MI and include residues from four of the seven loops. The α face contributes Tyr-198 and Tyr-190 from loop C (14), whereas the δ face contributes Ser-36 from loop I, Tyr-113 from loop III, and Ile-178 from loop IV (10). Selectivity of CTx MI for the two AChR binding sites owes to residue differences in δ and γ subunits at these three positions and can be transferred from one binding site to the other by exchanging residues at these key positions. That both α and non-α subunits contribute to CTx MI binding suggests that the toxin bridges the subunit interface, whereas the modular exchangeability across γ and δ subunits suggests the key residues contribute directly to CTx MI binding. By mutating residues in both the AChR and CTx MI, the present work further tests the hypothesis that the toxin bridges the binding site interface. We use double mutant cycles analysis to distinguish interacting from non-interacting pairs of residues in the complex. We find that CTx MI interacts with the α-δ site of the AChR through four hydrophobic residues in its N- and C-terminal loops. Furthermore, the key side chains in CTx MI localize in a hydrophobic cluster that interacts with hydrophobic and aromatic residues from both the α and δ subunits.

EXPERIMENTAL PROCEDURES

Materials-α-Conotoxin MI was purchased from American Peptide Company; 293 human embryonic kidney cell line (293 HEK) and BOSC 23 HEK cell line were from the American Type Culture Collection; 125I-labeled α-bungarotoxin was from NEN Life Science Products; d-tubocurarine chloride was from ICN Pharmaceuticals; and 5,5′-dithiobis-2-nitrobenzoic acid was from Sigma.

Synthesis and Purification of Conotoxin MI-Wild type and mutant α-conotoxin MI were synthesized by standard Fmoc (N-(9-fluorenyl)methoxycarbonyl) chemistry on an Applied Biosystems 431A peptide synthesizer. Cysteine protecting groups (S-triphenylmethyl) were incorporated during synthesis at cysteines 4 and 14, and acetamidomethyl-protecting groups (ACM) were incorporated at cysteines 3 and 8. The linear peptide was purified by reversed-phase high performance liquid chromatography using a Vydac C18 preparative column with trifluoroacetic acid/acetonitrile buffer. The two disulfide bridges were formed as follows: the cysteine S-triphenylmethyl-protecting groups of cysteines 4 and 14 were removed during trifluoroacetic acid cleavage of the linear peptide from the support resin, and the peptide was oxidized by molecular oxygen to form the 4-14 disulfide bond by stirring in 50 mM ammonium bicarbonate buffer, pH 8.5, at 25°C for 24 h.
The peptide was lyophilized and then the second bridge was formed as follows: the ACM-protecting groups on cysteines 3 and 8 were removed oxidatively by iodine as described (15), except the peptide/iodine reaction was allowed to progress for 16 h prior to carbon tetrachloride extraction. Residual iodine was separated from the pure product by high performance liquid chromatography. The purified product was verified by mass spectrometry (Table I). The CTx MI mutants are named as follows: the first letter and number refer to the wild type residue and position, and the following letter is the substituted residue at that position.

Confirmation of Disulfide Bond Synthesis by Ellman's Analysis-To confirm disulfide bond formation, we compared reactions with Ellman's reagent for linear, non-oxidized CTx MI, commercially available CTx MI, and all of our synthetic CTx MI mutants. For each conotoxin, 100 μg was dissolved in 200 μl of 0.1 M phosphate buffer; 4 μl of 5,5′-dithiobis-2-nitrobenzoic acid was added, and the mixture was incubated at room temperature for 30 min. The absorbance at 405 nm was measured. Reactivity for each synthetic mutant is expressed relative to that obtained for the non-oxidized, linear CTx MI (Table I).

Human embryonic kidney cells (293 HEK) were transfected with mutant or wild type subunit cDNAs using calcium phosphate precipitation as described previously (16). BOSC 23 HEK cells were used in some experiments with low expressing mutant AChRs. AChR subunit cDNAs were combined in the ratio 2:1:2 for α, β, and δ subunits, respectively. After 24 h at 37°C, the transfected cells were incubated at 31°C for an additional 48 h. Three days after transfection, intact cells were harvested by gentle agitation in phosphate-buffered saline containing 5 mM EDTA.

Ligand Binding Measurements-α-Conotoxin MI binding to intact cells was measured by competition against the initial rate of 125I-labeled α-bungarotoxin binding (18). After harvesting, the cells were briefly centrifuged, resuspended in potassium Ringer's solution, and divided into aliquots for α-conotoxin binding measurements. Potassium Ringer's solution contains 140 mM KCl, 5.4 mM NaCl, 1.8 mM CaCl2, 1.7 mM MgCl2, 25 mM HEPES, and 30 mg/liter bovine serum albumin, adjusted to pH 7.4 with 10 mM NaOH. Specified concentrations of α-conotoxin were added 60 min prior to the addition of 125I-α-bungarotoxin, which was allowed to occupy approximately half the surface receptors. Binding was terminated by the addition of 2 ml of potassium Ringer's solution containing 600 μM d-tubocurarine chloride. Cells were harvested through Whatman GF-B filters using a Brandel cell harvester and washed three times with 3 ml of potassium Ringer's solution. Prior to use, the filters were soaked in potassium Ringer's solution containing 4% skim milk for 2 h. Nonspecific binding was determined in the presence of 300 μM d-tubocurarine. The total number of α-bungarotoxin sites was determined by incubation with radiolabeled toxin for 120 min. The initial rate of toxin binding was calculated as described previously to yield the fractional occupancy of the ligand (18). Binding measurements were analyzed according to the Hill equation (see below; Tables II and III).

RESULTS

Mutagenesis of CTx MI-Mutations of CTx MI were generated by standard peptide synthesis methods as described under "Experimental Procedures." Molecular weight and the presence of two disulfide bonds were verified by mass spectrometry and by negative reaction with Ellman's reagent (Table I).
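The explicit form of the Hill equation was lost in extraction. For competition against the initial rate of 125I-α-bungarotoxin binding, it is typically written as below, where Y is the normalized fractional occupancy, [T] the conotoxin concentration, K_d the apparent dissociation constant, and n_H the Hill coefficient; this standard form is an assumption, not a quotation from the paper.

```latex
Y = \frac{1}{1 + \left([T]/K_d\right)^{n_H}}
```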
Because CTx MI binds 10,000-fold more tightly to the α-δ than to the α-γ interface of the AChR (10), we measured CTx MI binding to the α2βδ2 pentameric form of the AChR expressed on the surface of 293 HEK cells. As measured by competition against the initial rate of 125I-α-bungarotoxin binding, our synthetic CTx MI bound with a K_d of 0.94 nM (Table II), indistinguishable from commercially available CTx MI.

We mutated all non-cysteines in CTx MI and measured binding of each mutant conotoxin to α2βδ2 pentamers. Our mutagenic scan of CTx MI reveals four residues essential for high affinity binding as follows: Ala-7 and Pro-6 in the N-terminal loop and Tyr-12 and Gly-9 in the C-terminal loop (Fig. 2; Table II). Within the N-terminal loop, mutation of Ala-7 to valine decreases affinity nearly 10,000-fold, whereas mutation to serine decreases affinity by 50-fold (Table II), indicating that position 7 requires a side chain that is both small and hydrophobic. Mutation of the adjacent Pro-6 to glycine decreases affinity nearly 10,000-fold, whereas mutation to alanine or valine decreases affinity 400- and 800-fold, respectively, suggesting the need for both restricted rotation of the peptide backbone and hydrophobic contributions at position 6. Within the C-terminal loop of CTx MI, mutation of Tyr-12 to threonine decreases affinity 10,000-fold, whereas mutation to phenylalanine maintains high affinity (Table II), indicating a purely aromatic contribution at position 12. Mutation of Gly-9 to alanine decreases affinity 1400-fold, whereas mutation to serine decreases affinity 5000-fold, indicating a nearly absolute requirement for glycine, which is small and can accommodate unique combinations of φ and ψ bond angles (19). Surprisingly, the mutants acetyl-G1 and K10Q, which neutralize positive charges previously thought to be essential for CTx MI bioactivity (11,12), produce relatively small changes in affinity (Fig. 2). The overall mutagenesis results reveal four energetically equivalent sources of high affinity in CTx MI, making them potential points of interaction with the α-δ binding site.

The four bioactive residues of CTx MI map to a restricted region of its three-dimensional structure (Fig. 3; Ref. 19), indicating that the contact surface at the ligand binding site is likely to be small and complementary to the active region of the toxin. Side chains of the bioactive residues occupy corners of an irregular trapezoid 6.4 to 8.0 Å long and 3.9 to 6.6 Å wide, creating a hydrophobic patch in an otherwise hydrophilic peptide. The ring structures of Pro-6 and Tyr-12 stack parallel to each other, whereas Ala-7 protrudes at right angles to Pro-6, and the methylene α-carbon of Gly-9 leaves a pronounced cavity rimmed by hydrophobic side chains. Thus bioactivity of CTx MI owes to a three-fingered hydrophobic structure at one end of the toxin.

Contributions of the δ Subunit to CTx MI Binding-Previous work (10) showed that residue differences at three equivalent positions of the γ and δ subunits confer the 10,000-fold selectivity of CTx MI for the α-δ over the α-γ interface of the AChR. We therefore re-examined these selectivity determinants as potential points of interaction with CTx MI. Single point mutations of the selectivity determinants, δS36K, δY113S, and δI178F, produce only modest changes in affinity for CTx MI (Fig. 5; Table III). However, when S36K and I178F are combined into a single δ subunit, affinity for CTx MI decreases considerably more than with either mutation alone (Fig. 5), indicating that these residues are interdependent in stabilizing CTx MI (10,14).
The remaining combinations of double mutations, (S36K/Y113S) and (Y113S/I178F), produce roughly additive changes in affinity, indicating little interdependence of these selectivity determinants. When mutations of all three selectivity determinants are combined into a single δ subunit, CTx MI affinity falls 10,000-fold to that of low affinity α2βγ2 pentamers (Fig. 5). Thus all three selectivity determinants in the δ subunit are candidates for interaction with CTx MI.

Pairwise Interactions between CTx MI and the α-δ Site-Thermodynamic double mutant cycles analysis has been widely used to identify noncovalent interactions between residues within a single protein and between residues joining different proteins (1-6, 20). To generate a mutant cycle for pairs of CTx MI and AChR mutations, dissociation constants are determined for the four possible combinations of wild type (W) and mutant (M) receptors (r) and conotoxins (t): WrWt, MrWt, WrMt, and MrMt. The resulting dissociation constants are then used to calculate a coupling coefficient Ω (20),

Ω = [K_d(MrWt) × K_d(WrMt)] / [K_d(WrWt) × K_d(MrMt)].    (Eq. 1)

If Ω equals unity the pair of residues does not interact, whereas if Ω deviates from unity the pair of residues interacts. To identify pairs of interacting residues, we focus on residues in the AChR and CTx MI that significantly affect affinity of the complex and then apply double mutant cycles analysis to all possible pairs of receptor-conotoxin mutations.

We find the strongest interaction between the triad of selectivity determinants in the δ subunit and Tyr-12 of CTx MI; binding curves for the corresponding mutant cycle are shown in Fig. 6A. Individually, mutations in either the receptor or the conotoxin decrease affinity by 10,000-fold. However, when mutations in both the receptor and conotoxin are examined together, affinity decreases by 100,000-fold, which is 3 orders of magnitude less than predicted if the contributions were additive. Double mutant cycles analysis reveals a coupling coefficient of 1584 for the δ(S36K + Y113S + I178F)/Y12T pair, corresponding to an interaction free energy of 4.3 kcal/mol (Table IV).

Whereas mutant cycles analysis can identify interacting pairs of residues, it can also identify non-interacting pairs of residues, as illustrated for the pair αY198T/N11K (Fig. 6B). The receptor mutation αY198T decreases affinity by 1000-fold, whereas the CTx MI mutation N11K decreases affinity by 30-fold. When the two mutations are examined together, affinity decreases by 30,000-fold, which is purely additive, demonstrating that αTyr-198 and Asn-11 do not interact. Thus double mutant cycles analysis readily distinguishes interacting from non-interacting pairs of residues in the AChR-CTx MI complex.

Pairwise Interactions between CTx MI and the α Subunit-Applied to the α subunit face of the AChR binding site, mutant cycles analysis reveals that Tyr-198 interacts significantly with the bioactive residues Ala-7, Pro-6, and Tyr-12 in CTx MI (Fig. 7 and Table IV). The coupling coefficient for the αY198T/A7V pair is 638 and corresponds to an interaction free energy of 3.8 kcal/mol. The αY198T/P6G pair exhibits a weaker coupling coefficient of 100, perhaps owing to reduction of a joint contact surface formed by Ala-7 and Pro-6 or to increased conformational flexibility of Ala-7 caused by the P6G mutation.
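As a worked check of the coupling arithmetic, the round fold-changes quoted in the text can be run through Eq. 1; they come out near, not exactly at, the reported Ω = 1584 and 4.3 kcal/mol because the actual K_d values are not exact powers of ten. The interaction free energy is taken as ΔΔG_int = RT ln Ω.

```python
# Double mutant cycle arithmetic for the delta(S36K + Y113S + I178F)/Y12T
# pair, with Kd values expressed relative to the wild-type complex.
import math

K_WrWt = 1.0      # wild-type receptor, wild-type toxin (reference)
K_MrWt = 1.0e4    # mutant receptor, wild-type toxin (~10,000-fold weaker)
K_WrMt = 1.0e4    # wild-type receptor, mutant toxin (~10,000-fold weaker)
K_MrMt = 1.0e5    # double mutant (~100,000-fold weaker, not 10^8)

omega = (K_MrWt * K_WrMt) / (K_WrWt * K_MrMt)
RT = 0.593        # kcal/mol at ~25 C
ddG_int = RT * math.log(omega)

print(f"omega = {omega:.0f}")               # ~1000, same order as 1584
print(f"ddG_int = {ddG_int:.1f} kcal/mol")  # ~4.1, close to the reported 4.3
```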
The weaker coupling coefficient of 39 for the Y12T/αY198T pair is likely due to an indirect interaction in which the Y12T mutation produces global changes that propagate to either Ala-7 or Pro-6 (Fig. 3), both of which couple strongly to αTyr-198. Alternatively, the Y12T mutation may allow reorientation of the conotoxin due to loss of the interaction between Tyr-12 and the δ subunit (see Fig. 6A). The fourth bioactive residue in CTx MI, Gly-9, shows a conspicuous lack of coupling to any of the conserved tyrosines in the α subunit. The rank order of coupling to αTyr-198, Ala-7 > Pro-6 > Tyr-12 >> Gly-9, suggests that CTx MI binds with Ala-7 opposing Tyr-198 of the α subunit.

Pairwise Interactions between CTx MI and the δ Subunit-Applied to the δ subunit face of the AChR binding site, mutant cycles analysis reveals only weak coupling for the 24 pairs of receptor-conotoxin mutations (Fig. 8; Table IV). The weak coupling mirrors the relatively small changes in CTx MI affinity produced by mutations of individual selectivity determinants in the δ subunit (Fig. 5). On the other hand, when all three selectivity determinants are mutated in the same δ subunit, we find strong coupling to Tyr-12 of CTx MI (Fig. 6A). To identify which of the three selectivity determinants are interdependent in coupling to Tyr-12, we paired double mutations in the δ subunit against the Y12T mutant. Pairing either of the double mutations, δ(S36K/Y113S) or δ(Y113S/I178F), against Y12T reveals weak or undetectable coupling (Fig. 9). However, pairing δ(S36K/I178F) against Y12T reveals significant coupling with Ω equal to 200 (Figs. 6A and 9). Thus Ser-36 and Ile-178 jointly stabilize the tyrosine side chain at position 12 of CTx MI.

To look for additional interactions between CTx MI and the δ subunit, we paired each CTx MI mutation against the triple mutation δ(S36K/Y113S/I178F). Mutation of each of the four bioactive residues in CTx MI reveals significant coupling to the triad of selectivity determinants in the δ subunit, with the rank order Tyr-12 > Ala-7 > Gly-9 > Pro-6 (Fig. 10). The rank order of coupling suggests CTx MI binds with Tyr-12 opposing the δ subunit. The much weaker coupling between Tyr-12 and Tyr-198 of the α subunit (Fig. 7) further supports orientation of Tyr-12 toward the δ subunit.

For some residues in CTx MI, we detect strong coupling to residues in both α and δ subunits (Figs. 7 and 10). The conotoxin/receptor pairs A7V/δ(S36K/Y113S/I178F) and A7V/αY198T show equivalent coupling coefficients (Figs. 7 and 10; Table IV). That Ala-7 couples equivalently to the same residues in α and δ subunits suggests either close approach of αTyr-198 and the triad δ(Ser-36/Tyr-113/Ile-178) or interaction of these residues with a common residue that couples to Ala-7.

FIG. 7. Coupling between aromatic residues in the α subunit and residues in CTx MI. The coupling coefficients (Ω) were determined for pairs of AChR and CTx MI mutations as described in the text and Ref. 20. The Ω values are given in Table IV.

We also find equivalent coupling coefficients for the pairs P6G/δ(S36K/Y113S/I178F) and P6G/αY198T. However, because proline in the conotoxin is mutated, the apparent equivalent coupling to α and δ subunits likely owes to global conformational changes in the conotoxin that prevent it from bridging the binding site interface. Finally, Gly-9 interacts with residues in the δ but not the α subunit, as the G9S/δ(S36K/Y113S/I178F) pair shows strong coupling (Fig. 10), but the G9S/αY198T pair shows weak coupling (Fig. 7).
Thus Ala-7 in CTx MI couples strongly to residues in both α and δ subunits, but Tyr-12 and Gly-9 couple preferentially to residues in the δ subunit. The overall results demonstrate that the three selectivity determinants in the δ subunit, Ser-36, Tyr-113, and Ile-178, together provide the major source of stabilization for CTx MI in the AChR binding site. Furthermore, these selectivity determinants in the δ subunit, together with Tyr-198 in the α subunit, produce the nanomolar affinity between CTx MI and the α-δ binding site.

DISCUSSION

The present work establishes the essential hydrophobic nature of the interaction between CTx MI and the α-δ site of the muscle AChR. Together with the known structure of CTx MI, the results place into close proximity key residues from both α and δ subunits. The four essential bioactive residues in CTx MI localize to a restricted region of its three-dimensional structure, creating a hydrophobic surface for presentation to the α-δ binding site. Three of the four bioactive residues in CTx MI interact with residues from both α and δ subunits, suggesting remarkably close association of the two subunits. The rank order of the strength of the interactions establishes the orientation of CTx MI in the binding site; Ala-7 orients toward the α subunit and Tyr-12 toward the δ subunit. Of the seven loops in the AChR known to converge to the binding site interface, four loops contain residues that interact with CTx MI as follows: Tyr-198 in loop C of the α subunit and Ser-36, Tyr-113, and Ile-178 in loops I, III, and IV in the δ subunit. The multiple focal interactions, together with the nanomolar affinity of the complex, suggest highly complementary surfaces of the toxin and receptor at the region of contact.

NMR studies have established the solution structure of CTx MI (19), which appears as a partially flattened tripod, with feet formed by side chains of Arg-2, His-5, and Lys-10 (Fig. 3). The space between the His-5 and Lys-10 appendages contains the bioactive residues, which extend their side chains outward from the convex side of the tripod. Side chains of the bioactive residues occupy corners of an irregular trapezoid, with the surface contour between the corners falling away into a pronounced cavity. The shape of the bioactive region is like that of an outstretched left hand with the outer two fingers closed to the palm. The Pro-6 and Tyr-12 side chains extend as fingers parallel to each other, the Ala-7 side chain extends as the thumb at right angles, and the α-carbon of Gly-9 recedes to form the cavity between the fingers and the palm. Thus positioned on the convex side of the tripod scaffold, the bioactive region presents three hydrophobic fingers to the α-δ subunit interface.

Considerable work demonstrates that both α and non-α subunits form the AChR binding sites (7,8). However, the pairwise coupling observed here demonstrates even closer association of α and δ subunits than previously thought. The evidence comes from the very strong coupling between an individual residue in CTx MI and residues from both α and δ subunits; Ala-7 and Pro-6 in CTx MI each couple strongly to Tyr-198 in the α subunit and to the triad of selectivity determinants in the δ subunit. This interpretation of close association of subunits seems inescapable, whether Tyr-198 in the α subunit and the selectivity determinants in the δ subunit contact CTx MI directly or whether they contact a third residue in the binding site that does.
In addition to the pairwise interactions detected here, the key binding site determinants in the AChR satisfy other criteria for direct interaction with CTx MI. First, these residues are exposed on the surface of the protein. Surface exposure of αTyr-198 was established by photoaffinity labeling by nicotine (21) and by interaction of αTyr-198 with one of the two quaternary nitrogens in the competitive antagonist dimethyl-d-tubocurarine (22). Similarly, δTyr-113 is equivalent to γTyr-111 in the Torpedo AChR, which was photoaffinity labeled by [3H]d-tubocurarine (23), and is close to δThr-119 and δLeu-121, which were accessible to methanethiosulfonate reagents when mutated to cysteine (24). Also, δIle-178 neighbors δAsp-180, which was cross-linked by a bifunctional reagent tethered to the α subunit (25). Second, extensive mutagenesis studies have detected only these determinants as stabilizing CTx MI. Mutagenesis of loops A, B, and C in the α subunit revealed contributions of only Tyr-198 and Tyr-190 to CTx MI affinity (14). Likewise, screening the entire extracellular domains of γ and δ subunits by constructing chimeras revealed only δSer-36, δTyr-113, and δIle-178 as contributors to CTx MI binding (10). The chimera studies thus exclude all other residues differing between γ and δ subunits as contributors to CTx MI binding. Whereas unexamined residues in the α and δ subunits remain as formal possibilities for directly contacting CTx MI, the large contributions to affinity and strong pairwise interactions we observe are best explained by direct contributions of αTyr-198, δSer-36, δTyr-113, and δIle-178 to CTx MI binding.

Both the CTx MI and the α-δ binding site contribute hydrophobic and aromatic residues to form the high affinity complex. The stabilization likely owes to exclusion of water from the predominantly hydrophobic surfaces and to hand-in-glove complementarity between surfaces of the toxin and the binding site that maximize van der Waals interactions. The pairwise coupling observed here, together with the structure of CTx MI, suggests the following picture of the complex. Isoleucine at position 178 of the δ subunit fits into the hydrophobic cavity in CTx MI, interacting with all four bioactive residues (Fig. 3). The stabilization by δIle-178 depends on Ser-36 located on the same δ subunit (Figs. 9 and 10), which may supply a hydrogen bond that positions δIle-178 or, due to its small size, may allow the isoleucine side chain to penetrate into the hydrophobic cavity of CTx MI. The third determinant of the δ subunit, δTyr-113, may interact with the rim of the hydrophobic cavity of CTx MI, lodging closest to Ala-7 and Gly-9 (Fig. 10). The aromatic hydroxyl of δTyr-113 likely hydrogen bonds to an acceptor not yet identified, as its mutation to phenylalanine markedly decreases affinity of CTx MI (S. Sine, unpublished data). Finally, Tyr-198 of the α subunit completes the stabilization through hydrophobic interactions with Ala-7 and Pro-6 of CTx MI (Fig. 7). The aromatic hydroxyl of αTyr-198 is not essential for high affinity binding, as its mutation to phenylalanine is without effect. This picture of the complex represents a testable hypothesis for future mutagenesis and site-directed labeling studies.

The remaining portion of CTx MI is hydrophilic, comprising the three legs of the tripod structure, but does not interact with key residues in the α-δ binding site. However, because the mutations R2Q, H5A, K10Q, N11K, and S13A decrease CTx MI affinity from 20- to 50-fold, these hydrophilic residues contribute importantly to the overall high affinity of the complex. Arg-2 and Lys-10 of CTx MI may partner in long range electrostatic interactions with anionic side chains in either the α or δ subunit.
However, because the mutations R2Q, H5A, K10Q, N11K, and S13A decrease CTx MI affinity from 20-to 50-fold, these hydrophilic residues contribute importantly to the overall high affinity of the complex. Arg-2 and Lys-10 of CTx MI may partner in long range electrostatic interactions with anionic side chains in either ␣ or ␦ 2 S. Sine, unpublished data. The CTx MI⅐AChR complex exhibits similar overall features to those found in other high affinity protein-protein complexes. The picomolar complex formed between fasciculin and acetylcholinesterase is held together largely by hydrophobic interactions between alkyl and aromatic side chains that closely complement surfaces of both partners of the complex (26). The complex formed between growth hormone and its receptor contains two strong hydrophobic contacts at its center, flanked by multiple weaker contacts mediated by charged groups (27). The binding interface was pointed out to be like that of a crosssection through a folded globular protein, with hydrophobic residues inside and hydrophilic residues outside. Analogously, the core of the CTx MI⅐AChR complex is strikingly hydrophobic, whereas the periphery is hydrophilic. Hydrophobic contacts are the predominant source of the nanomolar affinity of the CTx MI⅐AChR complex. Free energy of burying hydrophobic residues has been estimated to be Ϫ15 cal/mol per Å 2 of hydrophobic surface (28). The total accessible surface area of the seven side chains of the complex is 730 Å 2 , giving a potential hydrophobic contribution of Ϫ10.9 kcal/mol, which approaches Ϫ12.2 kcal/mol of binding free energy expected for a complex with nanomolar affinity. Also, residues flanking the selectivity determinants in the ␦ subunit are particularly hydrophobic, with the following local sequences: VALSL (residues 33-37), LVY (111-113), and WIII (176 -179). Thus residues flanking the selectivity determinants may introduce additional hydrophobicity at the ␦ subunit portion of the binding site interface. Hydrophobic contacts are therefore the predominant sources of the nanomolar affinity of the CTx MI⅐AChR complex. Solution structural studies of CTx MI demonstrate slowly interconvertible major and minor conformations, with the major conformation representing approximately 80% of the total (19,30). The structure of CTx MI depicted in Fig. 3 is that of the major conformation, but the structure of the minor conformation has not been reported. However, for CTx GI, which has a similar pharmacological fingerprint to CTx MI, atomic coordinates of both major and minor conformations have been reported (31). Comparison of all reported structures of CTx GI indicates that the major conformers of CTx MI and GI are structurally similar (31)(32)(33)(34). If the two conformers for CTx GI are comparable to those for CTx MI, we can ask whether any of the four bioactive residues change positions between major and minor conformations. Side chains of Pro-6, Ala-7, and Gly-9 have similar coordinates in CTx MI and GI and change very little between major and minor conformations. However, to achieve the minor conformation of CTx GI, the peptide backbone between glycine 9 and serine 13 twists so the side chain of Tyr-12 moves out of the hydrophobic pocket to protrude on the opposite side of the toxin (31,32). If the minor conformer is the one bound to the AChR, the ␣ and ␦ subunits would be esti-mated to be farther apart than if the major conformer is bound. 
Additionally, when we mutate CTx MI, one conformer may be favored over the other, potentially affecting the affinity of the complex. For example, glycine at position 9 may stabilize the peptide backbone of residues 9-13 to maintain the major conformation of the native structure, owing to its ability to accommodate unique φ and ψ bond angles. Mutation of glycine 9 to serine or valine could potentially destabilize the major conformation, allowing the bioactive Tyr-12 to move out of the hydrophobic pocket; affinity would increase or decrease, depending on which of the two conformers is bound in the high affinity complex. Additional studies are required to determine whether the minor conformation of CTx MI is similar to that of CTx GI and whether CTx MI changes conformation upon binding to the AChR. Previous studies suggested that muscle-specific α-conotoxins interact with the AChR by presenting appropriately spaced positive charges to anionic loci in the binding site interface (11,12,29). Although α-conotoxins MI, GI, and SI each contain multiple positively charged nitrogens, the present work clearly demonstrates that these positive charges are not the predominant source of high affinity of this class of α-conotoxins. On the contrary, each of the muscle-specific α-conotoxins contains the four hydrophobic residues we identify as essential for bioactivity: Pro-6, Ala-7, Gly-9, and Tyr-12 (Fig. 1). Like CTx MI, CTx GI selects between α-γ and α-δ binding sites by 10,000-fold, but CTx SI is not as selective and binds with only micromolar affinity to muscle AChRs (11). The proline at position 10 of CTx SI likely distorts the C-terminal loop such that Tyr-12 is no longer positioned to interact with its hydrophobic counterpart in the AChR binding site. Perhaps the natural target for CTx SI better accommodates Pro-10 and Tyr-12 in the C-terminal loop. Thus muscle-specific α-conotoxins rely on hydrophobic rather than polar or electrostatic interactions to achieve high affinity. The overall results reveal the essential hydrophobic nature of the interaction between CTx MI and the α and δ subunits of the AChR binding site interface. The high affinity of CTx MI is due to three hydrophobic fingers extending from an otherwise hydrophilic scaffold. Analogously, a hydrophobic pharmacophore may underlie the bioactivity of other members of the α-conotoxin family (35). The region of contact at the AChR binding site comprises closely packed residues from both the α and δ subunits. The pairwise interactions we identify provide spatial constraints to refine our picture of the AChR binding site.
2018-04-03T00:41:41.543Z
2000-04-28T00:00:00.000
{ "year": 2000, "sha1": "4f73aa92dd3a4309ad8f5e4de336a9fe34f5741c", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/275/17/12692.full.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "70e0ffdd007eee8b54b6a22d06bf23ead21e25a6", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
17210986
pes2o/s2orc
v3-fos-license
The Effect of Novel Research Activities on Long-term Survival of Temporarily Captive Steller Sea Lions (Eumetopias jubatus) Two novel research approaches were developed to facilitate controlled access to, and long-term monitoring of, juvenile Steller sea lions for periods longer than typically afforded by traditional fieldwork. The Transient Juvenile Steller sea lion Project at the Alaska SeaLife Center facilitated nutritional, physiological, and behavioral studies on the platform of temporary captivity. Temporarily captive sea lions (TJs, n = 35) were studied, and were intraperitoneally implanted with Life History Transmitters (LHX tags) to determine causes of mortality post-release. Our goal was to evaluate the potential for long-term impacts of temporary captivity and telemetry implants on the survival of study individuals. A simple open-population Cormack-Jolly-Seber mark-recapture model was built in program MARK, incorporating resightings of uniquely branded study individuals gathered by several contributing institutions. A priori models were developed to weigh the evidence of effects of experimental treatment on survival with covariates of sex, age, capture age, cohort, and age class. We compared survival of the experimental treatment to a control group of n = 27 free-ranging animals (FRs) that were sampled during capture events and immediately released. Sex has previously been shown to differentially affect juvenile survival in Steller sea lions. Therefore, sex was included in all models to account for unbalanced sex ratios within the experimental group. Considerable support was identified for the effects of sex, accounting for over 71% of total weight for all a priori models with ΔAICc < 5, and over 91% of model weight after removal of pretending variables. Overall, most support was found for the most parsimonious model based on sex and excluding experimental treatment. Models including experimental treatment were not supported after post-hoc considerations of model selection criteria. However, given the limited sample size, alternate models including effects of experimental treatments remain possible and effects may yet become apparent in larger sample sizes. Introduction The two distinct population segments (east and west) of Steller sea lions (Eumetopias jubatus) have been the subject of extensive study in the past few decades due to substantial decline in some portions of their range (e.g., [1,2]). As part of the intensive research effort to better understand the population dynamics of the western population segment, two novel approaches were developed in order to gain extended access to wild individuals and to determine potential causes of mortality. Temporary captivity, an approach here referred to as the Transient Juvenile Steller sea lion Project (TJ), was implemented to gain greater access to individuals, while attempting to minimize disturbance to the population at large [3]. The project has facilitated studies with nutritional, physiological, and behavioral contributions [4-8]. Long-term tracking of these individuals (Transient Juveniles, TJs) was facilitated through hot-iron brands on their left flank upon release, as mandated by terms of project-specific handling authorization by the federal government. An additional cohort of catch-and-release, branded, free-ranging sea lions (n = 27, FRs) served as a control group, similar to those of other institutions using brand-resight methods for population monitoring (e.g., [9]).
Life History Transmitters (LHX tags; [10]) were intraperitoneally implanted into n = 35 TJs under standard aseptic surgical procedures and gas anesthesia [11], starting in 2005 and continuing through 2011. LHX tags have a projected life span of 10 or more years and generate end-of-life, post-mortem known-fate data [9,10,11,12]. The current analysis was implemented to evaluate the effect of these two novel approaches on the long-term survival of endangered Steller sea lions. Many efforts have been made to study the survival and behavior of Steller sea lions utilizing external satellite and dive tags [7,13], as well as mark-resight studies of flipper-tagged and hot-iron branded individuals [14-16]. However, the impact of these studies on survival is typically only documented in the short term, with an assumption of little impact after the handling event. This is especially true of studies involving tagging of animals for resight purposes, but has not been evaluated in many species due to logistical constraints [17,18]. The analysis presented here was facilitated by a decade of shared data from multiple institutions, allowing for a survival analysis of treatment groups. The Cormack-Jolly-Seber (CJS) model design is an open population mark-recapture model that includes survival and recapture or resight probabilities, relying on live encounters [19,20]. Encounter histories of individuals consist of simple binary data recording each encounter occasion as presence or absence during a resighting period. Models predicting survival within these groups are then based on covariates in various combinations, both continuous and categorical [21]. Using this design template, we assessed survival between the control (FR) and experimental groups that participated in captivity and received LHX implants (TJ), with the addition of several demographic covariates. Study animals All work was carried out under National Marine Fisheries Service permits #881-1668, 881-1890 and 14335. All work was approved by the Alaska SeaLife Center, Oregon State University and University of Alaska Fairbanks IACUCs. Included in this study, 62 juvenile Steller sea lions were captured via an underwater lasso technique between 2005 and 2011 [3,22]. Thirty-five of these individuals (TJs) were retained in temporary captivity for research purposes for up to a maximum of three months and received dual LHX implants, with only two individuals, at the start of the project, receiving a single implant [3,11]. Of the TJs, 27 were male and 8 were female. Twenty-seven animals were sampled, hot-iron branded and immediately released (FRs, 12 male and 15 female, [3,23]). All individuals received a unique 4-digit alphanumeric brand. Age at capture was determined from canine length as per King et al. (2007); the approximate age in months was used to back-calculate to the closest mean peak pupping date, June 10th, to obtain an estimated birth date [24]. Three individuals did not have their canines measured during sampling and were aged from a standard length-to-age correlation [25]. Study animals were captured at a mean age of 1.6 ± 0.51 years, as estimated by canine length or extrapolation from standard length, and binned into a 1 (14-24 months) or 2 (25-36 months) year age group (pers. comm. J. Maniscalco). All brand resights were compiled into a database with, at minimum, the location, date, brand readability and observer confidence in the accuracy of the resight. Only those resights that could be confirmed with an accompanying photograph were used in this analysis.
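To make the encounter-history format concrete, the sketch below (our illustration, with hypothetical animal IDs, not the authors' code) reduces photo-confirmed resight records to the nine-occasion binary strings that program MARK expects:

```python
# Hypothetical resight records: animal ID -> years with >= 1 confirmed resight.
resight_years = {
    "TJ-01": {2005, 2007, 2008},
    "FR-02": {2006},
}
occasions = list(range(2005, 2014))  # nine annual occasions, 2005-2013

def encounter_history(years_seen):
    # '1' if seen at least once that year, else '0'; within-year frequency
    # is deliberately ignored, as described in the text.
    return "".join("1" if y in years_seen else "0" for y in occasions)

for animal, years in resight_years.items():
    print(animal, encounter_history(years))
# TJ-01 101100000
# FR-02 010000000
```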
Model development All resight data for each brand were reorganized into a simple binary code encounter history for input into program MARK [20]. Individual encounter histories contained nine resight intervals between the months of March and November, from 2005 to 2013. Each resight year was set at a default '0' for no resight events, and a '1' if a resight occurred, regardless of the frequency. Models were developed a priori and parameterized around demographic covariates of sex, age, age at capture (14-24 or 25-36 months grouping bins), cohort group, and time. Age class was also included as a covariate, with two juvenile classes of 14-24 months and 25-36 months of age as well as a single adult cohort including animals older than 36 months, to account for differential juvenile and adult survival. Sex, age at capture, and cohort were also included separately as time-dependent factors. Models were subsequently ranked using Akaike Information Criterion model selection methods, corrected for small sample size (AICc, [20,26,27]). These covariates were used for building survival and resight probabilities through the CJS method for estimation. Resight effort was included in all p models as a covariate in order to properly scale yearly differences in institutional effort and prevent inflation of resight probabilities. Resight effort data, in the form of the number of resighting survey days in a given year, were normalized on a scale of 1 to 10. Each model was run through MARK via the RMark package [28]. Grouping variables for all animals included the age at which an individual was first sampled (to account for differences in ages within resighting periods), in addition to sex and cohort groups. Our analysis included our experimental group that experienced temporary captivity and received LHX implants (TJs, n = 35) as well as those that were marked and released immediately following capture (FRs, n = 27). Demographic covariates detailed above were also included, with models doubled to have an identical set that included a model term (TJFR) to test the relative importance of our experimental group in model ranking. Best models, deemed to have considerable support in model ranking, were determined as being within 3 ΔAICc units, but were also considered equivalent and indistinguishable in their support level [29]. These highest ranked models were then used to generate comparison sets of survival and resighting probability models. Models within 5 ΔAICc were deemed to have minor support in the data, and their implications for our study were also discussed. Goodness-of-fit testing was used for global models of each grouping factor to assess the potential for model overfitting through the program U-CARE [30]. Sex and treatment had Chi-squared values that reflected non-significant P-values at an alpha value of 0.05. The derived c-hat values were all approximately equal to 1, so no adjustments of the a priori models for overdispersion were necessary. Results A Fisher's exact test found that the sex ratio was skewed within groups by comparing the actual ratios in experimental groups to an equal sex ratio contingency table. TJs had a significantly skewed sex ratio (p = 0.02), and FRs did not (p = 0.81). A Fisher's exact test also found that age at capture was significantly skewed between TJs and FRs (p = 0.001), but not within the TJs alone (p = 0.29). In all extant studies, age and sex have been shown to have the strongest and most consistent effects on the survival of juvenile pinnipeds [1,9,15,31].
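The sex-ratio comparison reported above can be illustrated with scipy; the exact contingency table the authors used is not given, so the 'equal ratio' column below (a same-size group split 18:17) is our assumption:

```python
from scipy.stats import fisher_exact

# Rows: males, females. First column: observed TJ group (27 M, 8 F);
# second column: a hypothetical same-size group with a near-equal sex ratio.
table = [[27, 18],
         [8, 17]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```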
Therefore, sex was included as a mandatory covariate in all models. The known effect of age on survival diminishes with increasing age, and capture age was therefore not forced into each model. Model ranking results and beta values for top models are included in S1 and S2 Tables, respectively. Top ranked models (≤3 ΔAICc) included sex (Rank 1) and the additive effect of sex and the experimental group factor (Rank 2). The two top models (≤3 ΔAICc) shared 91% of the overall model weight. However, for all nested models the addition of the experimental group factor consistently resulted in an increase of approximately two AICc units (see S1 Table), suggesting the possibility of this being a 'pretending variable' [32]. Appearance of the experimental group factor was inconsistent (in models ranked 2, 4, 7 and 8), and addition of the factor altered the deviance by 0.3% or less in all cases (S1 Table). The treatment factor beta parameter estimate for the highest ranked model including the factor (Rank 2) exhibits large confidence intervals that span zero (S2 Table), as was the case for all models including the treatment. Together, these considerations support the notion that the addition of the experimental group factor does not explain any additional variance in the response; the difference in AICc is simply driven by the penalty associated with the addition of a variable. This in turn led to the post-hoc removal of models 2, 4, 7 and 8 from consideration [32]. Model weights were then recomputed, leading to an evidence ratio for our top model of 10.4 (Table 1). Our experimental group parameter was ultimately removed due to its lack of improvement of the fit. These models represent the most parsimonious models with the most support (≤5 ΔAICc) found in our analysis of long-term survival. Models demonstrating differential survival between our control group and the animals that were temporarily captive and received LHX implants were deemed not plausible and were removed due to their suspected inclusion of pretending variables. The resulting ranking provides substantial evidence that, within the models considered and our limited data set, sex has a considerable effect on survival. Males had a lower averaged survival than females (Table 2). Mean survival rates for individuals in the experimental group were slightly lower than in our control group, though the confidence intervals widely overlap (Table 2). Minor support (≤5 ΔAICc) was also identified for the additive effect of age class and sex on survival, but this model carried only 8.8% of the overall model weight (Table 1). No support was ultimately identified for the effects of experimental treatment, age, cohort, or capture age, nor for any time-dependent forms of these covariates. The cumulative survival rate from 15 months to age 5 for the TJ group overall was calculated to be 0.43, based on the generated annual age-specific survival rates (Fig 1). Model averaged resighting probabilities are presented in Table 3. All demographic data and brand resighting histories for study individuals are contained in S3 and S4 Tables, respectively. Discussion Survival rates are well understood for wild animals where handling for tagging is minimal, but the impact of more intensive sampling and temporary captivity had yet to be determined. In the current study, our analysis was used to investigate potential long-term effects of two novel research techniques utilized on juvenile Steller sea lions through evidence-based model ranking.
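The model-ranking quantities used above (Akaike weights and evidence ratios) follow directly from the AICc scores. A minimal sketch with hypothetical scores (not the values in S1 Table); for reference, an evidence ratio near the reported 10.4 corresponds to a ΔAICc of roughly 4.7 between the top two models:

```python
import math

def akaike_weights(aicc_scores):
    """Akaike weights from a list of AICc scores (smaller is better)."""
    best = min(aicc_scores)
    rel_likelihoods = [math.exp(-(a - best) / 2.0) for a in aicc_scores]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

scores = [100.0, 104.7, 106.2]   # hypothetical AICc values for three models
w = akaike_weights(scores)
print([round(x, 3) for x in w])  # Akaike weights, summing to 1
print(round(w[0] / w[1], 1))     # evidence ratio of model 1 over model 2, ~10.5 here
```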
We used solely demographic covariates of project individuals, including sex, age, age class, experimental cohort, and age at capture, as well as time-dependent forms of these covariates. After the identification of a considerable sample size bias by sex in the experimental treatment group, only those models that included sex as a covariate were considered. Maniscalco [31] reported survival rates based on resights of 199 Steller sea lions branded as pups on the Chiswell Island rookery in our study region between 2005 and 2010. He reported cumulative survival rates from the age of 3 weeks through 4 years of 35.7% (±8.2% S.E.) for all animals pooled. From the published results [31] we back-calculated a cumulative survival estimate of 49.8% for the ages of 15 months to 4 years, for comparative purposes. From Maniscalco's published results, and for sex ratios equivalent to those of the experimental treatment and control groups, we estimated cumulative survival rates of 43.5% and 52.3%, respectively, and of 47.4% for the combined groups. Our own estimate of 49.1% is very close to the overall value from Maniscalco, and actually above the value for a comparable sex ratio. This suggested a likely strong effect of the sex bias in our treatment group on group-specific and also overall survival rates. The mean survival rate was indeed slightly higher for females in our study group (Fig 1, Table 2). This is consistent with findings in many species of pinnipeds, including Steller sea lions [9,33], grey seals (Halichoerus grypus, [34]), Galapagos and northern fur seals (Arctocephalus galapagoensis and Callorhinus ursinus, [35]) and others. Resighting probability model ranking was also found to be influenced by sex and age, but not as time-specific covariates. This is consistent with other studies [31,33]. Juvenile males tend to disperse farther than females [13,14], which may explain in part why males were most often found to have a lower resighting probability than female groups (see Table 3). Several study animals were seen within the range of the eastern population post-release, adding to recent evidence suggesting that the distinction between eastern and western stocks may have to be revisited [16]. Since the CJS model cannot separate permanent emigration from mortality, it is possible that several males may have simply emigrated out of the main resight effort area, lowering both their estimated resighting probability and their apparent survival rates. Mean apparent survival probability differed between the experimental treatment and control groups (Table 2), but the magnitude of this effect was much smaller than the uncertainties. The top ranked survival model, which initially carried 64% of model weights, was also the most parsimonious model, based on sex as the main predictor. Addition of the experimental treatment factor in the second ranked model, which carried 26% of model weights, did not result in an improved fit. Since the treatment factor beta parameter confidence estimates spanned zero, we deemed the difference in AICc as most likely resulting from the numerical penalty associated with the addition of a variable to the model [32]. When such 'pretending variables' are removed from consideration, model weights should be re-computed [32]. The weights corrected in this way lead to a single model with ΔAICc < 5, and an evidence ratio of 10.4. Thus, the simplest model had more than ten times the support of the second model in the revised rankings, and models that include the experimental treatment are not supported by the analysis following our post-hoc evaluation. This judgment is supported by the finding that our overall survival estimate is slightly above a comparable estimate back-calculated from Maniscalco 2014 [31]. However, given the limited sample size, alternate models including effects of experimental treatments remain possible and effects may yet become apparent in larger sample sizes. Other considerations and conclusions A power analysis in program MARK returned an effective sample size of 211 animals. While we corrected for our small sample size by utilizing AICc, which by nature penalizes models with more complex structure much more harshly than AIC [29], our sample size substantially constrained our ability to detect minute differences in survival, as evidenced by the large overlapping confidence intervals associated with group mean survival differences. To date, we have not been able to collect evidence in support of negative effects of the two novel research techniques, temporary captivity and implantation of LHX tags, on the survival of endangered Steller sea lions. Animals in our study appear to be exhibiting mean survival rates similar to, or even higher than, rates reported from separate mark-resight studies in the region during the same period. However, given our low power, the absence of findings of negative effects of the new research approaches (temporary captivity combined with transmitter implantation) cannot be seen as proof of the absence of any effects. A precautionary approach to the continued application of these novel research techniques could involve the selective use of only one or the other technique for specific research projects, until larger sample sizes are reached. Fig 1. Model averaged cumulative survival for juvenile Steller sea lions (Eumetopias jubatus) participating in temporary captivity. Cumulative survival rates for animals aged 2 to 5, comparing survivorship between males (open) and females (closed), as well as between those held in temporary captivity and receiving LHX implants (TJ, circles) and those released immediately after the initial sampling event (FR, triangles). 95% confidence intervals for each term are included as capped lines. Results presented here include only the top 12 models for brevity. Best models with the most support were considered to be within 2 ΔAICc and are highlighted in bold. Models within 5 ΔAICc were also considered to have minor support. § Models ultimately excluded from the final results. The TJFR factor did not improve model fit, nor did it change the overall deviance in comparative models, and it was therefore removed from model selection. Table. Individual brand resighting summary for evaluating survival in juvenile Steller sea lions. Summary of resighting events on an annual basis for both temporarily captive, implanted (prefix TJ-) and free-ranging (prefix FR-) juvenile Steller sea lions. Each individual brand resight history was reduced to a binary encounter history for input into Program MARK for survival analysis. The first instance of a resight or null place holder (-) indicates the year that the animal was marked and released, with the resighting events summed in the left column ('Resights'). Only those resights that included a photograph to confirm a positive identification were included in this analysis.
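The back-calculated, sex-ratio-adjusted cumulative survival comparisons discussed above amount to multiplying annual survival rates across ages and weighting the sex-specific products by group composition. A minimal sketch with hypothetical annual rates (not the fitted values from this study):

```python
from math import prod

# Hypothetical annual age-specific survival rates (ages 2-4 shown).
annual_male = {2: 0.72, 3: 0.80, 4: 0.86}
annual_female = {2: 0.80, 3: 0.86, 4: 0.90}

def cumulative(rates):
    # Cumulative survival over the span is the product of the annual rates.
    return prod(rates.values())

# Weight sex-specific cumulative survival by the TJ group's 27M:8F sex ratio.
n_m, n_f = 27, 8
weighted = (n_m * cumulative(annual_male) + n_f * cumulative(annual_female)) / (n_m + n_f)
print(round(weighted, 3))  # about 0.524 for these illustrative rates
```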
Resights of these study individuals were gathered from various contributing institutions including the National Marine Mammal Laboratory, the Alaska Department of Fish & Game, and the Alaska SeaLife Center. (DOCX)
2016-05-12T22:15:10.714Z
2015-11-18T00:00:00.000
{ "year": 2015, "sha1": "1f23fcf47fc9ad0568ac16d1ded64a70a4934af8", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0141948&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1f23fcf47fc9ad0568ac16d1ded64a70a4934af8", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
98341847
pes2o/s2orc
v3-fos-license
Epifluorescence microscopic detection of photolithographically micropatterned aldehyde- and carboxy-terminated self-assembled monolayer An aldehyde (CHO)-terminated self-assembled monolayer (SAM) was prepared on silicon substrates covered with native oxide (SiO₂/Si) by chemical vapor deposition of triethoxysilylundecanal (TESUD). The CHO-terminated SAM was subsequently converted to the COOH-terminated SAM by photooxidation using 172-nm vacuum UV light through photomasks with square windows of 10 × 10 and 2.5 × 2.5 μm². The chemical reactivity of each microspot composed of the COOH-terminated SAM was detected with high resolution by epifluorescence microscopy using Qdot® 655. These results demonstrated that the microspots (spot size: 2.5 × 2.5 μm²) composed of active COOH-terminated SAM were successfully fabricated on the SiO₂/Si substrate at a high density of 8 × 10⁴ spots/cm², which was four times higher than that of previously reported microspots. Introduction Microarray technology has become a crucial tool for large-scale and high-throughput biological science and technology, facilitating fast, easy, and simultaneous detection of thousands of addressable elements within a single experiment under identical conditions [1]. In recent years, microarrays of DNA, proteins, and cells are becoming indispensable for studies on genomics, proteomics, and cellomics. Spatially defined immobilization of DNA, proteins, and cells [2] on the surfaces of various microarray platforms is an important technology. Many strategies for constructing protein microarrays on a wide range of substrates have been proposed in the past decades. The method for immobilizing proteins on surfaces will determine the functional properties of protein microarrays. There are two approaches for developing protein microarrays: tailoring the surfaces to functionalize natural target proteins, and modifying target proteins with tags such as histidine, streptavidin, and biotin in order to effectively bind them to surfaces [3]. The former approach is considered to be more practical because it can tailor microarray surfaces to be used for wide varieties of proteins without compromising their functional properties. Almost all types of proteins have reactive functional groups such as carboxy (COOH), amino (NH₂), thiol (SH), and hydroxy (OH) groups in their side chains. These functional side groups can be used as a starting point for immobilizing proteins on tailored microarray surfaces. One of the most promising approaches to regulate site-selective attachment of proteins is the employment of patterned self-assembled monolayers (SAMs) terminated with chemically reactive groups that bind to these functional side groups in proteins. Various constructing strategies and a wide range of substrates, such as glass, gold, polymer and other special formats, have been examined for constructing protein microarrays [4]. In a previous study [5], we reported the fabrication of well-defined microspots (5 × 5 μm²) composed of the COOH-terminated SAM by photooxidation of triethoxysilylundecanal (TESUD, CHO-[CH₂]₁₀-Si[OC₂H₅]₃) SAM on silicon substrates by masked 172-nm vacuum UV (VUV) light. The chemical reactivity was confirmed with a novel method using epifluorescence microscopy of the Qdot® 655 streptavidin conjugate tagged to biotin hydrazide bound to the activated COOH groups.
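As a rough plausibility check on the spot density quoted above (our illustration, not part of the paper), a uniform square array's areal density is fixed by its pitch, and halving the pitch quadruples the density:

```python
import math

density = 8e4                        # spots/cm^2, as quoted in the abstract
pitch_um = 1e4 / math.sqrt(density)  # implied pitch in micrometers (1 cm = 1e4 um)
print(round(pitch_um, 1))            # about 35.4 um between spot centers

halved = (1e4 / (pitch_um / 2)) ** 2
print(halved / density)              # -> 4.0: half the pitch, four times the density
```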
In this study, we report the fabrication of microspots (2.5 × 2.5 μm²) composed of the COOH-terminated SAM; these microspots were smaller than those previously reported using masked photooxidation of TESUD SAM [5]. Moreover, it was demonstrated that these microspots could be detected with high resolution by epifluorescence microscopic detection using Qdot® 655. Preparation of CHO-terminated SAM The CHO-terminated SAM was prepared on silicon substrates covered with native oxide (SiO₂/Si) by chemical vapor deposition (CVD) of TESUD (Gelest Inc.), as previously described [3]. In brief, Si(100) wafers (1.5 × 1.5 cm²) were exposed for 30 min at 10³ Pa to 172-nm VUV light ( mW/cm²) radiated from an excimer lamp (UER20-172V; Ushio Inc.); the distance between the lamp window and the sample surface was 20 mm. The VUV/ozone-cleaned Si wafers were placed with 0.1 cm³ of TESUD diluted with 0.7 cm³ of absolute toluene in a Teflon container having a volume of 65 cm³, in a dry N₂ atmosphere with less than 10% relative humidity. The container was sealed and heated in an oven maintained at 130°C for 6 h. Each sample exposed to TESUD vapors was sonicated for 20 min successively in absolute toluene, absolute hexane, and acetone. Finally, the sample was rinsed with deionized water and blow-dried with an N₂ gas stream. Photooxidation of CHO-terminated SAM Next, the CHO-terminated SAM was irradiated for 25 min at 10⁵ Pa with the masked 172-nm VUV light, the distance between the lamp window and the sample surface being 90 mm [5]. As shown in Figure 1, the CHO-terminated SAM on SiO₂/Si substrates was site-selectively photooxidized by activated oxygen species (O(¹D), O(³P)) through photomasks with square windows of 10 × 10 and 2.5 × 2.5 μm². Chemical reactivity of 172-nm VUV irradiated microspots and unirradiated regions To investigate the chemical reactivity in the 172-nm VUV unirradiated regions composed of the CHO-terminated SAM (Figure 2A), the micropatterned sample was immersed overnight at 4°C in 1 ml of 50 mM phosphate buffer (pH 5.8) containing g/ml biotin hydrazide. Excess physisorbed biotin hydrazide was removed by washing with 50 mM phosphate buffer (pH 7.4) containing 1 M NaCl and 0.05% Tween 20. To investigate the chemical reactivity in the 172-nm VUV irradiated microspots (Figure 2B), another micropatterned sample was immersed overnight in 1 ml of 50 mM phosphate buffer (pH 7.4) containing 1 g/ml bovine serum albumin (BSA) at 4°C to block the CHO-terminated regions. Next, the sample was washed with 50 mM phosphate buffer (pH 7.4) containing 1 M NaCl and 0.05% Tween 20. The sample was then incubated at room temperature for 2 h with 25 mM N-hydroxysuccinimide (NHS) and 20 mM 1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide hydrochloride (EDC) in ethanol to activate the 172-nm VUV irradiated microspots. Immobilization of biotin hydrazide and subsequent labeling with the Qdot® 655 streptavidin conjugate were conducted in a manner similar to that performed in the 172-nm VUV unirradiated regions. Results and Discussion As shown in previous studies [5,6], the hydrophobic TESUD SAM surface on a silicon substrate became hydrophilic through the 172-nm VUV light irradiation, and the water-contact angle decreased from approximately 80° to approximately 33° during irradiation for the first 25 min, while the thickness remained unchanged at approximately 1.1 nm, suggesting that no alkyl chains reacted during the VUV irradiation.
To investigate the chemical reactivity in the 172-nm VUV irradiated and unirradiated regions, the samples photolithographically micropatterned using photomasks with square windows of 10 × 10 and 2.5 × 2.5 μm² were treated with biotin hydrazide, which was subsequently tagged with the Qdot® 655 streptavidin conjugate, and fluorescence from Qdot® 655 was detected with epifluorescence microscopy. As shown in Figure 3A, Qdot® 655 (red fluorescence) was selectively immobilized in the unirradiated regions but did not adsorb on the irradiated microspots (no fluorescence). Because NH₂ groups in biotin hydrazide react with CHO groups in TESUD SAM to form a Schiff base, we concluded that the area-selective adsorption of biotin hydrazide in the unirradiated regions is based on the chemical reaction between NH₂ and CHO groups. Moreover, the fact that Qdot® 655 did not adsorb on the irradiated microspots indicated the area-selective photooxidation of CHO groups in TESUD SAM. If the photooxidation conditions are precisely controlled for conversion of CHO groups to COOH groups, these irradiated microspots are expected to again become reactive to biotin hydrazide upon activating the COOH groups with NHS/EDC treatment. To confirm this, the CHO groups in the unirradiated regions were blocked by BSA adsorption, followed by activation of the irradiated microspots of 10 × 10 and 2.5 × 2.5 μm² with NHS/EDC treatment. As shown in Figure 3B, well-resolved fluorescence from Qdot® 655 (see insets in Figure 3B) was detected only in the irradiated microspots, indicating that these microspots became reactive to biotin hydrazide, while no fluorescence was detected from the unirradiated regions, where the reactivity of the CHO groups was blocked by BSA adsorption. These results show that the CHO groups in the 172-nm VUV-irradiated microspots were photochemically converted to COOH groups, which were further converted to active N-hydroxysuccinimidyl esters by the activation treatment with NHS/EDC. Therefore, stable amide linkages were formed between the activated COOH groups and biotin hydrazide.
2019-04-06T13:01:35.074Z
2013-03-01T00:00:00.000
{ "year": 2013, "sha1": "b91b30235461c645019fc4e1a36c6cc039a3b0d0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/417/1/012060", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "e582db5a944a1ca5cd0034106d0eee34f4fb72bd", "s2fieldsofstudy": [ "Materials Science", "Chemistry" ], "extfieldsofstudy": [ "Physics", "Chemistry" ] }
56601825
pes2o/s2orc
v3-fos-license
Multiple cerebral venous sinus thrombosis: Case report Cerebral venous thrombosis (CVT) is an uncommon clinical problem and can be characterised by the nonspecific and common symptoms of headache and vomiting due to intracranial hypertension. Alternate diagnoses are entertained especially when procoagulant factors are not elicited. We present a 35 year old female, parity 2+0, gravida 3 at 8 weeks gestation, who presented with headache, vomiting, photophobia and neck stiffness. The diagnostic workup and radiological findings confirmed multiple venous sinus thromboses. The case is discussed in the light of the diagnostic challenge, the treatment pathways and how an understanding of the basic sciences explains its clinical presentation. Introduction Cerebral venous thrombosis is an uncommon disease that presents in a non-specific pattern. Ninety percent of patients report a headache, 50% have some type of neurological deficit and convulsions occur in 40% of cases (1). This nonspecific presentation leads to misdiagnosis, delayed diagnosis or medical misadventures. The average delay from onset of symptoms to diagnosis is 7 days (2). The patient profile is varied, the defining clinical characteristics are few and the aetiological relationships diverse. Documented series indicate association of sinus thrombosis with the presence of prothrombotic factors (3), genetic factors including protein C and S deficiency, pregnancy, nephrotic syndrome, as well as dehydration and cancer (4). A clinician's high index of suspicion remains the vanguard of making an accurate and timely diagnosis. We report a case of a young female who presented with non-specific symptoms and signs and was managed for multiple cerebral venous sinus thrombosis. Case report A.N., a 35 year old female patient, at parity 2+0, gravida 3 at 8 weeks gestation, presented to our hospital with a 6 day history of a severe global throbbing headache of sudden onset that was associated with vomiting, photophobia and neck stiffness. She had no antecedent trauma, fever nor any history of hypertension. Her obstetric and gynecological history was unremarkable, without any use of oral contraceptives. She was a married housewife who neither smoked nor consumed any alcohol. On general examination, A.N. was restless but had no conjunctival pallor, scleral icterus, oral thrush or lymphadenopathy. Her vital signs were normal (brachial blood pressure of 106/65 mmHg, temperature 37°C) except for a tachycardia of 106 beats/min (good volume, regular, non-collapsing pulse). She was fully conscious, with normal higher functions and pupils that were 3 mm in diameter and responsive to light, both directly and consensually. Positive findings included photophobia, a left sided abducent nerve palsy and bilateral papilloedema at fundoscopy. Her neck was stiff, with negative Kernig's and Brudzinski's signs, while the spine, motor, sensory and cerebellar examinations were unremarkable. The systemic findings were non-contributory. An impression of a subarachnoid hemorrhage, with differential diagnoses of meningitis and an intracranial space occupying lesion, was entertained. Laboratory investigations included a haemogram (Hb of 13.7 g/dL, WBCs of 8.2 × 10⁹/L with a neutrophil count of 70%) and electrolytes (K⁺ 2.8 mmol/L). The rest of the biochemistry and the coagulation profile were unremarkable. The opening pressures were high and the CSF bloody at lumbar puncture.
An MRI was then performed that revealed hyperintense signals in the superior sagittal sinus (SSS), straight, right transverse and sigmoid sinuses, changes consistent with thrombosis (Figure 1). The thrombi were multiple, with some being contiguous and others discrete. A thrombophilia screen ordered after this showed a normal profile (Table 1). Her subsequent management entailed anticoagulation with high dose enoxaparin and treatment with clonazepam, analgesics and potassium supplementation. The patient had an uneventful recovery and was discharged home on the 7th day. She will complete one year of anticoagulation. Discussion Most of A.N.'s major venous sinuses were involved, with the exception of the inferior sagittal. The superior sagittal sinus usually drains into the confluence of sinuses, then into the straight and finally the sigmoid sinus. The combination of both continuous and discrete thrombi points to either a systemic aetiology or a combination of both local and systemic causes. Some anatomic features may explain some of the local factors contributing to thrombosis and propagation. Sinuses are valveless, endothelium-lined channels formed between the two layers of the dura. Endothelial disruptions in these low pressure channels (contributing to stasis) may initiate or propagate thrombosis (5). The cerebral veins join the sinuses obliquely, generally against the blood stream, and create regions of turbulent flow. These features may point to prothrombotic zones and explain thrombus propagation in sinuses in proximity. Hence a systemic hypercoagulable state in combination with local features may account for the varied pattern of thromboses. This patient had no procoagulant factor other than pregnancy. High circulating estrogen levels in pregnancy are accepted to be the cause of this hypercoagulable state. In general, the aetiological relationships of cerebral sinus thrombosis are varied. About 85% of patients are noted to have a procoagulant factor (3), which may include protein C and S genetic deficiencies, pregnancy, nephrotic syndrome, as well as dehydration and cancer (4). Her age and physiological status did not fall into the other thrombophilia risk patterns. Close to 90% of patients report a headache, with 50% having some type of neurological deficit, while convulsions occur in 40% of cases (1). Headache was present in our patient. This symptom, in association with vomiting, is a feature of intracranial hypertension. The final common pathway of normal cerebrospinal fluid flow is reabsorption by the arachnoid granulations and drainage into the venous sinuses (6). Thrombosis within the sinuses blocks the CSF flow pathways, causing intracranial hypertension without ventriculomegaly (Figure 1). Cerebral vein thrombosis leads to the local effects of venous hypertension, with associated ischemia and disruption of the blood brain barrier that result in both cytotoxic and vasogenic edema (Figure 2), which contribute further to the development of intracranial hypertension. These distinct but concurrent mechanisms are central to the pathophysiology. Large unilateral lesions may cause compression of the diencephalon and brain stem, leading to coma or death. Figure 1: Description of the course and consequence of venous thrombosis. The clinical features in our patient varied, with limited value in terms of localization of the lesion. She had features of meningeal irritation and abducent nerve palsy. Since the sinus drainage areas are wide and varied, the neurological signs seen here and described by other authors are not surprising.
The diagnosis of CVT is usually confirmed on laboratory investigations and radiology. The lumbar puncture was performed and revealed bloody CSF. This is a sensitive but nonspecific marker for cerebral venous thrombosis. The fact that our patient was pregnant limited our radiological investigation to MRI, since a CT scan poses unacceptable radiation risks to the fetus. In routine practice the sensitivity of CT scanning is limited in the initial three days, but its utility lies in the ability to rapidly rule out other important differential diagnoses. MRI approaches a sensitivity of up to 98% in some series, with the current modality of choice being single-slice phase-contrast angiography (SSPCA), with a sensitivity of 100% (7). Treatment of CVT is aimed at stabilization, prevention of clot propagation and the prevention or reversal of cerebral herniation. The management of this patient included anticoagulation for the purpose of preventing propagation. This role of anticoagulants is controversial, the basic concern being that 40% of patients present with a haemorrhagic infarct in situ. Literature available from three trials, by Einhaupl et al. (8), de Bruijn et al. (9) and Nagaraja et al. (10), provides no clear answers. In all of these, no new or increased cerebral haemorrhage occurred. Two cases of pulmonary embolism were reported in the placebo groups of two of these trials. The duration of anticoagulation is guided by the findings that there is a 2% rethrombosis rate and a 4% extracranial thrombotic event rate within a year (1). This formed the basis for one year of anticoagulation in our patient. The clinical course of the patient was benign, with self-limiting disease, without need for surgery to achieve decompression, use of osmotic diuretics (11), or endovascular placement of thrombolytics (12). The latter is considered experimental. Although the patient had a lumbar puncture for diagnostic purposes, it could also be therapeutic for the treatment of intracranial hypertension. If employed, the punctures are done serially in combination with acetazolamide (13). CSF diversionary procedures are indicated after two weeks of failed conservative therapy with serial lumbar punctures. The options include a lumbar peritoneal shunt or fenestration of the optic nerve sheath (14). In conclusion, this patient's non-specific clinical picture belies the challenge in the diagnosis and management of this particular condition. Although it is a rare condition, a clinician's high index of suspicion remains the first step towards a correct and timely diagnosis. The diagnosis of cerebral venous sinus thrombosis should be considered in a young or middle-aged patient presenting with an unusual headache or with stroke-like symptoms without the usual vascular risk factors. The worsening of the local effects of cerebral venous thrombosis is preventable by early anticoagulation. Figure 1: T1 MRI images of the patient showing hyperintense signals at multiple sites (block arrows). Figure 2: Effects of major sinus thrombosis.
2018-10-17T05:49:42.115Z
2009-09-24T00:00:00.000
{ "year": 2009, "sha1": "a918711e863e783ba2323d1cb872dd5d187a2b98", "oa_license": "CCBY", "oa_url": "https://www.ajol.info/index.php/aas/article/download/46242/32641", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a918711e863e783ba2323d1cb872dd5d187a2b98", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
85340570
pes2o/s2orc
v3-fos-license
Redescription and resolution of some Neotropical species of jumping spiders described by Caporiacco and description of a new species (Araneae: Salticidae) Type specimens of some of Caporiacco's Neotropical species are revised. The taxonomy of his species from French Guiana, whose type specimens are lost, is considered. The types of Corythalia hadzji Caporiacco, 1947, Corythalia luctuosa Caporiacco, 1954, Hypaeus barromachadoi Caporiacco, 1947 and Naubolus melloleitaoi Caporiacco, 1947 are redescribed. The following new synonymies are established: Freya guianensis Caporiacco, 1947 = Chira spinipes (Taczanowski, 1871) syn. nov.; Hypaeus bivittatus Caporiacco, 1947 = Hypaeus barromachadoi Caporiacco, 1947 syn. nov. New combinations are: Agelista petrusewiczi Caporiacco, 1947 = Noegus petrusewiczi (Caporiacco, 1947) comb. nov.; Albionella chickeringi Caporiacco, 1954 = Mago chickeringi (Caporiacco, 1954) comb. nov.; Asaracus pauciaculeis Caporiacco, 1947 = Mago pauciaculeis (Caporiacco, 1947) comb. nov.; Cerionesta leucomystax Caporiacco, 1947 = Sassacus leucomystax (Caporiacco, 1947) comb. nov.; Lapsias guianensis Caporiacco, 1947 = Cobanus guianensis (Caporiacco, 1947) comb. nov.; Phiale modestissima Caporiacco, 1947 = Asaracus modestissimus (Caporiacco, 1947) comb. nov. The species Noegus lodovicoi sp. nov. is also described, based on an ex-syntype of Agelista petrusewiczi. The following nominal species are considered species inquirendae: Albionella guianensis Caporiacco, 1954, Alcmena trifasciata Caporiacco, 1954, Amycus effeminatus Caporiacco, 1954, Capidava variegata Caporiacco, 1954, Corythalia variegata Caporiacco, 1954, Dendryphantes coccineocinctus Caporiacco, 1954, Dendryphantes gertschi Caporiacco, 1947, Dendryphantes spinosissimus Caporiacco, 1954, Ilargus modestus Caporiacco, 1947, Lapsias melanopygus Caporiacco, 1947 = Frigga melanopygus (Caporiacco, 1947) comb. nov., Lurio splendidissimus Caporiacco, 1954, Nagaina modesta Caporiacco, 1954, Amycus patellaris (Caporiacco, 1954), Phidippus triangulifer Caporiacco, 1954 and Tutelina iridea Caporiacco, 1954. Description. Male (holotype of H. barromachadoi). Total length 6.30. Carapace dark brown, 2.90 long, 2.10 wide and 1.70 high. Ocular quadrangle 1.75 long, cephalic region light brown. Anterior eye row 1.95 wide and posterior 1.85 wide. Diameter of AME 0.75. Clypeus 0.25 high. Chelicerae dark brown, with low prodorsal humps (Fig. 1), four promarginal and three retromarginal teeth. Palps as in Figs 2-3, light brown, with a short embolus and a bifid RTA. Endites, labium and sternum light brown. Legs I dark brown, II-IV light brown. Abdomen dark brown, variegated, with a pair of lateral light brown stripes on the posterior two thirds. The other specimen has a yellow abdomen with a pair of paramedian longitudinal dark brown stripes dorsally. Mago chickeringi (Caporiacco, 1954) Remarks. Although the type specimen is lost, the illustration of the male palp by the author can, as an exception, allow the species to be identified in future works on the local fauna. The species seems to belong in the genus Mago O.P.-Cambridge, 1882 and is similar to Mago procax Simon, 1900 in having a slender, short embolus (see GALIANO 1963a, pl. 27, fig. 7). Mago chickeringi, though, can be distinguished from all the revised species of the genus by uniquely having a thin, dorsally curved RTA (see CAPORIACCO 1954, fig. 50a).
Male. Unknown. (CAPORIACCO 1947, 1948), 25 from French Guiana (CAPORIACCO 1954) and 18 from Venezuela (CAPORIACCO 1955). All his descriptions, especially his drawings, were very poor in details and until recently no modern taxonomist had had the opportunity to examine the type specimens of many of his species; therefore most of them have remained unrecognizable. RUIZ & BRESCOVIT (2005) examined the type specimens of some of his species from Venezuela and established several taxonomic changes. Some of Caporiacco's species from Guyana were also revised by RUIZ et al. (2007), but most of his taxa from that country are revised in the present paper. The single species from Guatemala/Mexico and the 25 from French Guiana are the most taxonomically problematic, due to his bad illustrations and the fact that almost all of the type specimens are lost. According to GALIANO (1968b), who visited the Muséum National d'Histoire Naturelle (MNHN, Paris) and redescribed all Neotropical species described by Eugène Simon, the type specimens of the species described from French Guiana by CAPORIACCO (1954), which should be deposited in the collection of the MNHN, were sent to Caporiacco and have never been sent back to Paris. BERDONDINI & WHITMAN (2002) published a list of all the types deposited in the collection of the Museo Zoologico de "La Specola". The only Caporiacco species from French Guiana with specimens in that collection are Albionella guianensis Caporiacco, 1954, Alcmena trifasciata Caporiacco, 1954, Chira portai Caporiacco, 1954 [= Frigga kessleri (Taczanowski, 1872)], Corythalia luctuosa Caporiacco, 1954 and Mago budoninus Caporiacco, 1954 [= Hypaeus taczanowskii (Mello-Leitão, 1948)]. Among the 25 species described in that paper (CAPORIACCO 1954), only these five were described based on syntypes, while the other 20 were based on single specimens. This makes us wonder if Caporiacco retained only duplicates and indeed sent the rest of the specimens back to Paris. The fact is that those types are lost. After the two previous papers on the taxonomy of those problematical species (RUIZ & BRESCOVIT 2005; RUIZ et al. 2007), the present study is a third attempt to clarify the identity of Caporiacco's Neotropical species. MATERIAL AND METHODS The material examined is deposited in the Museo Zoologico de "La Specola", Firenze. The measurements are given in millimeters. The abbreviations used throughout the text are: (RTA) retrolateral tibial apophysis, (AME) anterior median eye, (MNHN) Muséum National d'Histoire Naturelle, (MZLS) Museo Zoologico de "La Specola". Amycinae Simon, 1901 Hypaeus barromachadoi Caporiacco, 1947 Figs 1-3 Hypaeus barromachadoi Caporiacco, 1947: 30 ( species. Although the lectotype, here designated, does not have the standard dentition of Noegus (two small promarginal teeth), the species is tentatively transferred to this genus. The lectotype designated (Figs 7-8) is the specimen that fits the original description, with four promarginal and three retromarginal teeth. Because the taxonomy of the group is still in need of revision, we decline to present a diagnosis for the species. Description. Female. Total length: 5.50. Carapace yellow. Chelicera yellow, with three teeth on both promargin and retromargin. Palp and legs yellow. Abdomen and spinnerets pale. Epigynum with a rounded atrium placed far from the posterior border; internally with very long copulation ducts and a pair of long glandular projections, directed forward, arising from their initial part.
Remarks. The specimen is poorly preserved but, despite not having the standard dentition of Noegus (two promarginal teeth), its epigynum is very similar to that of Noegus trilineatus Mello-Leitão, 1940 and it seems to be correctly placed in this genus. Amycoida Maddison & Hedin, 2003 incertae sedis Asaracus modestissimus (Caporiacco, 1947) Caporiacco, 1948: 709; Berdondini & Whitman, 2002: 147; Platnick, 2008. Description. Female (holotype). Total length: 9.50. Body uniformly light brown, except for a pair of light brown marks and a longitudinal median light brown short stripe on the posterior third of the yellow abdomen. Carapace 3.40 long, 2.40 wide and 1.70 high. Ocular quadrangle 1.90 long. Anterior eye row 2.05 wide and posterior 1.90 wide. Chelicerae stout, with one retromarginal and two promarginal teeth. Epigynum (Figs 12-13) with a small posterior pocket and a small anterior atrium joining the copulatory openings; initial part of copulatory ducts very membranous and wide; sclerotized narrow ducts coil from the posterior part of the membranous ducts toward the spermathecae, which are medially placed. Male. Unknown. Remarks. The holotype, in the penultimate instar, allows the identification of the species. Its position in Naubolus is doubtful, since the boundaries of genera in Dendryphantinae are in need of revision. Sassacus leucomystax (Caporiacco, 1947) Description. Male (lectotype). Total length: 2.82. Carapace dark brown, 1.42 long, 1.15 wide, 0.82 high, with dorsolateral longitudinal stripes of white scales joining the eyes and extending to the posterior border of the carapace. Ocular quadrangle 0.72 long. Anterior eye row 0.92 wide, posterior 1.02 wide. Chelicera dark brown, with two teeth on the promargin, one distally placed on the retromargin; chelicerae slightly divergent. Palp dark brown, with a curved femur; small tuft of white scales on the distal dorsal palpal femur, a sinuous RTA, embolic haematodocha hidden behind the tegulum and a well developed embolus. Legs 1423, dark brown; patellae and tarsi lighter. Abdomen light brown with a transverse stripe of white scales on the anterior border; dorsally with a chevron of white scales in the middle of the abdomen and two others on the posterior half. Female. Unknown. Corythalia hadzji Note. The new synonymy is established based on comparisons between Caporiacco's type and illustrations of Taczanowski's species by GALIANO (1968a, figs 10-14, 17). This species does not belong with the rest of the species in Chira and its inclusion in this genus is considered temporary. Unrecognizable species The following nominal species are here considered nomina dubia, either because their types are too juvenile or because they are lost and the illustrations provided in the literature do not allow their recognition. Albionella guianensis Caporiacco, 1954: 151, fig. 49
2019-03-22T16:06:25.918Z
2008-09-01T00:00:00.000
{ "year": 2008, "sha1": "ed912becbd4e5cfa9541c93bc3eef6d3533d4c0c", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/rbzool/a/VC86VXkQT5d54bhSBVRycCM/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ed912becbd4e5cfa9541c93bc3eef6d3533d4c0c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
91074887
pes2o/s2orc
v3-fos-license
Physicochemical, Phytochemical and Pharmacognostic Evaluation of a Halophytic Plant, Trianthema portulacastrum L The practice of traditional medicine is based on hundreds of years of belief and observations and analysis, which help in the development of modern medicine. Today interest in herbal drugs is increasing primarily based upon the idea that herbal medicines are safe, inexpensive and have less adverse effects. Each plant drug possesses unique properties in terms of its botany, chemical constituents and therapeutic potency. In folkloric medicine, plants are used for curing various diseases mainly based on popular belief passed on from generation to generation. For e.g. Costus pictus, known as ‘insulin plant’, a member of Costaceae family is used as a munching dietary supplement for International Journal of Current Microbiology and Applied Sciences ISSN: 2319-7706 Volume 7 Number 05 (2018) Journal homepage: http://www.ijcmas.com Introduction The practice of traditional medicine is based on hundreds of years of belief and observations and analysis, which help in the development of modern medicine. Today interest in herbal drugs is increasing primarily based upon the idea that herbal medicines are safe, inexpensive and have less adverse effects. Each plant drug possesses unique properties in terms of its botany, chemical constituents and therapeutic potency. In folkloric medicine, plants are used for curing various diseases mainly based on popular belief passed on from generation to generation. For e.g. Costus pictus, known as 'insulin plant', a member of Costaceae family is used as a munching dietary supplement for ISSN: 2319-7706 Volume 7 Number 05 (2018) Journal homepage: http://www.ijcmas.com Trianthema portulacastrum L. belongs to the family Aizoaceae. It is an important medicinal halophyte, traditionally used to cure many diseases and disorders. It order to ensure authenticity and maintain the therapeutic efficacy of this plant, evaluation of certain quality control parameters for the standardization of this plant was attempted. To achieve this, physicochemical, phytochemical and pharmacognostic studies of this plant was done. For phytochemical analysis, phenols, flavonoids, cardiac glycosides, tannins, steroids, saponins, triterpenes, coumarins, phlobatanins, etc. were evaluated. For physicochemical analysis, loss on drying, total ash, water soluble, acid insoluble, sulphated, nitrated and carbonated ash were determined. The extractive values in different polar and non-polar solvents were measured. Finally macroscopic, microscopic and powder study of leaf and stem was done. All the standard methods were followed for different estimations. The crude powder of T. portulacastrum was rich in coumarins; while its solvent extracts toluene and ethylacetate were rich in steroids. In physicochemical analysis loss on drying was 9.5%. The ash values ranged from 0.83% to 11.83%. The extractive values of organic solvents ranged from 0.52% to 8.64% and water soluble extractive value was 17.74%. Maximum extractive value was in methanol and water indicating presence of more polar compounds than nonpolar compounds. The macroscopic, microscopic and powder characteristics of leaf and stem was measured. The parameters evaluated in this study will safeguard the authenticity and efficacy of crude drug and also distinguish the drug from its adulterants. 
Enicostema littorale, another herb, of the family Gentianaceae, is used for its hypoglycemic activity in many parts of India such as Gujarat and Maharashtra (Maroo et al., 2003). The main shortcoming of traditional medicine is the absence of stringent quality control parameters; in other words, there are no standardization parameters, so traditional drugs are prone to adulteration and substitution, which casts doubt on their efficacy. Their chance of being adulterated is directly proportional to their efficacy and availability. It is therefore important to study the pharmacognostic characters of each medicinal plant so as to distinguish the unadulterated plant sample.

Among the medicinal plants, halophytic plants are very significant. Halophytic vegetation dominates tidal marsh ecosystems. These plants develop specific anatomical, morphological and physiological characteristics enabling them to perform their vital functions in the presence of large concentrations of harmful salts. The ability of some halophytes to resist high salt conditions rests on two main mechanisms: they either exclude the salt from the leaves (salt exclusion) or compartmentalize it. Most medicinal halophytes are herbs and forbs, are perennial, and their biological types are therophyte and chamaephyte (Priyashree et al., 2010).

Trianthema portulacastrum L. belongs to the family Aizoaceae and is commonly known as noxious weed, horse purslane, hogweed, itcit or santha. It is a prostrate, glabrous, succulent herb found almost throughout India in cultivated land and wastelands. The plant is bitter, alexiteric, analgesic, stomachic and laxative. It has been reported in traditional use as an anthelmintic and vermifuge and against rheumatism (Shastri, 1952), and serves as an alterative cure for bronchitis, heart disease, anaemia, inflammation, piles and ascites. The root applied to the eye cures corneal ulcers, itching, dimness of sight and night blindness (Kirtikar and Basu, 1933). The plant is also used as a vegetable in various parts of the world owing to its high nutritional value. Two forms of this plant are reported: a red-coloured form in which the stem, leaf margins and flowers are red, and a green-coloured form with a green stem and white flowers. The leaves possess diuretic properties (Balamurugan et al., 2009). The plant shows hepatoprotective (Kumar et al., 2004) and antioxidant activity (Sunder et al., 2010).

In the present study, an attempt has been made to lay down standardization parameters for Trianthema portulacastrum leaf and stem. Hence, the objectives of the study were to evaluate the organoleptic features and to carry out macroscopic and microscopic evaluation of T. portulacastrum leaf and stem. The whole plant dry powder was evaluated by phytochemical, physicochemical and fluorescence analysis.

Plant collection

The halophytic plant Trianthema portulacastrum L. was collected in August 2017 from Porbandar, Gujarat, India. The plant was washed thoroughly with tap water, shade dried, homogenized to a fine powder and stored in a closed container for further studies.

Macroscopic study

The macroscopic studies were carried out using the organoleptic evaluation method. The arrangement, size, shape, base, texture, margin, apex, venation, colour, odour and taste of the leaves and stem were observed.
Macroscopic and microscopic characters were studied as described in a standard quality control method (Khandelwal, 2008). Photographs at different magnifications were taken using a digital camera.

Microscopic study

Microscopic study was carried out by preparing thin sections of the stem and leaf. The thin sections were washed with water, stained with safranin and fast green, and mounted in glycerine for observation and to confirm lignification (10x, 40x) (Tyler et al., 1977).

Powder microscopy

The powder microscopy of the whole plant powder was studied using a standard procedure by capturing images of different fragments of tissues, and the diagnostic characteristic features were recorded (Tyler et al., 1977).

Physicochemical analysis

The physicochemical parameters, namely loss on drying, total ash, acid-insoluble ash, water-soluble ash, sulphated ash and extractive values, were determined as per WHO guidelines (WHO, 1998). The solvents used were petroleum ether (PE), toluene (TO), ethyl acetate (EA), methanol (ME) and water (AQ). The details of the procedure followed are as described earlier (Pande and Chanda, 2017).
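All of these gravimetric determinations reduce to the same arithmetic: the mass of a residue (dried matter, ash, or evaporated extract) expressed as a percentage of the starting sample mass, with extractive values additionally scaled for the aliquot evaporated. A minimal Python sketch of these calculations follows; the function names and the 5 g / 100 mL / 25 mL maceration figures are illustrative assumptions in the spirit of the WHO (1998) procedure, and the example masses are back-calculated from the percentages reported in this study.

```python
def loss_on_drying(initial_g: float, dried_g: float) -> float:
    """Percent mass lost on drying, relative to the initial sample mass."""
    return (initial_g - dried_g) / initial_g * 100.0

def ash_value(ash_g: float, sample_g: float) -> float:
    """Percent ash (total, acid-insoluble, water-soluble, etc.)
    relative to the air-dried sample mass."""
    return ash_g / sample_g * 100.0

def extractive_value(residue_g: float, sample_g: float = 5.0,
                     macerate_ml: float = 100.0, aliquot_ml: float = 25.0) -> float:
    """Percent solvent-soluble extractive: the residue from an evaporated
    aliquot is scaled up to the whole macerate, then expressed against the
    sample mass. The 5 g / 100 mL / 25 mL defaults are illustrative."""
    whole_residue_g = residue_g * (macerate_ml / aliquot_ml)
    return whole_residue_g / sample_g * 100.0

# Example masses back-calculated from the values reported in this study:
print(loss_on_drying(10.0, 9.05))   # ~9.5, the reported loss on drying
print(ash_value(1.183, 10.0))       # 11.83, the reported total ash
print(extractive_value(0.108))      # 8.64, the reported methanol extractive
```

Keeping the aliquot scaling inside the extractive-value function avoids the common error of reporting the aliquot residue directly as a percentage of the sample.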
Phytochemical analysis

The qualitative phytochemical analysis of the crude whole plant powder and of its different solvent extracts was carried out to identify the different phytoconstituents (Harborne, 1998). The phytoconstituents analysed were alkaloids, flavonoids, phenols, saponins, tannins, cardiac glycosides, steroids, phlobatannins, triterpenes, anthocyanins, etc. The presence of a specific phytochemical is indicated with a (+) sign and its absence with a (-) sign. The procedures followed for the different phytochemical tests are given in Table 1. For example, for alkaloids, the crude powder and solvent extracts were added to 2N HCl and the mixture was filtered; the filtrate was treated with a few drops of (1) Dragendorff's reagent, (2) Mayer's reagent or (3) Wagner's reagent, formation of an orange precipitate (Dragendorff's) or a cream precipitate (Mayer's) indicating the presence of alkaloids.

Fluorescence analysis

Fluorescence study of the plant powder was performed as per Chase and Pratt (1949). A small quantity of the powder was placed on a grease-free, clean microscope slide, 1-2 drops of freshly prepared reagent solution were added and mixed by gently tilting the slide, and the slide was left for a few minutes. The slide was then placed inside a UV chamber and observed in visible light and under short (254 nm) and long (365 nm) ultraviolet radiation. The colours observed on application of the different reagents under the different radiations were recorded.

Results and Discussion

Organoleptic and macroscopic characteristics of Trianthema portulacastrum L.

T. portulacastrum is a prostrate, sub-succulent herb and facultative halophyte. The organoleptic and macroscopic characteristics of the plant are given in Table 2 and Figure 1.

Leaves

The leaf was simple and green in colour; the phyllotaxy was obliquely opposite, the shape obovate, the margin entire, the apex apiculate, the base asymmetrical, the venation reticulate, and the petiole long. The outer surface was smooth and fleshy. The odour was characteristic and the taste bitter. The average leaf was 5-6 cm long and 2-4 cm wide (Fig. 1a and Table 2).

Stem

The stem was light pink in colour, branched, woody and prostrate, with a glabrous outer surface. The average stem was 10 cm long and 0.2-0.5 cm thick (Fig. 1b and Table 2).

Microscopic characteristics

Petiole

The transverse section of the T. portulacastrum petiole is shown in Figure 2. The petiole was bean shaped. The single-layered upper and lower epidermis was surrounded by a thin cuticle (Fig. 2a). The epidermis was covered with unicellular and multicellular, 2-3 celled trichomes. The ground tissue was parenchymatous. The vascular bundles were three in number, their size varying from the centre towards the leaf margin, i.e. from large to small. They were centripetally arranged, i.e. the xylem was surrounded by the phloem (Fig. 2b).

Leaf

The transverse section of the T. portulacastrum leaf is shown in Figure 2. The leaf lamina was dorsiventral in nature. The upper and lower epidermis were single layered. The palisade tissue was single layered on the upper surface and covered with a thick cuticle (Fig. 2c). The lower surface of the leaf showed unicellular trichomes. The mesophyll was small, consisting of 4-7 layers. The T.S. passing through the midrib region showed vascular bundles towards the ventral surface, surrounded by palisade tissue (Fig. 2d). The centrally located conjoint, collateral vascular bundles were surrounded by spongy parenchymatous cells. The xylem was surrounded by phloem (Fig. 2e). Paracytic stomata were present in the lower epidermis (Fig. 2f).

Stem

The transverse section of the T. portulacastrum stem is shown in Figure 3. The epidermis was single layered, thick walled, narrow and small, and was surrounded by a thick cuticle (Fig. 3a). Unicellular and multicellular trichomes were present on the outer surface of the epidermis. The cortex region consisted of 6-8 layers (Fig. 3b). Vascular bundles were present in the pith region. The plant showed secondary growth, with phloem present below the xylem (Fig. 3c). The vascular bundles were surrounded by polygonal parenchymatous cells and were conjoint, collateral and of the closed type, arranged in a ring (Fig. 3d). The vascular bundles were eight to ten in number, without a cambium ring; the pith was made up of well-developed parenchymatous tissue (Fig. 3e). The xylem was well developed and consisted of vessels, fibres, metaxylem and xylem parenchyma. The phloem consisted of sieve tubes, companion cells and phloem parenchyma (Fig. 3f).

Powder microscopy of the plant

The crude powder of the T. portulacastrum plant was green in colour, its taste was bitter and its odour characteristic. The powder microscopic characteristics are shown in Figure 4. The specific characteristics of the powder determined by microscopic investigation included unicellular trichomes, multicellular trichomes, spiral vessels, annular vessels, bordered pitted vessels, pitted vessels, paracytic stomata, sclerenchymatous cells, etc.

Physicochemical analysis

The physicochemical analysis of the T. portulacastrum plant is given in Figures 5 and 6. The loss on drying of the dry plant powder was 9.5%. The nitrated ash of the whole plant powder was 16.83% and the carbonated ash was 17.83%. The extractive values of the whole plant powder are given in Figure 6. The maximum soluble extractive value was found in methanol (8.64%) and the minimum in petroleum ether (0.52%). The water soluble extractive value was 17.74%.
Phytochemical analysis

The qualitative phytochemical screening of the crude powder of the T. portulacastrum plant is given in Table 4. In the crude powder of the whole plant, coumarins were present in the greatest amount, followed by saponins and leucoanthocyanins (Table 4). Alkaloids, flavonoids, tannins, steroids, cardiac glycosides, triterpenes, anthocyanins, phenols and quinones were present in trace amounts, while phlobatannins were absent.

The qualitative phytochemical analysis of T. portulacastrum in the different solvent extracts is also given in Table 4. In the PE extract, steroids were present in moderate amount; alkaloids, saponins and coumarins were present in trace amounts, while the remaining phytoconstituents were absent (Table 4). In the TO extract, steroids were present in maximum amount, followed by coumarins; alkaloids and flavonoids were present in trace amounts, while the remaining phytoconstituents were absent. In the EA extract, steroids were present in maximum amount, followed by coumarins; flavonoids and triterpenes were present in trace amounts, while the remaining phytoconstituents were absent. In the ME extract, alkaloids, tannins, steroids and triterpenes were present in moderate amounts; flavonoids and coumarins were present in trace amounts, while the remaining phytoconstituents were absent. In the AQ extract, alkaloids and saponins were present in moderate amounts; flavonoids were present in trace amounts, while the remaining phytoconstituents were absent.
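Because Table 4 is a constituent-by-extract grid of semi-quantitative scores, the screening results are straightforward to encode programmatically for cross-extract comparison. The following sketch is one hypothetical encoding of the observations just described, using an assumed 0-3 scale (absent, trace, moderate, maximum); the scale, the dictionary layout and the helper function are illustrative and not part of the original table.

```python
# Semi-quantitative scores: 0 = absent (-), 1 = trace (+),
# 2 = moderate (++), 3 = maximum (+++). The scale is an assumption.
SCREEN = {
    "PE": {"steroids": 2, "alkaloids": 1, "saponins": 1, "coumarins": 1},
    "TO": {"steroids": 3, "coumarins": 2, "alkaloids": 1, "flavonoids": 1},
    "EA": {"steroids": 3, "coumarins": 2, "flavonoids": 1, "triterpenes": 1},
    "ME": {"alkaloids": 2, "tannins": 2, "steroids": 2, "triterpenes": 2,
           "flavonoids": 1, "coumarins": 1},
    "AQ": {"alkaloids": 2, "saponins": 2, "flavonoids": 1},
}

def extracts_containing(constituent: str, min_score: int = 1) -> list[str]:
    """Solvent extracts in which a constituent was detected at or above
    the given score; any constituent not listed is taken as absent (0)."""
    return [ext for ext, row in SCREEN.items()
            if row.get(constituent, 0) >= min_score]

print(extracts_containing("steroids", min_score=2))  # ['PE', 'TO', 'EA', 'ME']
print(extracts_containing("saponins"))               # ['PE', 'AQ']
```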
Pharmacognostic, physicochemical and phytochemical studies are important because once a plant is dried and powdered it loses its morphological identity and becomes easily prone to adulteration. Pharmacognostic study ensures plant identity and lays down standardization parameters that protect the drug from adulteration. Such studies help in the authentication of plants and ensure the reproducible quality of herbal products, leading to the safety and efficacy of natural products (Chanda, 2014; Singh et al., 2017). Standardization is a system to ensure that every packet of medicine sold has the correct composition and will exert its therapeutic effect. For the useful application of plant parts in modern medicine, physicochemical and phytochemical standardization is also very important (Saxena et al., 2012).

Organoleptic and macroscopic evaluation is a qualitative evaluation based on the morphological profile of the plant. The macroscopic evaluation of T. portulacastrum showed that the plant was green in colour, the shape of the leaves was obovate, the apex was apiculate and the base was asymmetrical. The stem was light pink in colour and woody. The microscopic evaluation showed that the leaf lamina was dorsiventral and that unicellular and multicellular trichomes were present. The vascular bundles were conjoint, collateral and of the closed type. The stem showed a single-layered epidermis, a distinct cortex region, and secondary growth of the vascular bundles, which were conjoint, collateral, closed and arranged in a ring. The powder study showed unicellular trichomes, spiral vessels, annular vessels, bordered pitted vessels, pitted vessels, paracytic stomata, sclerenchymatous cells, etc. Such studies have been reported for other plants, such as Eucalyptus globulus leaf (Shah et al., 2012) and Madhuca indica leaf (Moteriya et al., 2015).

The physicochemical parameters loss on drying, total ash, acid insoluble ash, water soluble ash, and carbonated, nitrated and sulphated ash were determined. The values were in accordance with those reported earlier (Joshi, 2011). Loss on drying was 9.5%, indicating that the drying process was efficient. Loss on drying was 7% for Chaetomorpha antennina (Dhanki et al., 2018), 8.2% for Cinnamomum verum leaf (Kumar et al., 2012) and 8.8% for Garcinia indica fruit rind (Prasad et al., 2012). This is an important parameter, since it indicates the stability of the drug during storage (Mukherjee, 2002). If the drying process is not efficient, i.e. the moisture content is high, the growth of microorganisms is encouraged, which may lead to degradation of the phytoconstituents of the drug during storage (Evans, 2005).

The ash values ranged from 0.83% to 11.83%. The total ash value was 11.83%, while the acid insoluble ash was 0.83%. These values indicate the amounts of organic and inorganic material present in the plant sample. The acid insoluble ash normally contains silica and earthy material and indicates contamination. In the present work it was negligible; hence it can be stated that the plant material is free from contamination. The total ash values are in accordance with those reported for other plants: for example, the total ash value was 14% for the root of Cryptolepis sanguinolenta (Odoh and Akwuaka, 2012), 11% for the stem bark of Ficus benghalensis (Semwal et al., 2013) and 17% for Cassytha filiformis aerial parts (Ambi et al., 2017).

Extractive values give an idea of the chemical constituents of a crude drug and also help in the estimation of the constituents soluble in a particular solvent. The extractive values of T. portulacastrum in organic solvents ranged from 0.52% to 8.64%, and the water soluble extractive value was 17.74%. This suggests the presence of more polar than non-polar compounds. Similar results have been reported for other plants.

The qualitative phytochemical analysis was done on the crude powder and on various solvent extracts of the plant. The crude powder of T. portulacastrum was rich in coumarins, while its TO and EA extracts were rich in steroids. Plants are endowed with various secondary metabolites that exert particular physiological effects, and preliminary screening gives an idea of the chemical nature of the drug and hence of its therapeutic efficacy. Phytochemical analysis of various solvent extracts of Strychnos potatorum leaves was reported by Kagithoju et al. (2013) and of Thespesia populnea root by Patil et al. (2012). The information obtained through such studies will be helpful in further investigations of this plant.

Fluorescence analysis is a simple, rapid pharmacognostic procedure that is useful in establishing the authenticity of crude drugs and in recognizing adulterants. In fluorescence analysis, the plant parts or crude drugs are examined, as such or in powdered form, with a number of polar and non-polar reagents. It is a valuable analytical tool in the identification of plant samples and crude drugs (Denston, 1946). The fluorescence analysis of T. portulacastrum displayed an array of colours that could be employed for identifying the probable classes of compounds in the plant. Fluorescence is the phenomenon exhibited by various chemical constituents present in the plant material in the visible range in daylight. Ultraviolet light produces fluorescence in many natural products (e.g. alkaloids such as berberine) that do not visibly fluoresce in daylight. Some substances that are not themselves fluorescent can be converted into fluorescent derivatives by different chemical reagents; hence crude drugs can often be assessed qualitatively using fluorescence, making it an important parameter of pharmacognostic evaluation (Ansari, 2006; Gupta et al., 2006). Fluorescence analysis has been reported for other plants such as Bombax ceiba (Wahab et al., 2012), Terminalia arjuna (Desai and Chanda, 2014) and Cyathula prostrata (Sonibare and Olatubosun, 2015).
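Each fluorescence observation is essentially a record mapping a reagent and an illumination condition to an observed colour, so the analysis is naturally captured as a small structured table that can later be compared against reference monographs. The sketch below shows one hypothetical way to structure such records; apart from the green colour of the raw powder noted above, the reagent names and colour entries are placeholders, since the study's recorded colours are not reproduced in this text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FluorescenceRecord:
    reagent: str      # reagent applied to the powder on the slide
    visible: str      # colour in ordinary daylight
    uv_short: str     # colour under short UV (254 nm)
    uv_long: str      # colour under long UV (365 nm)

# Placeholder entries only: the actual colours for T. portulacastrum
# powder were recorded in the study but are not reproduced in this text.
observations = [
    FluorescenceRecord("powder as such", "green", "<recorded>", "<recorded>"),
    FluorescenceRecord("<reagent>", "<recorded>", "<recorded>", "<recorded>"),
]

for rec in observations:
    print(f"{rec.reagent}: visible={rec.visible}, "
          f"254 nm={rec.uv_short}, 365 nm={rec.uv_long}")
```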
Pharmacognostic studies are not part-specific. All parts of a plant are important and show therapeutic efficacy, though their efficacy varies; hence pharmacognostic studies should be done for the particular part of the plant under investigation. Examples of pharmacognostic studies of different parts reported in the literature include root (Shah et al., 2011), rhizome (Jha et al., 2012), stem (Nagani et al., 2011), leaf, stem and root of Ageratum conyzoides and Asparagus officinalis (Janarthanan et al., 2016; Begum et al., 2017), leaf (Rakholiya and Chanda, 2012), aerial parts of Achyranthes aspera (Shukla et al., 2018), flower (Baravalia et al., 2012), pseudobulbs of Coelogyne cristata (Pramanick, 2016) and seed.

The organoleptic, macroscopic and microscopic characters and the phytochemical, physicochemical and fluorescence results of this study could be used for the quality control of the crude drug. They will also help to maintain the efficacy and identity of the drug and will prevent its mishandling. These parameters can be used as reference standards for this plant and will also help in the preparation of a monograph.