id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
249277361 | pes2o/s2orc | v3-fos-license | The number of androgen receptor CAG repeats and mortality in men
Abstract Introduction The androgen receptor (AR) mediates peripheral effects of testosterone. Evidence suggests that the number of CAG repeats in exon 1 of the AR gene negatively correlates with AR transcriptional activity. The aim of this analysis was to determine the association between CAG repeat number and mortality in men. Methods Men aged 40–79 years were recruited from primary care for participation in the UK arm of the European Male Aging Study between 2003 and 2005. Cox proportional hazards modelling was used to determine the association between CAG repeat number and mortality. Results were expressed as hazard ratios (HR) with 95% confidence intervals (CI). Results 312 men were followed up. The mean baseline age was 59.5 years. At follow-up, 85/312 (27%) men had died. CAG repeat length ranged from 14 to 39, with the highest proportion of men at 21 repeats (16.4%). In a multivariable model, using men with CAG repeat numbers of 22-23 as the reference, men with a lower number of CAG repeats (<22) showed a trend towards higher mortality in the follow-up period (HR 1.46 (0.75, 2.81)), as did men with a higher number of repeats (>23) (HR 1.37 (0.65, 2.91)). Conclusion Our data suggest that CAG repeat number may partially influence the risk of mortality in men. Further larger studies are required to quantify the effect.
Introduction
The peripheral effects of testosterone are mediated through the androgen receptor (AR). A number of studies have shown an association between a low serum testosterone (T) and increased all-cause and cardiovascular-related mortality, although a previous meta-analysis of 12 community-based surveys revealed considerable inconsistency between individual studies [1]. Whether low T level is a non-specific risk marker of poor health [1,2] or the association is mediated by the effects of T deficiency on the cardiovascular system [3] is currently unclear. Erectile dysfunction (ED) is increasingly recognized as an early warning signal of impending cardiovascular disease and a predictor of excess mortality [3][4][5].
Late onset hypogonadism (LOH) is defined as the simultaneous occurrence of three sexual symptoms (decreased sexual interest and morning erections plus erectile dysfunction) and circulating total testosterone (T) level below 11 nmol/L plus free T below 220 pmol/L [6]. Using these criteria, associations have been reported between LOH and a variety of end organ deficits suggestive of androgen deficiency [7]. Pye et al. [8], in an earlier analysis of the European Male Ageing Study (EMAS) cohort, using a combination of deaths verified from death certificates (25%), death registers (37%), medical/hospital records (27%) or from the reporting of a family member or contact person, reported that severe LOH was associated with substantially higher risks of all-cause and cardiovascular mortality, to which both the level of T and the presence of sexual symptoms contribute independently.
The androgen receptor (AR) mediates the peripheral effects of testosterone. The main mechanism of action of the AR is the direct regulation of gene transcription. Exon 1 of the AR gene contains a polymorphic sequence of CAG repeats, which varies in number from 10 to 35, and which encodes polyglutamine stretches of the AR transactivation domain [9]. The evidence suggests that the number of CAG repeats is negatively correlated with the transcriptional activity of the AR [9][10][11].
We have previously shown in a male cohort of people with type 2 diabetes (T2DM) that a "U"-shaped relation existed between the number of CAG repeats and mortality [12]. The hypothesis developed was that there is an optimal CAG repeat number in relation to the sensitivity of the androgen receptor, which relates to the effects of testosterone on metabolically active tissues and on the vascular system, and that men with a CAG repeat number above or below this may experience a less favorable long-term cardiovascular outcome. In an earlier meta-analysis, it was reported that men with <22 and >23 CAG repeats had an approximately 20% higher risk of being infertile than men with 22 or 23 CAG repeats [13].
In the light of these findings, our aim was to determine if any relation existed between the number of CAG repeats and all-cause mortality in a United Kingdom (UK) prospective cohort of mostly Caucasian men recruited from primary care [14], taking into account baseline testosterone level.
Hormones and biochemistry
A single fasting morning venous blood sample (before 10 am) was obtained from each participant.
The blood samples were collected between 2003 and 2005.
The testosterone level was measured by liquid chromatography-mass spectrometry (LC-MS) as described previously [17]. Sex Hormone Binding Globulin (SHBG) was measured by the Modular E170 platform electrochemiluminescence immunoassay (Roche Diagnostics). Free T levels were derived from total T, SHBG, and albumin concentrations by the Vermeulen formula [18]. Measurement of total estradiol was carried out by gas chromatography-tandem mass spectrometry [17].
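The free T derivation lends itself to a short illustration. Below is a minimal R sketch of the Vermeulen calculation, assuming the commonly used association constants (K_albumin = 3.6×10^4 L/mol, K_SHBG = 1×10^9 L/mol) and a default albumin of 43 g/L; the function name and inputs are illustrative and not taken from the study's own code.

```r
# Free testosterone by the Vermeulen formula (sketch; constants assumed as
# K_alb = 3.6e4 L/mol and K_SHBG = 1e9 L/mol; albumin MW taken as ~69 kDa).
free_testosterone_pmol <- function(total_t_nmol, shbg_nmol, albumin_g_l = 43) {
  ka <- 3.6e4                          # albumin association constant (L/mol)
  ks <- 1e9                            # SHBG association constant (L/mol)
  t  <- total_t_nmol * 1e-9            # total T in mol/L
  s  <- shbg_nmol * 1e-9               # SHBG in mol/L
  n  <- 1 + ka * (albumin_g_l / 69000) # albumin binding term
  # Free T (FT) solves: n*ks*FT^2 + (n + ks*(s - t))*FT - t = 0
  a  <- n * ks
  b  <- n + ks * (s - t)
  ft <- (-b + sqrt(b^2 + 4 * a * t)) / (2 * a)
  ft * 1e12                            # convert mol/L to pmol/L
}

free_testosterone_pmol(total_t_nmol = 15, shbg_nmol = 40)  # approx. 274 pmol/L
```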
Determination of CAG repeat number
Genetic analysis was done in 2008. DNA extracted from whole blood was subjected to polymerase chain reaction (PCR) to amplify the region of the AR gene containing the AR exon 1 CAG triplet repeat. PCR preparation, primers, and conditions were as described in a previous study [7]. Genotyping of the CAG repeat was carried out in the laboratory of the Centre for Integrated Genomic Medical Research (The University of Manchester), using fluorescently-labeled PCR. Ten nanograms of DNA were amplified in 10-µl reactions containing 2.5 pmol each of fluorescently labeled forward and reverse primer, 10× PCR buffer, 1.5 mM MgCl2, 0.2 mM dNTPs, and 0.2 U Taq DNA polymerase. The primer sequences were: forward, 5'-TCC AGA ATC TGT TCC AGA GCG TGC-3'; and reverse, 5'-GCT GTG AAG GTT GCT GTT CCT CAT-3'. Reactions were cycled at 95 °C for 5 min; 10 cycles of 94 °C for 10 s, 55 °C for 30 s, and 72 °C for 30 s; 20 cycles of 89 °C for 20 s, 55 °C for 30 s, and 72 °C for 30 s; and finally, 72 °C for 10 min. Samples were then run on an ABI PRISM 3100 Genetic Analyser (Applied Biosystems, Foster City, CA) and genotyped using GeneScan (Applied Biosystems). Allele frequencies were checked for consistency with HapMap data or literature where possible.
Statistical analysis
Cox regression was used to determine the association between the number of CAG repeats and all-cause mortality. Participants contributed person-time from the date of recruitment and stopped contributing at their date of death or at the end of February 2021, whichever came first. Based on previously published thresholds [13], CAG repeats were categorized as <22, 22-23 (referent group) and >23. Analyses were adjusted for age, decile of Index of Multiple Deprivation, total testosterone at baseline, and estradiol. In a separate model, we additionally adjusted for SHBG.
To determine whether the association between AR CAG repeats and mortality varied by level of total testosterone, we categorized total testosterone into tertiles and repeated the Cox regression analysis for participants in each of the three tertiles of total testosterone, in separate models. In the models stratified by tertile of total testosterone, we adjusted for total testosterone as a continuous variable.
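As a concrete illustration of the models just described, the following R sketch uses the survival package; the data frame emas and its column names (cag_repeats, time_years, died, imd_decile, total_t, estradiol) are hypothetical stand-ins, not variables from the study dataset.

```r
library(survival)

# Categorize CAG repeats with 22-23 as the referent group
emas$cag_group <- cut(emas$cag_repeats, breaks = c(-Inf, 21, 23, Inf),
                      labels = c("<22", "22-23", ">23"))
emas$cag_group <- relevel(emas$cag_group, ref = "22-23")

# Main adjusted Cox model for all-cause mortality
fit <- coxph(Surv(time_years, died) ~ cag_group + age + imd_decile +
               total_t + estradiol, data = emas)
summary(fit)  # HRs with 95% CIs for <22 and >23 vs 22-23

# Stratified analysis: refit within each tertile of baseline total
# testosterone, still adjusting for total testosterone as a continuous term
emas$t_tertile <- cut(emas$total_t,
                      breaks = quantile(emas$total_t, probs = c(0, 1/3, 2/3, 1)),
                      include.lowest = TRUE)
fits_by_tertile <- lapply(split(emas, emas$t_tertile), function(d)
  coxph(Surv(time_years, died) ~ cag_group + age + imd_decile +
          total_t + estradiol, data = d))
```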
Results
In total, 396 men were recruited to the study, of whom 312 had complete data for both CAG repeat number and mortality. At baseline, 22/312 reported having diabetes (7.16%). The age range at recruitment of the 312 men who contributed data for the analysis was 40-80 years (mean 59.5 years); 93% of participants in the study were of Caucasian ethnicity. During a follow-up period of up to 17 years, 85/312 (27.2%) of the men died. Baseline data are summarized in Table 1.
Relation between CAG repeat number and mortality
Compared to participants with 22-23 CAG repeats, participants with <22 and >23 CAG repeats, respectively, had a higher hazard ratio (95% CI) for all-cause mortality during the study period in a model adjusted for age, total testosterone, total estradiol, and decile of index of multiple deprivation, though this was not statistically significant at the 95% confidence level: 1.46 (0.75, 2.81) and 1.37 (0.65, 2.91) (Figure 1 and Table 3). Adjustment for SHBG attenuated the association between the number of CAG repeats and mortality (Table 3).
The association between AR CAG repeats and mortality varied by tertile of total testosterone at baseline. In a model adjusted for age, total testosterone (continuous measure), total estradiol, and quintile of index of multiple deprivation, including only men in the lowest tertile of total testosterone at baseline, the hazard ratio (95% CI) for mortality among those with <22 and >23 CAG repeats, respectively (compared to 22-23 CAG repeats), was 2.64 (0.76, 9.13) and 2.65 (0.66, 10.66). The corresponding results among men in the middle tertile of total testosterone were 1.72 (0.35, 8.45) and 1.76 (0.34, 9.13), and among men in the highest tertile of testosterone were 1.03 (0.34, 3.12) and 0.87 (0.24, 3.13).
Discussion
In this long-term prospective follow-up study (approximately 17 years), we observed a trend for men with a CAG repeat number of less than 22 or more than 23 to be more likely to die over the period of follow-up than those with a CAG repeat number between 22 and 23. This effect was modified by serum testosterone level at baseline, with a diminishing effect by increasing tertile of baseline serum testosterone level. This is the longest follow-up study to describe this phenomenon and accords with the findings of a recently published study in T2DM men in which a similar but not identical association was seen [12]. Interestingly, among men with type 2 diabetes in the study of Wong et al., a higher total testosterone was reportedly associated with increased mortality in the presence of shorter CAG repeat lengths but decreased mortality in those with long CAG repeats [19]. A smaller effect size in the relation between CAG repeat number and mortality was seen here than was previously described in T2DM men [12]. The reduced effect size may relate to a higher cardiovascular event rate in men with T2DM than in the male participants in this study and to the fact that the men with T2DM were older at recruitment than the men whose outcomes are reported here. Nevertheless, the direction was similar.
An increasing CAG repeat number within the exon 1 polymorphism of the AR gene has previously been linked to increased AR insensitivity [9][10][11]. In a cohort of T2DM men, we previously reported a "U"-shaped relation between the number of CAG repeats and mortality, such that the presence of 21 CAG repeats was associated with up to a 58% lower mortality rate than <20 or >21 CAG repeats [12]. While the findings of this current study differ slightly, they are in an independent sample of men of whom 6.8% reported having diabetes at baseline. The fact that the relation between CAG repeat length and mortality in a general population sample of men was less strong than in men with type 2 diabetes does raise the question of whether the effect may be greater in men with an underlying predisposition to cardiovascular disease. The numbers here were not sufficient to examine this, as only 6.8% of the men had a diagnosis of T2DM at recruitment.
The transactivational activity of the AR is known to be inversely associated with the number of CAG repeats [20]. Whether variations of the CAG repeat length within a particular range are associated with clinical changes in tissue androgenization is uncertain. Similarly, there is no conclusive evidence that AR CAG repeat polymorphisms modulate the responsiveness to exogenous testosterone replacement in hypogonadism. Larger studies are required to characterize fully the impact of modest variability in CAG repeat length on mortality in men, as well as to define more precisely the interaction between CAG repeat number and circulating testosterone/SHBG concentration in modulating mortality risk in men.
Independent associations between AR exon 1 CAG length and adverse cardiovascular risk factors, such as high LDL-cholesterol [21], low HDL-cholesterol [22], and high blood pressure [8,22], have been demonstrated by other studies. Specifically, the combination of a longer AR exon 1 CAG repeat length and low total testosterone concentrations appears to exert an adjunctive worsening effect on the metabolic profile [23,24]. This suggests some level of complexity in the role of the CAG polymorphism in regulating the relation between androgen effects and cardiovascular risk factors, which may inform future risk calculators taking into account CAG repeat number. At this time, there are no reports to suggest that CAG repeat number is itself associated with the likelihood of a man developing T2DM.
Of relevance to risk factors for early death, it has been reported that men with a CAG repeat length below 21 or above 24 had, respectively, a 50% and 76% higher risk of testicular cancer than patients with a CAG repeat number of 21-24 [25]. In other words, the risk of developing testicular cancer would seem to be lower for men with a CAG repeat number between 21 and 24. In another study with an approach similar to the study reported here (as mentioned in the Introduction), a meta-analysis of 3915 men (1831 fertile and 2084 infertile) [13] reported that men with <22 and >23 CAG repeats had an approximately 20% higher risk of being infertile than men with 22 or 23 CAG repeats. Taken together with our findings, the evidence supports the notion that normal AR function is sustained over a critical but limited range of CAG repeat number. At this point in time, we do not have access to the actual cause of death [26]; cause-specific mortality was not available to us. Further work is needed to look at potential associations between CAG repeat number and specifically cardiovascular mortality.
Strengths/limitations
A strength of the paper is the duration of follow-up of the participants. A limitation is that we have only analysed the UK subset of the EMAS cohort, since linked NHS data were available for this subset; our analysis is therefore limited by sample size. A further limitation is that we only have mortality data, and no details of comorbidities (as at 2019/2020) that may have developed over time. We plan to obtain this information in the future and also to extend the analysis to the other cohorts of the EMAS study.
We accept that there are many other risk factors for cardiovascular disease. However, these have been linked to relative "functional" hypogonadism, which is the consequence of androgen receptor resistance secondary to a low or high number of CAG repeats. With a relatively small number of deaths, we did not feel that inclusion of multiple linked cardiovascular risk factors would be appropriate here.
Regarding the analysis of the association between prescribing over time/development of neoplastic disorders and mortality, this would require a larger cohort and a complex longitudinal analysis, which we are in the process of putting together. Finally, we accept that the study was conducted mainly in Caucasian men. This is a relatively small sample, and any recommendations regarding CAG repeat measurement would require a larger analysis. We anticipate that with corroboration from other cohorts it may be possible to define more precisely the interaction between CAG repeat number and circulating testosterone/SHBG concentration in modulating mortality risk in men.
Conclusions
In this long-term prospective follow-up study, we observed that men with an AR exon 1 CAG repeat number of less than 22 or more than 23 showed a trend towards a higher mortality rate over the period of follow-up than those with a CAG repeat number between 22 and 23. These results did not reach formal statistical significance. The effect was greater in men with a lower baseline testosterone level.
A greater understanding of the interaction between CAG repeat number and circulating testosterone level could add further to our understanding of the endocrine processes that modulate mortality risk in men.
Disclosure statement
None of the authors has any conflict of interest regarding this study.
| 2022-06-03T06:23:09.877Z | 2022-06-02T00:00:00.000 | {
"year": 2022,
"sha1": "037854ec3c186e9f0cb28c8b96862cdcaec91e4d",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/13685538.2022.2061452?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "735cfcf1bbc76f44c3df3c65a3f633ac1fc58188",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244430392 | pes2o/s2orc | v3-fos-license | Plasma RIPK3 And HMGB1 Predict Severe COVID-19 Progression In ICU Patients: A Single-Center Cohort Study
Background: Severe progression of coronavirus disease 2019 (COVID-19) causes respiratory failure and critical illness. Recently, these pathologies have been associated with necroptosis, a receptor-interacting serine/threonine-protein kinase 3 (RIPK3)-dependent regulated form of inflammatory cell death. Investigations of indicator necroptosis proteins like RIPK3, mixed lineage kinase domain-like pseudokinase (MLKL), receptor-interacting serine/threonine-protein kinase 1 (RIPK1), and high-mobility group box 1 (HMGB1) in clinical COVID-19 manifestations are lacking. Methods: A prospective prolonged cohort study including 46 intensive care unit (ICU) patients classified with moderate and severe COVID-19 was conducted, with plasma levels of the indicator necroptosis proteins RIPK3, MLKL, RIPK1, and HMGB1 measured daily by enzyme-linked immunosorbent assay (ELISA). On this basis, a multiple logistic (regression) classification for the prediction of severe COVID-19 progression was performed. Results: We found significantly elevated RIPK3, MLKL, HMGB1, and RIPK1 levels in COVID-19 patients admitted to the ICU compared to healthy controls throughout the ongoing disease, indicating necroptotic processes. Above all, with combined measurements of RIPK3 and HMGB1 plasma levels, we were able to time-independently predict COVID-19 severity with 84% accuracy, 90% sensitivity, and 76% specificity. Conclusion: We suggest that HMGB1 and RIPK3 are potential biomarkers to identify high-risk COVID-19 patients, and we developed a classifier for COVID-19 severity.
Introduction
Coronavirus disease 2019 is the most challenging pandemic in recent human history. In November 2021, the World Health Organization reported 249,743,428 cases of COVID-19 with 5,047,652 deaths globally [1]. Developing a method to determine the exact point of COVID-19 exacerbation is crucial for the disease outcome and appropriate treatment. Patients suffering from critical COVID-19 often present with respiratory failure as well as features of sepsis, such as coagulopathy, lymphopenia, and high plasma levels of pro-inflammatory cytokines [2].
In various comparable non-COVID-19-related inflammatory diseases, it is already established that the receptor-interacting serine/threonine-protein kinases 1 and 3 (RIPK1 and RIPK3), as well as the mixed lineage kinase domain-like pseudokinase (MLKL), are associated with disease progression as important regulators of necroptotic cell death [3,4]. For example, the examination of lung tissue sections in H7N9 virus infection, in which acute respiratory distress syndrome (ARDS) was the main cause of death, showed significantly higher RIPK1, RIPK3, phospho-RIPK3, MLKL, and phospho-MLKL protein levels [5]. These data suggest that severe H7N9 infection is associated with necroptosis of the lung epithelium, which contributes to ARDS. This hypothesis is supported by results that showed significantly increased RIPK3 levels not only in the plasma of ARDS patients but also in bronchoalveolar lavage fluid [6]. Furthermore, elevated RIPK3 levels in the plasma of patients with severe sepsis or septic shock also indicate that the RIPK3 signaling pathway is activated under septic conditions [7]. Necroptosis, also referred to as RIPK3-dependent necrosis, is executed by phosphorylated and activated RIPK1 and RIPK3, which form a complex known as the necrosome [8][9][10]. Subsequently, the effector molecule MLKL is phosphorylated, enabling it to oligomerize and migrate to the cell membrane, leading to the release of damage-associated molecular patterns (DAMPs), cell rupture, and lytic cell death [11]. This promotes cytokine production and an excessive immune response [12]. High-mobility group box 1 (HMGB1), considered one of the most relevant DAMPs released by necroptotic cells, usually binds to DNA as well as chromatin and exerts its function in chromatin modification and DNA repair [13][14][15][16][17][18]. When released during inflammatory cell death, HMGB1 triggers immunological processes, inducing recruitment of immune cells and expression as well as release of pro-inflammatory cytokines (interleukin 6 (IL-6), IL-1β, tumor necrosis factor-α (TNF-α)), as similarly described in COVID-19 [13,[19][20][21]. Extracellular HMGB1 is furthermore capable of forming complexes with cytokines, amplifying hyperinflammation [22,23]. Moreover, high serum HMGB1 levels in non-COVID-19 patients were linked to fatal ARDS [24]. Besides, reactive oxygen species (ROS) production is associated with necroptosis, and mitochondrial ROS (mtROS) production also plays a crucial role in peripheral lymphocytes in severe disease conditions [25][26][27].
Against this background, we decided to conduct close monitoring of plasma levels of the necroptosis-related proteins RIPK3, MLKL, RIPK1, and the DAMP HMGB1 in COVID-19 patients throughout intensive care unit (ICU) stay. The current single-center cohort study aims to investigate the prognostic potential of RIPK3, MLKL, HMGB1, and RIPK1 in COVID-19 progression as feasible biomarkers. Using long-term measurement data, we were able to build a classifier that predicts COVID-19 exacerbation independently of time. In addition, we analyzed cell death and mtROS in peripheral leukocytes of ICU COVID-19 patients in single measurements, to verify whether these parameters differ in COVID-19 patients as shown before in severe disease conditions, e.g. sepsis patients [27].
COVID-19 cohort
This is a prospective single-center cohort study of 46 COVID-19 patients (≥18 years) who were admitted to the ICU of the University Hospital Frankfurt am Main, Germany, between June 2020 and January 2021. During the ICU stay, blood samples were obtained daily at 8 a.m. from admission until ICU discharge. Inflammatory parameters including C-reactive protein (CRP), IL-6, procalcitonin (PCT), lactate dehydrogenase (LDH), and peripheral leukocyte count were obtained daily at 4 a.m., measured by the hospital's central laboratory, and compared to the hospital's central laboratory's threshold levels (CRP: 0.5 mg/dl, IL-6: 7 pg/ml, PCT: 0.5 ng/ml, LDH: 248 U/l, peripheral leukocyte count: 10.41/nl). Control samples were drawn from 15 non-severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)-infected healthy donors (≥18 years) to compare healthy physiological conditions to COVID-19. The study was conducted in compliance with good clinical practice and current guidelines. Intubation was considered in patients with COVID-19 and severe hypoxemia (PaO2/FiO2 <150 mmHg) and respiratory rates >30/min. A PaO2/FiO2 of <100 mmHg in two consecutive measurements was an indication to perform mechanical ventilation, according to the German guideline [28]. Based on this, it was feasible to distinguish between patients with severe and moderate COVID-19 according to their requirement for intubation throughout their ICU stay. Patients were transferred to a normal ward if their oxygen requirement was <6 l/min and their SpO2 >90%. Patients who were not mechanically ventilated despite indication, owing to the patient's own wishes (n=3), were also assigned to the group of patients with severe COVID-19. The time of symptom onset was specified by the patient.
Plasma preparation and measurement
Whole blood samples were drawn into citrate tubes (SARSTEDT S-Monovetten, Nümbrecht, Germany; 3.13% citrate). Samples were centrifuged for 10 minutes at 2000 g, and plasma was stored at -80°C until further processing.
Statistical analysis
Statistical analyses were carried out with GraphPad Prism version 7.0 (GraphPad Software Inc., San Diego, CA, USA) and R v4.0.3 (R Foundation for Statistical Computing, Vienna, Austria) [29]. Descriptive variables were calculated using means with standard deviation (SD), medians and interquartile ranges (IQRs, P25%-P75%), as well as counts and percentages. For continuous variables, two-tailed Student's t- or Mann-Whitney U tests were performed. ANOVAs with Tukey's post hoc test for multiple comparisons or Kruskal-Wallis tests with Dunn's post hoc test for multiple comparisons were used to examine more than two groups. Adjusted p-values from post hoc tests were indicated (p_adj). For categorical data, Fisher's exact test was performed. Principal Component Analysis (PCA) was performed to investigate the relevance of the measurement parameters and their correlations. A p-value <0.05 was considered statistically significant (*p<0.05; **p<0.01; ***p<0.001).
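A compact R sketch of these comparisons follows; the data frame dat and its column names are hypothetical, and FSA::dunnTest is named here as one common implementation of Dunn's test (the paper does not specify which package was used).

```r
# Two-group comparisons: Mann-Whitney U for continuous, Fisher for categorical
wilcox.test(hmgb1 ~ severity, data = dat)
fisher.test(table(dat$severity, dat$hypertension))

# More than two groups: Kruskal-Wallis with Dunn's post hoc test
kruskal.test(ripk3 ~ group, data = dat)
FSA::dunnTest(ripk3 ~ group, data = dat, method = "holm")  # adjusted p-values

# PCA on the scaled marker measurements
pca <- prcomp(dat[, c("ripk3", "mlkl", "hmgb1", "ripk1")],
              center = TRUE, scale. = TRUE)
summary(pca)  # proportion of variance explained per component
```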
Multiple logistic (regression) classifier
A multiple logistic regression analysis was performed including 28 patients with severe COVID-19, defined by the requirement for mechanical ventilation at the time of the respective blood collection, and 18 patients with moderate COVID-19. The COVID-19 cohort was censored (0=ICU discharge and 1=death). The day of the censoring event was labeled as day E and chosen as a reference point. Data from the days E to E-3 were used for training and testing the model. The remaining data up to E-7 were used as complementary validation data. The labeled data were split randomly into a training (70%) and a test set (30%). Each set contained binary class information about the patients' severity status (moderate=0 and severe=1) as well as the quantitative measurements of plasma RIPK3, MLKL, HMGB1, and RIPK1 levels.
Different models were trained and evaluated, including single or combined variables. All models were calculated using a generalized linear model (GLM) of the binomial family to find a classifier (caret package) [30]. A 10-fold cross-validation was performed to exclude a subsample bias and prevent the model from being overfitted. Each model was calculated as a regular logistic regression to compare the relative goodness-of-fit with Akaike's information criterion (AIC). The ideal predictors for the classifier were selected by evaluation of the classification performance indicators, e.g., predictors with the highest accuracy and the lowest AIC after cross-validation were chosen. The final classifier was built with training data that showed mean accuracies >90% on days E to E-3. This model was tested with the separated validation set to determine the overall predictive quality of the classifier, e.g., with general performance parameters (accuracy, sensitivity, and specificity). The odds and odds ratio (OR) of the predictor variables were determined from the coefficients of the final regression model. Finally, multiple logistic regression models were performed using measurements of RIPK3, HMGB1, CRP, IL-6, PCT, LDH, and peripheral leukocyte count, including a training and a test set.
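The training and evaluation steps described above can be sketched in R roughly as follows; train_df/test_df and the outcome/predictor column names are hypothetical placeholders (with severity assumed to be a factor with levels moderate/severe), and the predictor pair shown is just one of the combinations evaluated.

```r
library(caret)

ctrl <- trainControl(method = "cv", number = 10)  # 10-fold cross-validation

set.seed(42)
fit_cv <- train(severity ~ ripk3 + hmgb1, data = train_df,
                method = "glm", family = binomial, trControl = ctrl)
fit_cv$results$Accuracy  # mean cross-validated accuracy

# The same model as a plain logistic regression, for goodness-of-fit via AIC
fit_glm <- glm(severity ~ ripk3 + hmgb1, family = binomial, data = train_df)
AIC(fit_glm)

# Held-out evaluation: accuracy, sensitivity, specificity on the test split
pred <- predict(fit_cv, newdata = test_df)
confusionMatrix(pred, test_df$severity, positive = "severe")
```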
Demographic characteristics and laboratory parameters
Between June 2020 and January 2021, we considered 46 COVID-19 patients admitted to the ICU, of whom 28 patients showed a moderate and 18 patients a severe COVID-19 progression during ICU stay. Overall, ICU admission occurred on day seven (4-11) after symptom onset. Patients with severe COVID-19 were older (p=0.033) and showed an extended ICU stay (p<0.001) and an increased mortality rate (p<0.001) compared to patients with moderate COVID-19 (Table 1). The median survival time after ICU admission in patients with severe COVID-19 was 17 days. Of all investigated comorbidities, we found a significantly increased rate of arterial hypertension in patients with severe compared to moderate COVID-19 (p=0.016) (Table 1).
Additionally, we examined these measurements with symptom onset as a baseline (Fig. S1).
Prediction of severe COVID-19 progression with a multiple logistic (regression) classifier: model selection
With randomly selected training data from the day of the censoring event E (ICU discharge or death) to E-3 (three days before the censoring event), we evaluated models with different predictor combinations in a time-independent manner (Table 2). Although a model consisting of HMGB1, RIPK3, and RIPK1 and a model consisting of solely RIPK3 achieved better accuracies with 78%, their fits and thus model qualities (Akaike's information criterion (AIC)) were worse, resulting in a model with HMGB1 and RIPK3 with a marginally superior fit (AIC=65.91), pointing to a simpler and, therefore, more applicable model. While this was an improvement over the other models, a slightly lower classification accuracy Acc(HMGB1+RIPK3) = 77% was achieved. However, this model required only two measurements, as opposed to three.
To further evaluate on which days plasma RIPK3, MLKL, HMGB1, and RIPK1 levels distinguished best between severe and moderate COVID-19 progression, we looked at individual days and markers backward from the censoring event (E). On days E-3 until the event, HMGB1 (Fig. 3a-d, third column), and on days E-3 and E-1, RIPK3 (Fig. 3b,d, first column) plasma levels were significantly elevated in patients with severe compared to those with moderate COVID-19. In contrast, RIPK1 and MLKL levels did not differ significantly between severe and moderate COVID-19 (Fig. 3a-d, second and fourth column) and were therefore not included in our further analysis. Consequently, plasma RIPK3 and HMGB1 were selected for further evaluation.
Evaluation of the discriminatory ability of combined RIPK3 and HMGB1 plasma levels for building the COVID-19 severity classifier
To organize data for training and testing the selected model, combined RIPK3 and HMGB1 plasma levels are viewed backward from the censoring event (E). Table 3 shows the classification performance of predicting COVID-19 progression using the training and test data. The days starting from day E to E-3 were analyzed separately. In the training data, days E, E-1, and E-3 achieved accuracies of 100%. The most stable and optimal results in discriminating between patients with moderate and severe COVID-19 were found on days E-1 and E-3, as indicated by significantly higher RIPK3 and HMGB1 plasma levels (Fig. 3b,d, first and third column) as well as by the performance of the training and test data using combined RIPK3 and HMGB1 measurements from these days (Table 3). Therefore, HMGB1 and RIPK3 plasma levels from these days were used in building the final classifier.
Prediction of severe COVID-19 progression with combined RIPK3 and HMGB1 measurements
Table 4 shows the performance of the final classifier with the training, test, and validation data. The overall accuracies of discriminating between a moderate and severe COVID-19 progression were high (>83%). The fraction of false positives and false negatives was low, resulting in specificity and sensitivity levels >74%. The test set reached 83% accuracy as well as 89% sensitivity and 74% specificity (Fig. 4b), which was exceeded by the validation set using data from up to day E-7 (excluding E-1 and E-3) with 84% accuracy, 90% sensitivity, and 76% specificity (Fig. 4c). This was particularly accurate up to 6 days before the censoring event (Fig. 4d).
Also, RIPK3 plasma levels of patients with moderate COVID-19 approached the healthy control levels before ICU discharge (Fig. 4e). Notably, HMGB1 plasma levels indicated significant differences between patients with moderate and severe COVID-19 at a very early stage (E-8) (Fig. 4f). Therefore, the combination of circulating levels of RIPK3 and HMGB1 can be used to time-independently classify COVID-19 patients admitted to the ICU into potential disease severity states (Fig. 4a-c).
The odds of changing COVID-19 severity based on RIPK3 and HMGB1 levels
To further estimate these findings, a logistic regression model was calculated with the full data over the entire observation period of plasma RIPK3 and HMGB1 levels as independent variables and disease progression of COVID-19 as the dependent variable. With this model, the odds of changing the disease severity state were estimated (Table 5).
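In R, such odds ratios can be read directly off the fitted coefficients; a minimal sketch (full_df and column names hypothetical, as above):

```r
# Odds ratios per unit increase in each marker, with 95% profile CIs
fit_full <- glm(severity ~ ripk3 + hmgb1, family = binomial, data = full_df)
exp(cbind(OR = coef(fit_full), confint(fit_full)))
```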
Model comparison for the prediction of severe COVID-19 progression using multiple inflammatory variables
To compare the predictive power of established inflammatory markers as well as RIPK3 and HMGB1, we additionally performed a multiple logistic regression model including measurements of RIPK3, HMGB1, CRP, IL-6, PCT, LDH, and peripheral leukocyte count. Interestingly, in the training set, the combination of measured RIPK3, HMGB1, and PCT levels reached the highest accuracy (93%). In order to be able to represent the plot in two dimensions, with one variable on the x-axis and one on the y-axis, we chose a model with two measurements from our potential biomarkers. In fact, the combination of RIPK3 and HMGB1 levels discriminated best between moderate and severe COVID-19 progression, with an accuracy of 86% in the training set (Table 6). In the test set, both models performed similarly, with an accuracy of 83.7% (Table 7). Therefore, plasma RIPK3 and HMGB1 are the most suitable candidates for predicting COVID-19 severity.
Discussion
In this study, plasma RIPK3, MLKL, HMGB1, and RIPK1 levels of COVID-19 patients were obtained in daily-assessed measurements throughout the whole ICU stay. Based on these data, we developed for the first time a classifier built on RIPK3 and HMGB1 as potential biomarkers to discriminate between moderate and severe COVID-19 progression after ICU admission with an accuracy of 84%. Several independent lines of evidence support this conclusion.
First, COVID-19 intensive care patients showed continuously significantly higher plasma RIPK3 levels than healthy controls throughout their ICU stay, strongly indicating ongoing RIPK3-dependent necroptosis. In addition to previous investigations that considered RIPK3 levels at single time points [31,32], our prolonged study revealed that patients with severe COVID-19 possess higher RIPK3 plasma levels in a time-dependent manner.
Also, patients with moderate COVID-19 showed decreasing RIPK3 levels, corresponding to their recovery.
Second, we observed significant long-term elevations of HMGB1 in COVID-19 intensive care patients compared to healthy controls. Elevations of HMGB1 levels were associated with the requirement for mechanical ventilation and fatal outcome, as also demonstrated in our supplemental data and previous studies [33,34]. Notably, we also revealed significant elevations of plasma HMGB1 corresponding to severe COVID-19 progression in a disease-dependent time course. Chen et al. observed an association between exogenous human HMGB1 and stimulated angiotensin-converting enzyme 2 (ACE2) expression as an entry receptor for SARS-CoV-2 in cultured human lung epithelial cells, indicating a feedback loop that possibly worsens patients' outcomes [33]. RIPK3 and extracellular HMGB1 also contribute to endothelial dysfunction and loss of barrier integrity, considered to be involved in COVID-19 pathology [6,[35][36][37]. Since high extracellular levels of HMGB1 are particularly harmful, our results provide evidence for HMGB1 as a potential drug target in COVID-19, as has been successfully demonstrated for IL-6 signaling [38].
Third, we found that plasma MLKL and RIPK1 levels tended to be higher in COVID-19 ICU patients compared to healthy controls, indicating an involvement of necroptosis in COVID-19 pathology. Accordingly, upregulation of phosphorylated MLKL was detected in lung tissue of SARS-CoV-2-infected mice and post mortem human lungs. In vitro, MLKL and RIPK3 contributed to cell death induction, as well as cytokine and DAMP release, in SARS-CoV-2-infected cells, reinforcing our findings in COVID-19 patients in the ICU [39]. Moreover, phosphorylated and thus activated RIPK1 was detected in pharyngeal epithelial cells of COVID-19 patients, and since respiratory tissues appeared to be a prominent sink for RIPK1 in COVID-19, its interaction with SARS-CoV-2 components is hypothesized [40,41]. However, MLKL and RIPK1 did not contribute significantly to COVID-19 severity and were therefore not included in the final classifier.
In addition to our prolonged study investigating kinetic variations, our single measurements of RIPK3, MLKL, HMGB1, and RIPK1 supported our hypothesis that necroptosis plays a role in COVID-19, as described in our supplemental data. In this cohort, we also observed a loss of viable peripheral leukocytes in every examined cell subpopulation according to disease severity, particularly in patients receiving extracorporeal membrane oxygenation (ECMO); however, given that there were only 6 patients with this treatment, these results should be interpreted carefully.
To our knowledge, we are the first to perform mtROS measurements using flow cytometry in whole blood samples of COVID-19 patients; therefore, there is still a lack of comparative studies. Other studies were carried out on cell cultures treated with plasma from COVID-19 patients or with single viral components (open reading frame 3a (ORF-3a) or the SARS-CoV-2 spike protein), as well as SARS-CoV-2-infected monocytes in vitro and respiratory samples (sputum/bronchoalveolar lavage (BAL)) of COVID-19 patients [42][43][44][45][46]. Nevertheless, it is important to mention these studies, but comparisons should be interpreted with caution. We show in our supplemental data that peripheral leukocytes of our COVID-19 cohort with single measurements had significantly lower levels of mtROS compared to healthy controls.
As patients were admitted to the ICU at different disease stages, data from the first day after ICU admission would provide limited information.
Therefore, we took data from time points when the disease progression was already clear to build our model using plasma RIPK3 and HMGB1 levels, thereby reducing the variability that resulted from admission to the ICU at different COVID-19 stages. In everyday clinical practice, it is often not possible to predict whether a patient is close to or far from death or ICU discharge. The classifier model avoids this problem with RIPK3 and HMGB1 as promising biomarkers in COVID-19.
This study has several limitations. The timing of mechanical ventilation is a subjective outcome. However, differentiation of severity is possible because, once patients have the indication for intubation, spontaneous breathing and non-invasive ventilation are no longer sufficient and a definite state of disease progression has been reached. Since we intended to examine COVID-19 patients over a prolonged period, we decided to consider the requirement for intubation as the distinction between a moderate and severe COVID-19 progression for the study design. We cannot completely exclude an additional impact of the intubation status on plasma levels of RIPK3, MLKL, HMGB1, and RIPK1.
We are aware that measurements from 46 patients must be considered carefully, but given the number of blood samples obtained daily over a longer period of time, the study size is unusually extensive, particularly compared to other single-center studies.
Moreover, to further explore the disease mechanisms indicated by this study, we suggest additional investigations on necroptosis markers, such as studies on other COVID-19 progressions and stages which we could not take into account, e.g., non-hospitalized patients or patients with post-COVID-19 syndrome. Finally, our data need to be confirmed in further longitudinal clinical studies with independent cohorts of COVID-19 patients before implementation in clinical algorithms can be considered.
Conclusion
Our classifier with RIPK3 and HMGB1 as promising biomarkers in COVID-19 could help to timely identify future patients who require more intensive monitoring and would benefit from maximized immunomodulatory therapy after ICU admission [38,51]. This model is simple and more accurate than models that, in addition to RIPK3 and HMGB1 plasma levels, considered inflammatory markers such as CRP, IL-6, PCT, LDH, and peripheral leukocyte count.

Ethics approval and consent to participate

The study was performed in accordance with the Declaration of Helsinki. Approval from the local ethics committee was obtained before the study was conducted (reference #20-643, #20-982), and a waiver regarding the requirement of written informed consent from COVID-19 patients was authorized. All participants of the control group provided written informed consent.
Consent for publication
All authors critically revised and approved the manuscript.
Data and materials availability
All data are available in the main text or the supplementary materials.

Tables

Table 1. Patient demographics of the COVID-19 cohort.
Data are presented as a n (%) for categorical variables or b median (interquartile range) for continuous variables. Patients' laboratory parameters are reported as the respective median of the parameter levels obtained during ICU stay. p-values comparing patients with moderate and severe COVID-19 were calculated with Mann-Whitney U test or Fisher's exact test. Additionally, patients' median laboratory parameter levels were compared to the hospital's central laboratory's threshold levels (CRP: 0.5 mg/dl, IL-6: 7 pg/ml, PCT: 0.5 ng/ml, LDH: 248 U/l, peripheral leukocyte count: 10.41 /nl).
Respective quantities in the pathological range were determined and then compared among patients with severe and moderate COVID-19 by Fisher's exact test.
COPD, chronic obstructive pulmonary disease

Table 2. Classification accuracies (%) and AIC values of the training data from day E to E-3.

Table 3. Classification of the severity status on multiple days on and before the censoring event.
The classification performance of predicting COVID-19 severity using the training and test data consisting of combined RIPK3 and HMGB1 plasma levels on days E to E-3.
CI, confidence interval; Acc, accuracy

Table 4. Confusion matrices of the classification performances on training, test, and validation data with corresponding accuracy, sensitivity, and specificity.

Table 5. Logistic regression parameters of fitting the severity state with RIPK3 and HMGB1 plasma levels.
RIPK3 and HMGB1 plasma levels of the total measurements are included in a logistic regression to calculate the odds of disease-related severity change.
The model parameters of the fit are presented (***p<0.001).
Data are split into severe and moderate as described in our main methods section.

Acc, accuracy
| 2021-11-21T16:08:34.667Z | 2021-11-19T00:00:00.000 | {
"year": 2021,
"sha1": "cb4592589fdfffafdba570a757f6524146df8e94",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1064345/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "cbfbf6ab63a7838e0da1c03839eba5d0b67cf073",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
213062063 | pes2o/s2orc | v3-fos-license | Tertiary Education Reforms in Ghana: Implications for Executive and Regulatory Governance of Public Universities
This article is the result of exploratory research on regulation, corporate governance and developments in the tertiary education sector of Ghana. The materials reviewed included the legislations establishing public universities and oversight agencies and the draft bills for the reforms in the sector. The findings were qualitatively analysed. The governance and regulation of public universities in Ghana is currently going through reforms, reforms that promise sweeping changes not only in the executive governance of the universities and the regulatory agencies but also, fundamentally, in the regulatory governance of the tertiary education sector. The piecemeal individual university Acts will give way to a single universities Act and, consequently, harmonised university statutes. This has implications for the autonomy of public universities in terms of institutional, academic and financial independence. The tertiary education oversight bodies, the National Council for Tertiary Education and the National Accreditation Board, will also be merged into one oversight agency, the Ghana Tertiary Education Commission. This article examines the current executive and regulatory governance of public universities and the oversight agencies and concludes that public universities under the status quo enjoy a high degree of institutional and academic independence. It is however posited that this will be eroded in the emerging regime. It further reveals that the present regulatory governance framework lacks both formal and de facto independence, resulting from lack of financial autonomy and tenure insecurity; consequently, the regime lacks credibility. Conversely, the emerging regime, both in terms of the executive governance of the universities and regulatory governance, is tainted with insecurity of tenure and lack of institutional and financial independence. This will undermine policy and regulatory credibility, with negative implications for academic freedom.
Methodology
The study employed an exploratory literature review approach to the institutional and regulatory governance of public universities in Ghana. The materials reviewed included primary and secondary legislation and draft bills of parliament. The findings were then qualitatively discussed.
Current Executive Governance of Public Universities in Ghana
The need for good corporate governance practices in the management of organisations gained global attention partly because of the corporate scandals that hit the UK in the early 1990s (e.g. Mirror Group Newspapers) and the USA in the early 2000s (e.g. Enron and WorldCom in 2001/2002). Governance refers to the processes which serve as the medium for rule making, interpretation and decision making in any human endeavour (Sweet, 1999). This means that it constitutes the set of constraints in a relationship involving the interplay of authority, direction and control (Zingales, 2000). Therefore, corporate governance is "the system by which companies are directed and controlled" (Cadbury, 1992). Public university governance is thus the system of rules and procedures established in legislation, statutes and policy, the application of which is carried out through standard processes for the achievement of the objectives of the university.
For the achievement of the goals of the company, and in the case of public universities in Ghana, the members of the Governing Councils have the onus of leading the building and maintenance of "successful relationships with a wide range of stakeholders" (Financial Reporting Council, 2018). According to Freeman (1984), a stakeholder is any person or group of persons who are either influenced by the company or able to influence the attainment of organizational goals. Eden and Ackerman (1998), cited in Bryson (2004), identify stakeholders as persons or groups who wield the ability to directly influence the future direction of an organisation. Thus, the quality of the Higher Education Institution's dedication to different stakeholder groups (beyond keeping a list of contacts) underscores the role higher education plays in the society (Jongbloed, 2009). It denotes a conscious effort to involve stakeholders with a view to appreciating their perceptions of the organization's offerings and how these can be improved. According to Burrows (1999), stakeholders in a typical higher education institution include the governing entities, administration, employees, clientele, suppliers, competitors, donors, communities, government regulators, non-governmental regulators, financial intermediaries and joint venture partners. Honu (2008) views the main stakeholders of higher education to be the shareholders, who fundamentally include the chief executives, teaching and non-teaching staff, students, alumni and the regulator included on the governing board. This brings together a 'combination of executive directors, with their intimate knowledge of the business, and of outside, non-executive directors, who can bring a broader view to the company's activities, under a chairman who accepts the duties and responsibilities' (Cadbury, 1992). They collectively constitute the determiners of the strategy for growth and development in the universities (Swedish International Development Cooperation Agency, 2017). Their inclusion on the governing Councils will therefore promote effectiveness and good governance (Hampel, 1998). While this should not be construed to mean the absence of governance failures, it nevertheless is a panacea for the minimisation of the sins of commission or omission, be it in the areas of strategy, performance or oversight (Higgs, 2003).
In the light of this, university governance should involve faculty and other stakeholders in the field of education, both internal and external because "when people are involved in developing a plan, it creates a sense of belonging, ownership and commitment that minimises conflict and increases the willingness of individuals to contribute their talents to accomplish the task" (Honu, 2008).
However, this is a challenge in higher education management in Africa (Jowi, et al., 2013). Universities in Africa are still bedevilled with 'student disturbances, harassment of academic staff and widespread academic corruption' because they operate in weak governance systems (Oaanda, 2016). In search of a solution to the challenge of university governance, the NCTE organised a Board Leadership Development Conference in 2016 at which it was noted that there were 'no prescribed solutions' to the issue but that learning from best practices could be a panacea (Fritteli, 2016). It has therefore become imperative for universities in Africa, and for that matter Ghana, to embrace corporate governance practices and abandon the traditional self-governing and collegial governance system, which is characterised by leadership acquired by merely progressing from 'headship of department, through deanship/directorship to positions such as Pro-Vice-Chancellorship and Vice-Chancellorship', and to allow the inclusion of broader but relevant stakeholders (Effah, 2018). In relation to the foregoing, it is significant to note that the current crop of public universities in Ghana were established at various stages of the country's development in response to the needs of tertiary education at the particular time. The piecemeal approach meant that the focus of each university was dictated by the exigencies at the time of its establishment. This has resulted in the development of unique mandates for these institutions. Because these institutions were established under different political regimes, some under military governments and others under democratically elected governments with different political persuasions, their executive governance and internal organisation, though similar, vary slightly, especially in terms of the composition of their governing Councils. Thus, it is necessary to examine them for an appreciation of their conformity with principles of good corporate governance. This will be indicative of the justification for reforms.
Governing Councils
As incorporated statutory corporations, public universities have governing boards referred to as Councils, which are required to be in tune with principles of good corporate governance for optimal delivery of good public education (Prempeh, 2019). The governance of corporations like the universities is the responsibility of the governing boards (Cadbury, 1992). The role of the government, as one of the stakeholders of these universities, is to appoint the Councils and ensure they operate within an appropriate governance system (Cadbury, 1992).
The Councils generally comprise the stakeholders of tertiary education: representatives of the Senior High Schools, whose products are the raw material of the universities; the alumni, who are the products of the universities and therefore their ambassadors and link to industry; the faculty and staff of the universities; the regulator; and the government, in which the sovereign mandate of the people of Ghana resides.
While the constituents of the Councils are largely constant across all the public universities, there are variations in the number of representatives per stakeholder. For instance, in the area of employee representation, there are discrepancies in senior member level representation: in some universities, Convocation has one representative on the Council, while other universities provide for two Convocation representatives, one from the professorial ranks and the other from the non-professorial ranks. The implication of this arrangement is that employee influence in governance varies across the universities. It is also the case that, most often, the Convocation representation is unrepresentative of its composition. Where it is not explicit that the non-professorial rank representative should be a non-teaching Senior Member, it is the practice that both representatives will be drawn from among the teaching members of Convocation. The non-teaching members, who are mostly the main line administrators, are therefore largely not represented on the Councils.
An important stakeholder of the universities, the cadre of pensioners who have devoted their lifetime energies to the growth and development of these universities, is left out of their governance. Even the University of Ghana, which is the only public university to specify the members of the university to include pensioners (University of Ghana Act, 2010), nevertheless failed to expressly provide for their inclusion on Council. The foregoing deficiencies in the current governing Council compositions notwithstanding, the Council membership spread is in consonance with the principle of good corporate governance which provides that 'the board should include an appropriate combination of executive and non-executive (and in particular, independent non-executive) directors, such that no one individual or small group of individuals dominates the board's decision-making' (Financial Reporting Council, 2018). The challenge, though, has to do with the mode of appointment of the non-executives. This is because in the majority of tertiary education systems, especially in Europe, "the government continues to partly or completely control the appointment of external members" to governing boards, but this is usually "seen as a way for the State to gain greater influence over internal decision-making processes, thus reducing institutional autonomy, or conversely as a practical way to clear potential subsequent hurdles" (Pruvot & Estermann, 2018). In Ghana, of an average of thirteen members on the Councils, the government appointees, who form the largest single group, averaging four persons including the Chairman, are outnumbered by the aggregation of the other stakeholders. This means that no single stakeholder can dominate the Councils in decision-making. Whether this will be sustained in the ongoing reforms is a matter for determination. This contrasts with current trends in higher education: whereas it is recognised that there is "the need to increase the efficiency, save resources and minimise the administrative burden" in tertiary education management, it is also the case that new reforms are offering "greater freedom from the state and, in most cases, goes hand in hand with increased participation of external members in the University governing bodies" (Pruvot & Estermann, 2018).
Authority of the Councils and Internal Organisation
The Councils have formal authority 'to do or provide for any act or thing in relation to the University which the Council considers necessary or expedient' (University of Ghana Act, 2010; University of Professional Studies Act, 2012; University for Development Studies Law, 1992). In this regard, their functions are unfettered: they are generally responsible for the management and administration of the finances and properties of the universities, for determining their strategic direction, and for monitoring and evaluating policy implementation. This allows the Councils to carry out the internal organisation of the universities, acting in most cases on the recommendations of the Academic Board but without recourse to any other authority, except where an action is inconsistent with the Constitution of Ghana or at variance with approved regulatory standards.
Determination of Principal Officers
The determination of who constitutes the principal officers of public universities has implications for cost. More importantly, it has implications for certainty in terms of administrative direction. Intriguingly, whereas in some universities the principal officers are spelt out in the establishing law as the Chancellor, the Chairperson of the Council and the Vice-Chancellor (University of Ghana Act, 2010; University of Professional Studies Act, 2012), in others they are not provided for at all (University for Development Studies Law, 1992). In other universities, including those whose Acts are silent on the matter, the principal officers comprise an extensive list: the Chairman of Council, the Vice-Chancellor, the Pro-Vice-Chancellor, the Registrar, the Finance Director, the Librarian and the Director of Works and Physical Development. This has huge cost implications for the universities involved, as the public purse is used to provide the accompanying privileges for the officers so designated. This must be one of the targets of any reform seeking to reduce administrative cost in university governance.
Appointment of Officers
Under the status quo, the public universities have the independence to appoint and promote their employees without external influence. Appointments of Principal Officers and other officers of professorial rank are approved by the Governing Councils on the recommendation of the respective Appointments and Promotions Boards of the Academic Boards of the universities. All other staff are appointed by the Vice-Chancellor on the authority of Council, on the recommendations of the respective Appointments and Promotions Boards.
Tenure of Council
The tenure of the Councils of the public universities varies from two to three years, subject to reappointment for one further term only. Given that the Councils provide leadership and direction for the universities, such short terms are inimical to policy focus.
Relationship between Council and Academic Board
The Councils determine policy and strategic planning direction. In this regard, the Academic Boards formulate the academic policy of the university for the approval of Council, and also advise the Council on the appointment of academic staff, the admission of students and related matters. There is no third party between the Council and the Academic Board, save constitutional barriers and regulatory norms.
Allowances of Councils
Except in the case of the University of Ghana, whose Council determines its own allowances (University of Cape Coast Law, 1992), allowances for Council members of the other public universities are subject to the approval of the Minister of Education in consultation with his counterpart for Finance. This also applies to public universities whose establishing laws make no reference to the payment of allowances. Notwithstanding these arrangements, all public universities have recently been brought under the omnibus allowance structure for boards and agencies of the government of Ghana. The current dispensation therefore erodes the authority of Governing Councils to determine allowances for the performance of their functions and is a fetter on the independence of Councils. Applying the corporate governance principle on remuneration would allow the Councils to determine their allowances independently, without political interference, in line with the principle that 'a formal and transparent procedure for developing policy on executive remuneration and determining director and senior management remuneration should be established' (Financial Reporting Council, 2018).
Current Regulatory Governance of Public Universities in Ghana
Regulation connotes the control of the conduct of persons, groups or an activity (Ogus, 1994). The control should be suitably focused under the direction of a communal authority for the achievement of the objectives of the community (Baldwin, Cave, & Lodge, 2010). In a similar vein, it refers to the 'promulgation of an authoritative set of rules, accompanied by some mechanism, typically a public agency, for monitoring and promoting compliance with these rules' (Baldwin, Scott, & Hood, 1998). Accordingly, it is the purposive ordering and influencing of activity and participant conduct for the delivery of desired goals (Parker & Braithwaite, 2003; Baldwin & Cave, 1999; Black, 2001; Black, 2002).
Regulatory governance therefore refers to how regulation takes place within a framework of institutions and legal provisions, having regard to 'the independence and accountability of the regulator; the relationship between the regulator and policymakers; the process-formal and informal-by which decisions are made; the transparency of decision making; the predictability of decision making; and the organizational structure and resources of the regulator' (Brown, Stern, Tenenbaum, & Gencer, 2006).
Given that regulatory governance revolves around an institution conveniently known as the regulator, it is imperative to understand the character of any such body. Regulatory bodies are specialised administrative authorities performing their functions independently of the control of government ministries and departments (Majone, 1999). In some instances, such bodies serve as protectors of the public interest, with the mandate of 'setting standards, issuing licenses', among others, in their specialised areas (Thatcher, 2002). Caution is therefore required in classifying these agencies, because some are mere appendages of government ministries and agencies performing only executive and advisory roles (Maggetti, 2009).
In contrast, there are others, classified as independent regulatory agencies (IRAs), which perform specialised functions and derive their authority from the state through legislation passed by the representatives of the people in parliament, but whose members are not politicians and are not hierarchically controlled by politicians. They are, however, not separated from Cabinet influence by constitutional convention (Law Reform Commission of Canada, 1980). Consequently, they are agents and the elected politicians are principals, but this principal-agent relationship is one in which the agent is organisationally separate from the principal while remaining invariably accountable to the legislature and the judges (Thatcher, 2002).
That notwithstanding, "intervention by the Cabinet or the responsible Minister has sometimes appeared to be arbitrary and proved to be worrisome to applicants or other participants in proceedings before regulatory agencies" (Law Reform Commission of Canada, 1980). But "[t]o the extent that there is no Minister actually responsible and accountable before Parliament for the operations of a government agency, one can say that there has been an investiture of power in the agency by the legislature, rather than a mere delegation of authority" (Law Reform Commission of Canada, 1980). In contextualising the foregoing, it is instructive to note the difficulty of giving a one-size-fits-all definition of IRAs, as their character differs according to the particular sector over which they provide oversight (Thatcher, 2002).
In Ghana, there are institutions which provide oversight in some public service areas, known as Independent Constitutional Bodies (ICBs), "including the National Commission for Civic Education (NCCE); the Electoral Commission (EC); the Commission on Human Rights and Administrative Justice (CHRAJ); the National Media Commission (NMC); the Office of the Auditor-General (and the Audit Service); and the Public Services Commission (PSC). Apart from these, there are other institutions of governance, which, while not expressly listed as independent, may yet require a high degree of autonomy to execute their respective constitutional mandates. These are the Bank of Ghana and the Office of the Government Statistician (and the Statistical Service)" (Constitution Review Commission, 2011). The world over, the strength or weakness of the regulatory process is directly related to the strength or weakness of the legal and parliamentary architecture (Stern & Holder, 1999). It is usually jurisdictions with weak legal and parliamentary systems that face regulatory reputational challenges and therefore require robust regulatory processes, because the reputation of a regulator is only as 'sound as their last [regulatory] decision', and 'it can take only one or two partial decisions or government interventions to seriously undermine a good regulatory reputation' (Stern & Holder, 1999).
The tertiary education sector in Ghana currently has two oversight bodies: the National Council for Tertiary Education (NCTE) and the National Accreditation Board (NAB). Public tertiary education institutions are therefore responsible to the Ministry of Education through these bodies (Education Act, 2008). That the NCTE and NAB do not fall under the category of Independent Constitutional Bodies is an early warning that one should not expect to find in them any semblance of autonomy from government and political interference. This section of the article therefore examines the functions of the regulators and the degree of their independence, both from partisan political control and from the very institutions they regulate.
National Council for Tertiary Education (NCTE)
The National Council for Tertiary Education is one of the national oversight bodies for tertiary education. The Council functions as both an executive and an advisory agency to the government. Its functions include advising the Minister of Education on the development and cost implications of tertiary institutions, such as recurrent expenditure and financial emoluments agreed with government, including conditions of service; proposing and monitoring the application of approved standard operating norms in the sector; and providing guidance for linkages between tertiary education institutions and relevant external agencies (National Council for Tertiary Education Act, 1993). The Chairman and members of the Council are appointed by the President in consultation with the Council of State, and their salaries, facilities and privileges are determined by the President in accordance with article 71(1)(d) of the Constitution (National Council for Tertiary Education Act, 1993).
Membership of the Council is made up of a Chairman appointed by the President; two Vice-Chancellors of universities; one representative each from the Polytechnics, the Association of Ghana Industries, the National Development Planning Commission, the Ministry of Finance, the Ministry of Education, the Ghana Academy of Arts and Sciences, and the Ministry of Employment and Social Welfare; the chairpersons of the National Accreditation Board and the National Teacher Training Council; one person with extensive experience in university work; and four other persons, two of whom shall be women (National Council for Tertiary Education Act, 1993). The President is enjoined to appoint persons of high moral character and integrity who have considerable experience and expertise relevant to the advancement of the functions of the Council (National Council for Tertiary Education Act, 1993). In the performance of its functions, the Council decides the procedure for its meetings and is required to meet at least once every three months; at its meetings, all questions proposed are decided by a majority of the votes of the members present and voting, and where the votes are equal, the Chairman or the person presiding has a casting vote (National Council for Tertiary Education Act, 1993).
The tenure of the Chairman of the Council is four years, subject to renewal for another term (National Council for Tertiary Education Act, 1993). The other members who are not ex officio are appointed for three years in the first instance and may be re-appointed for another term, but shall not serve for more than two terms in succession (National Council for Tertiary Education Act, 1993).
The appointment of officers, including the Executive Secretary of the Council, is made by the President on the advice of the Council given in consultation with the Public Services Commission (National Council for Tertiary Education Act, 1993). For such a knowledge- and expertise-based Council, the arrangement should rather be that the President makes these appointments in consultation with the Council but on the advice of the Public Services Commission.
The finances for the operation of the Council are funds annually allocated by Parliament and other funds the Minister of Finance may approve (National Council for Tertiary Education Act, 1993). The Council is required to submit an annual report of its activities to the Minister of Education, who then presents it to Parliament within six months of the end of each financial year; the report must include the Council's audited accounts and the accompanying Auditor-General's report (National Council for Tertiary Education Act, 1993).
The making of legislative regulations for the implementation of any part of the Act is the prerogative of the Minister of Education, albeit on the advice of the Council (National Council for Tertiary Education Act, 1993). While it is clear that the NCTE comprises the relevant stakeholders and is therefore clothed with competence, it nevertheless suffers some setbacks. The Council lacks financial independence, as it depends largely on annual funds approved by Parliament. It also suffers tenure insecurity: membership is part-time and for a short term of three years, one year short of the term of the President. The implication is that members other than the ex officio ones can be changed within the term of a single President. The mere prospect of failing to gain re-appointment is enough to align members to the bidding of the appointing authority. This has tended to affect the functional credibility of the Council, as evidenced by its inability to seize the occasion to resolve governance challenges at the University of Education, Winneba and the Kwame Nkrumah University of Science and Technology, Kumasi.
National Accreditation Board (NAB)
The National Accreditation Board is an agency of the Ministry of Education with the mandate to accredit public and private educational institutions, whether academic or professional, and in so doing to determine programmes and requirements for the assurance of standards in consultation with the institution concerned. It is also the statutory authority in the country for the determination of equivalences of certificates of all levels, whether from institutions in the country or from elsewhere. Additionally, the NAB is the statutory advisor to the President on the grant of a Charter to private tertiary institutions (National Accreditation Board Act, 2007). The Board may, on its own or at the request of an institution, conduct an accreditation exercise on the corporate institution or its programmes (National Accreditation Board Act, 2007). Accreditation exercises are conducted by panels appointed by the Board, each consisting of a chairperson and not more than eight persons whose professional, academic, industrial or commercial competence must be relevant to the particular exercise (National Accreditation Board Act, 2007). In the performance of its functions, the Board has the power to request information from an institution, and the institution must comply by providing the information or granting authorised officers of the Board access to the relevant records or books (National Accreditation Board Act, 2007). It is an offence, attracting a fine or imprisonment or both on summary conviction, to operate an unaccredited institution or programme (National Accreditation Board Act, 2007). The Board may accordingly close down non-conforming institutions, and where the Board incurs costs in enforcing its directives, steps shall be taken to recover them from the institution involved (National Accreditation Board Act, 2007).

The Board is funded through annual allocations approved by Parliament, together with grants, fees and charges, returns on investments as the case may be, and donations or gifts. Reports relating to the functions of the Board and its audited accounts are required to be submitted to the Minister, who then submits them to Parliament (National Accreditation Board Act, 2007). Similarly, whenever regulations are necessary for the proper implementation of the functions of the Board, it makes recommendations to the Minister of Education, who may make the regulations by legislative instrument (National Accreditation Board Act, 2007).

It is clear from the foregoing that the NAB's governing Board is dominated by representatives of the constituent stakeholders in tertiary education and is therefore competently composed for its ascribed functions. Though a statutory corporation, the nature and form of the Board is more self-regulatory, within a shared governance framework. Unlike the NCTE, the NAB has more financial autonomy because it is allowed to generate funds from fees and charges in the discharge of its functions. Yet notwithstanding that the Board generates revenue in addition to the annual allocations by Parliament, it lacks financial independence insofar as its allowances are determined by the Minister of Education.
It is also instructive to note that the loyalty of the members of the Board who are not ex officio will be aligned with the President insofar as they hold an insecure term of three years, albeit with eligibility for reappointment (National Accreditation Board Act, 2007): any such re-appointment will depend on the degree of alignment of the member with the President and, for that matter, the government. This is a dent in the credibility of the decisions of the Board. Additionally, the Board lacks autonomy in the making of regulations, as the making or otherwise of such regulations is the prerogative of the Minister.
It is not surprising, therefore, that accreditation panel members may be compromised with financial incentives and other offers, the consequence of which is a lack of due diligence on the staffing credentials of assessed institutions to ascertain the veracity or otherwise of the latter's claims (Fredua-Kwarteng & Ofosu, 2018). The regulatory credibility of the Board is thus severely strained. Confidence in the process could be restored were there a third-party appeals body for the resolution of grievances from institutions subjected to accreditation assessments (Fredua-Kwarteng & Ofosu, 2018).
It is also important to note a discrepancy in the composition of the boards of the two bodies. Whereas some stakeholders are not represented on one or the other of the two bodies, others find space on both. Indeed, constituting two separate boards with almost the same representatives is a waste of public funds.
Emerging Regime
The foregoing has unearthed challenges in tertiary education delivery in Ghana. They include inconsistency at the executive governance level of the universities and role duplication between the NCTE and NAB, resulting in a waste of regulatory resources as well as regulatory fatigue on the part of the universities. In recognition of these issues, the government is justified in pursuing the Tertiary Education Reforms to provide 'a comprehensive, coherent, well-articulated and holistic policy framework to respond effectively to the needs of the learning public' (Ghanaian Times, 2018). In this regard, two bills have been drafted and are at various stages of validation for Parliamentary action: the Public Universities Bill 2019 (PUB) and the Education Regulatory Bodies Bill 2019 (ERBB). The PUB will merge and consolidate the piecemeal legislation establishing public universities to provide for harmonisation, and the ERBB will merge the NCTE and NAB into a single regulator, the Tertiary Education Commission (TEC). The nature and form of these evolving entities is the subject of the remainder of this article.
Executive Governance: Public Universities Bill 2019
All the traditional public universities in Ghana were established by single Acts of Parliament. This piecemeal legislation has created different, although similar, governance structures in these universities. The PUB, when passed by Parliament in its current form, will replace these Acts and bring under its ambit all the universities, including new ones established afterwards. Consequently, the governance of public universities will see a dramatic change, especially regarding institutional independence, tenure security, financial independence and, ultimately, academic freedom. The degree of autonomy of the governing Council and the security of tenure of principal officers and management personnel will determine the level of independence of the university from external influence in decision making. Similarly, the level of control over the allocation of funds is significant for the assurance of independence. These factors, together with the degree of independence of the university to determine matters of academic growth and development, will provide clarity on the level of academic freedom of the universities.
Composition of Governing Councils
In the light of the foregoing, it is noted that the Bill provides for each university to have a governing body, referred to as the University Council, consisting of nine members: a Chairman and four members, at least one of whom must be a woman, nominated by the President; the Vice-Chancellor; and one representative each of the following stakeholder groups: the registered employee unions (on a rotational basis); Convocation; the students, drawn from among the student unions; and the National Council for Tertiary Education, the last without voting rights. This composition gives the government a majority stake in the event of a vote (five of eight, since the NCTE representative is non-voting), quite apart from the Chairman's casting vote in the event of an equality of votes. Decision making will therefore be driven by the group of government appointees. In addition to this weakness, the degree of influence of students on Council has been deflated. Unlike the current arrangements, which allow undergraduate and postgraduate students to be represented separately so as to bring on board the issues relevant to their constituents, the PUB requires both groups to be represented by one person. Similarly, Convocation, which under the PUB will be represented by one person, was mostly represented by two representatives, one each for the professorial and non-professorial classes. Furthermore, the employee unions (UTAG, GAUA, SSA, FUSSAG and TEWU) will now be represented by one person.
One stakeholder, the second-cycle institutions from whom the universities draw their raw material, the students, has been left out of the new arrangement. Another stakeholder, the alumni, who form the university's army of ambassadors, has been cut out of the governance of the university. Aside from their role of providing feedback on the suitability of the products of the university, the alumni have forged very serious partnerships with the universities, even in the provision of infrastructure.
Remarkably, even for the stakeholders who are provided governance slots, there is a difficulty of single representatives being in a position to articulate the concerns of the broadened stakeholder groups. The implication of this inadequate stakeholder involvement and the dominance of government on Council is that the Councils will become appendages of the government. Worse still, such Councils will lack the ingredients to foster effective stakeholder collaboration for the strategic development of the universities. This contrasts with the aims of the university as indicated in the PUB, to 'promote inclusive, efficient, effective and transparent governance systems and practices and maintenance of public trust' (Public Universities Bill, 2019). It does appear that the government is focused more on the economics of the management of public universities than on their substance. It is also the case that the government wants to take full control of the universities 'as the business owner' (Ghana News Agency, 2019).
Funding
The PUB provides that the Minister of Education shall prescribe the form in which, and the times at which, public universities submit their revenue and expenditure estimates. It also requires a public university to obtain prior approval from the Minister to expend beyond the approved estimates. The harmonisation and cost-control spirit of this provision cannot be discounted. However, it provides a control tool by which the Minister could undermine the direction of the university, however well intended the decisions of Council in that regard may be.
Allowances
Under this regime, the allowances of the members of Council and of committees of Council will be determined by the Minister of Education in consultation with the Minister of Finance (Public Universities Bill, 2019). While this will harmonise the allowances drawn at the various public universities, as is already the case, it will also create a challenge for the Councils regarding the level of allowances to be drawn by ad hoc committees, which may not be committees of Council or even of the Academic Board. This is therefore a fetter on the mandate of the Council to establish committees, especially ad hoc committees, since it will at each point require direction from the Minister on the appropriate allowances to be paid. Governance will therefore be stifled.
Tenure of Council
The current regime is characterised by inconsistent tenure, as the tenure of Councils varies from two to three years; the tenure of principal officers is also varied among the public universities. The PUB provides for these to be harmonised: except for the Vice-Chancellor, all members of the Council shall be appointed for three years and be eligible for reappointment for another three years only (Public Universities Bill, 2019). This is an improvement on the existing Councils of some public universities, where the tenure of members is two years, because it will ensure the governance stability needed for long-term planning. The positive impact is, however, eroded at the outset, since the Bill also provides that "the President may dissolve and reconstitute the Council in cases of emergencies, or appoint an interim Council to operate for a stated period" (Public Universities Bill, 2019). There will be as many opportunities for the President to dissolve the Council as there will be interpretations of what constitutes an emergency in the university.
Officers of the University
Further incursions into security of tenure are demonstrated in the appointment of the officers of the university: the Vice-Chancellor, Pro-Vice-Chancellor, Registrar and Finance Director. Each of these officers shall have an initial three-year term, with eligibility for another term only. This political-style appointment regime will make officers play along with the political party in power, knowing that the Councils, the so-called appointing authorities under the PUB, are government-dominated, in order to secure the renewal of their appointments. More seriously, with three-year terms for both Council and the officers of the university, the propensity for wholesale changes in positions is high and can result in a serious loss of institutional memory, aside from the possibility of a governance crisis due to simultaneous vacancies at Council and executive Management levels.
Academic Freedom
The PUB provides for academic freedom in public universities in elaborate language (Public Universities Bill, 2019). In contrast, the Bill also provides that "the Minister may from time to time give policy directives through the [GTEC] to the University and the University shall comply" (Public Universities Bill, 2019). Furthermore, the Bill gives the Minister deciding authority over the establishment of campuses of the public universities: it provides that the establishment of a campus, after approval by the Governing Council, is subject to the approval of the Minister, whose approval shall in turn be subject to the availability of funds (Public Universities Bill, 2019). In the light of the foregoing, timidity will become the norm among faculty and lead to the suffocation of objective academic discourse. Academic freedom is further subdued under an insecure appointment regime and a lack of both institutional and financial autonomy.
Regulatory Governance: Education Regulatory Bodies Bill 2019
The National Council for Tertiary Education (NCTE) and the National Accreditation Board (NAB) perform complementary roles, with duplication in some instances, which has implications for regulatory cost and efficiency. It is the responsibility of the NCTE to advise the Minister of Education on the development of tertiary education institutions; the financial needs of such institutions; the recommendation of norms and standards relating to staff, costs, accommodation and time utilisation; and matters related to the remuneration and conditions of service of their employees (National Council for Tertiary Education Act, 1993).
The NAB, on the other hand, is equipped to determine the programmes and requirements of the institutions so developed for their proper operation and for the maintenance of acceptable levels of academic or professional standards, among others (National Council for Tertiary Education Act, 1993). The NAB conducts staff audits during the accreditation and re-accreditation of programmes; in between, the NCTE also conducts periodic staff assessments. This dual regulatory oversight results in the spreading of already inadequate regulatory resources, both human and financial. For instance, the assessments of both agencies cover adequacy in terms of the student-staff ratio and the adequacy and relevance of qualifications. Such an exercise could be executed by an individual officer or a unitary committee from either agency. Material and financial resources such as stationery, vehicles and subsistence allowances are applied twice over, on the side of both the regulators and the regulated institutions.
The situation also results in regulatory fatigue on the side of the tertiary institutions, with the tendency to tempt them to explore loopholes in responding to regulatory demands. The government, cognisant of the challenges of the twin regulators, has taken steps to merge them to maximise regulatory resources and eliminate regulatory fatigue (Ghanaweb, 2019). The Education Regulatory Bodies Bill 2019 will therefore establish the Ghana Tertiary Education Commission (GTEC), which will combine the functions of the NAB and the NCTE. Accordingly, it will have advisory, co-ordination, regulatory and accreditation functions.
Governing Board of the Ghana Tertiary Education Commission (GTEC)
The governing Board of the Commission, comprising eleven members, shall be appointed by the President in accordance with Article 70 of the 1992 Constitution of Ghana. The Chairman and two others will be nominated by the President. The other members will be the Director-General of the Commission; the Director-General of the National Development Commission; and one representative each of the following stakeholder groups: the Vice-Chancellors of public universities; the heads of private chartered universities; the National Commission for Technical and Vocational Education and Training; the Office of the Attorney-General, not below the rank of Principal State Attorney; the Ministry of Finance, not below the rank of Director; and the Ministry of Education, not below the rank of Director (Education Regulatory Bodies Bill, 2019). The Association of Professional Bodies and the Association of Ghana Industries will not be part of this all-important Commission, even though they were represented on the NAB and NCTE respectively. How can such a Board deliver education policy for a "knowledge driven economy" when the relevant actors are outside the decision-making frame? The members of the Board are appointed on a part-time basis and are required to meet for the business of the Commission at least every three months or, in the case of an extraordinary meeting, on the request of not less than one-third of the members (Education Regulatory Bodies Bill, 2019).
Tenure of Board
The tenure of members of the Board, other than those who are members by reason of office, shall be four years, renewable for another term only, and the appointment of a member can be revoked by the President through a letter (Education Regulatory Bodies Bill, 2019). This, in addition to the part-time nature of the Commission, will affect the commitment of members. The Commission should have been given a tenure similar to that of the superior courts of judicature and the Independent Constitutional Bodies, to insulate it from political interference and safeguard the sanctity of its regulatory directives and decisions. In its current state, however, coupled with the provision that the Minister may give policy directives to the Board of the Commission (Education Regulatory Bodies Bill, 2019), the Commission is at best a Ministerial agency at the beck and call of government. Insofar as such Ministerial directives will not exclude the regulatory functions of the Commission, it will be a legendary miracle if the Commission escapes political capture.
Funding of the Commission
The funds of the Commission shall include 'a levy of one percent of the internally generated funds of a tertiary institution' as well as funds from the following sources: allocations by Parliament; fees and charges; returns on investments; donations, grants and gifts; and other funds the Minister of Finance may approve (Education Regulatory Bodies Bill, 2019). Unlike the arrangement under the previous regime, this emerging regime promises to ensure financial autonomy for the oversight body, which will strengthen it in terms of regulatory resources.
Appointment of Officers
The part-time nature of the Commission means that its day-to-day operations are under the direction of the Director-General and other officers of the Commission, who shall be appointed by the President in accordance with article 195 of the 1992 Constitution. The terms and conditions of their appointments shall be as stated in their individual letters of appointment (Education Regulatory Bodies Bill, 2019). This is problematic, given that the tenure of the officers will not be secure and the determination of other conditions of service could become a tool for their manipulation by the executive.
Allowances of the Board
Allowances of the Board and its committees shall be determined by the Minister of Education in consultation with the Minister of Finance (Education Regulatory Bodies Bill, 2019). This is inappropriate for a regulatory body. As the adage goes, he who pays the piper calls the tune; in this instance, the Commission will dance to the tune of the Minister, as its financial independence cannot be guaranteed.
Functions
Its advisory functions will include: the establishment and development of tertiary education institutions; their direction and general orientation for the achievement of a diversified and differentiated tertiary education system; financial matters, including the needs, income generation and rates of personnel remuneration of tertiary education institutions; and the recommendation of standards and norms in the areas of governance, finance and academic programmes, among others (Education Regulatory Bodies Bill, 2019).
The GTEC will also serve as a co-ordinating body between external funding agencies and tertiary education institutions, and provide a platform for interaction between academia and industry, and between tertiary education institutions and other levels of education in the country (Education Regulatory Bodies Bill, 2019).
As an accreditation body, it will ensure the maintenance of standards in the following areas: physical infrastructure, governance systems, human resources and financial sustainability, and academic and professional standards. In the performance of this function, the 'Commission shall take appropriate actions including sanctions against tertiary education institutions which act contrary to the norms and standards set by the Commission and the terms and conditions under which accreditation has been granted' (Education Regulatory Bodies Bill, 2019).
In furtherance of its regulatory function, it is mandated to give approval for the establishment of tertiary education institutions and to inspect, monitor and evaluate these institutions for the purpose of compliance. Instructively, the Commission shall regulate the internal organisation of tertiary institutions through "the approval of the establishment of new academic units in tertiary education institutions being mindful of cost-effectiveness and alignment with institutional mission and mandate and national development objective" (Education Regulatory Bodies Bill, 2019). However, it is mandatory for the Commission, in the performance of this regulatory function, to consult the Minister of Education. In addition, at any time in the performance of its functions, the Minister of Education can give directives to the Commission, with which it shall comply (Education Regulatory Bodies Bill, 2019). This contrasts with the approach in the United Kingdom, where Ministerial involvement in regulation takes the form of guidance rather than directives, and even then, Ministerial guidance must have regard for the protection of the institutional autonomy of the regulatee (Higher Education and Research Act 2017 (UK)).
From the foregoing, there is no doubt that the emerging Commission will be saddled with a conflict of mission. While this alone is enough to affect its efficiency, its regulatory function is further undermined by the requirement for it to consult the Minister of Education in the performance of that function. The implication is that the wish of the government will prevail at all times, especially since the establishment of new academic units by the universities must be approved by the Commission in consultation with the Minister.
Conclusions
The objective of this article has been to provide insight into the internal governance of public universities and the regulatory environment within which they operate, and into the emerging changes coming on the heels of reforms in the tertiary education sector of the country. At present, university Councils command credibility, as they ride on the back of wide stakeholder representation without any person or group having dominance in decision making. The analysis also reveals that the executive Management and other officers of public universities have security of tenure. In addition, the internal organisation of public universities is independent of external directives. The current internal governance of public universities thus has a high propensity to promote academic freedom.
In contrast, the emerging internal governance arrangements would eliminate some stakeholder groups from the governing Councils and reduce the strength of representation of others. Furthermore, they would create a power imbalance, with the government having majority representation on Council, contrary to the dictate of good corporate governance that no individual or group should have absolute majority and control of the board. The Council would, under that regime, be subject to Ministerial directives with which it must comply. The security of tenure of senior Management and other staff would also be eroded. Finally, the emerging regime would require a decision as important as the establishment of an academic unit to be subject to the approval of the new regulatory Commission, itself subject to consultation with the Minister of Education. Academic freedom cannot be guaranteed under such a regime. On the regulatory front, the present bodies, the National Council for Tertiary Education and the National Accreditation Board, have been found to be advisory agencies of the Ministry of Education and, for that matter, the Government. Their governing Boards have been found to lack independence, and their regulatory decisions would therefore most likely lack credibility.
The reforms will further dilute the scanty credibility of the regulator by eliminating the professional bodies, which groom the products of these institutions for the job market after formal classroom instruction. This very important stakeholder deserves a place on the Board. The credibility challenge is made more pronounced by the authority given to the Minister of Education to issue directives with which the Commission shall comply.
This article recognises the need for reforms to provide assurance to the taxpayer of the delivery of quality and relevant university education for the socio-economic development of the country. In doing so, however, the autonomy of the very institutions that provide that education should not be degraded to the point where it retains no meaningful form.
Significantly also, cost reduction should not be the overriding focus to the extent that relevant stakeholders are denied space on decision-making platforms. Finally, ministerial guidance rather than directives should be encouraged, to avoid the destruction of academic freedom.
"year": 2019,
"sha1": "0302813c3cacf34db8edef328f54d953278257fc",
"oa_license": null,
"oa_url": "http://www.internationaljournalcorner.com/index.php/theijhss/article/download/147319/103484",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "0302813c3cacf34db8edef328f54d953278257fc",
"s2fieldsofstudy": [
"Education",
"Political Science",
"Law"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Characteristics of quantifiers moderate the framing effect
The attribute framing effect, where people judge a quantity of an item more positively with a positively described attribute (e.g., "75% lean") than its negative, albeit normatively equivalent description (e.g., "25% fat"), is a robust phenomenon, which may be moderated under certain conditions. In this paper, we investigated the moderating effect of the characteristics of the quantifier term: its format (verbal, e.g., "high," or numerical, e.g., "75%") and magnitude (i.e., if it is a small or large quantity) using positive or negative synonyms of attributes (e.g., energy vs. calories). Over five pre-registered studies using a 2 (synonym, between-subjects: positive or negative) × 2 (quantifier format, between-subjects: verbal or numerical) × 2 (quantifier magnitude, within-subjects: small or large) mixed design, we manipulated quantifier format and magnitude orthogonally for synonyms with differing valence. We also tested two mechanisms for the framing effect: whether the effect was mediated by the affect associated with the frame and whether participants inferred the speaker to be positive about the target. We found a framing effect with synonyms that was reversed in direction for the small (vs. large) quantifiers, but not significantly moderated by quantifier format. Both the affect associated with the frame and the inferred level of speaker positivity partially mediated the framing effect, and the level of mediation varied with quantifier magnitude. These results suggest that the magnitude of the quantifier modifies one's evaluation of the frame, and the mechanism for people's evaluations in a framing situation may differ for small and large quantifiers.
Moderating conditions have been identified that magnify or reduce the effect; these are of research interest because they help explain why framing effects occur (Gal & Rucker, 2018).
In a typical attribute framing study, participants judge items with an attribute described with one of two complementary phrases, either positive or negative (Levin et al., 1998). The frames are traditionally constructed using complementary proportions of antonyms (e.g., 25% fat vs. 75% lean) so that the attribute quantity is the same in each frame, making them logically equivalent. The positive frame is consistently evaluated more favorably than its complementary negative equivalent (Donovan & Jalleh, 1999; Kim et al., 2014; Levin & Gaeth, 1988; Seta et al., 2010). Researchers have concluded that the attribute's valence (e.g., fat vs. lean) affects people's judgements and proposed explanations for this effect. The affective encoding account posits that the positively valenced "lean" creates positive affect that leads people to judge the product more favorably compared with the negative affect created by "fat" (Levin & Gaeth, 1988). Alternatively, the pragmatic inference account posits that speakers choose a positive or negative term to convey some implicit information (Sher & McKenzie, 2008). People could infer that there is more of the attribute than a reference point (McKenzie & Nelson, 2003). Listeners can also infer a speaker's viewpoint: for example, the speaker is more positive about the 75% lean meat as opposed to the 25% fat meat (Hilton, 2008; Keren, 2007).
The different explanations of the framing effect have found independent empirical support (e.g., Sher & McKenzie, 2006). These explanations focus on how information about the attribute's valence is encoded or interpreted. More recently, researchers have also sought to understand how this is affected by the magnitude of the quantifier and how it is represented. This paper adds to the limited research on how changing characteristics of the quantifier can produce different perceptions of the overall frame. Specifically, we address how and why a quantifier's format of representation and its magnitude modify the attribute framing effect.
Attribute frames consist of attribute and quantifier
In this paper, we refer to a frame as a sentence that includes a quantity (e.g., a proportion or percentage) of an attribute. The traditional attribute framing paradigm typically considers the quantifier and a positive attribute descriptor as "positive" (e.g., 75% lean), compared with the other "negative" frame that has a complementary quantifier and negative antonym (e.g., 25% fat; Levin & Gaeth, 1988, but see Sher & McKenzie, 2006, for a valence-neutral construction). Explanations such as the affective encoding account posit that people's judgements are consistent with the valence of the attribute (e.g., positive for 75% lean and negative for 25% fat). However, this explanation does not sufficiently take into account the role quantifiers may play in modifying the attributes they are attached to (Kiss & Pafel, 2017). For instance, one would expect the magnitude of a quantifier to affect the overall valence of the framing sentence: an item that is 5% fat would be perceived more positively than an item that is 25% fat.
Several existing studies showed that people identified frames with greater magnitudes of the positive descriptor (e.g., 95% lean vs. 5% fat) to be more positive than those with smaller magnitudes of the positive descriptor (e.g., 75% lean vs. 25% fat; e.g., Liu et al., 2020). In one case, participants even showed a reversal in preference between a 25% and 5% positive descriptor (Janiszewski et al., 2003). These findings suggest that the quantity changes the valence of the framing scenario and therefore one's evaluation of it. However, the consequence for the framing effect is less straightforward to determine because complementary quantifiers, often of differing magnitude, are paired with positive and negative antonyms. Some studies found no difference in the framing effect size (i.e., the difference between evaluations of the positive and negative frames; Jin et al., 2017; Kim et al., 2014; Saad & Gill, 2014): 95% lean is more positive than 75% lean, but the complementary 5% fat is also less negative than 25% fat. Other studies found that 75% lean (vs. 25% fat) displayed a larger framing effect than 95% lean (vs. 5% fat; Liu et al., 2020; Sanford et al., 2002), which suggests that the magnitude of the quantifier may not affect the valence linearly. Beef that is 95% lean may be more positive than beef that is 75% lean, but beef that is 5% fat could be much less negative than beef that is 25% fat. Comparing a positive and negative frame with a large and small quantifier may thus not reveal the full effect of the quantifier magnitude.
In addition to variations in magnitude, quantifiers can also exist in different formats: numerical and verbal. Although attribute framing is predominantly studied with numerical quantifiers, some work has shown that framing is also possible with verbal quantifiers (Liu et al., 2020; Welkenhuysen et al., 2001; see also Reyna & Brainerd, 1991, for an example in risky choice framing). Varying quantifier format between numerical and verbal could moderate the effect of frames on people's evaluations. One would expect that verbal quantifiers might produce a larger attribute framing effect than numerical quantifiers for two reasons. First, compared with numerical quantifiers, verbal quantifiers (specifically, verbal probabilities) are believed to be processed more intuitively than numerical ones (Windschitl & Wells, 1996). For instance, people perceived "1 in 10 chance" to have different event likelihood from the equivalent "10 in 100 chance" when they described it with verbal than numerical quantifiers (Windschitl & Wells, 1996). Framing effects demonstrate a similar variation in judgment based on information presentation and are argued to be fueled by intuition (Tversky & Kahneman, 1986).
Thus, the effect should be more prevalent when frames use verbal quantifiers. Second, verbal quantifiers place more emphasis on the context than numerical ones (Moxey, 2017), which should increase the effect of the attribute's valence. Of the reported studies on verbal versus numerical quantifiers in attribute framing, one study found that attribute frames with verbal quantifiers produced a framing effect when those with numerical quantifiers did not: participants were more likely to prefer a prenatal test when informed that they had a moderate chance of having a baby with cystic fibrosis (negative frame) than a high chance of having a baby without (positive frame); however, they did not show this preference when given a 25% chance of cystic fibrosis versus 75% chance of no cystic fibrosis (Welkenhuysen et al., 2001). The other studies did not find evidence that verbal quantifiers (e.g., moderate fat vs. high lean content) magnified the framing effect compared with numerical quantifiers (Liu et al., 2020).
A challenge for testing the effect of quantifier format lies in selecting the best pair of verbal quantifiers that will make up complementary frames. Verbal quantifiers do not have precise numerical equivalents. One could select semantic opposites such as high versus low, but these may not be how participants would translate the numerical quantifiers. Studies that solicited translations of numerical complements found that people indicated 75% and 25% to be "high" and "moderate," respectively (Liu et al., 2020; Welkenhuysen et al., 2001) versus "very high" and "low" for 95% and 5% (Liu et al., 2020). From this, one can see that the effect of a verbal versus numerical quantifier could be different for the positive than the negative frame. Comparing different quantifier magnitudes for the positive and negative frame may thus mask some of the effects of quantifier format.
Creating a systematic test of quantifier characteristics
A direct, systematic test of how the characteristics of a quantifier moderate the framing effect needs to independently manipulate the frame, quantifier magnitude, and quantifier format. Ideally, one would need to compare the same quantifier in the positive frame as in the negative to isolate the effect of changing the quantifier characteristics.
Such an orthogonal manipulation is not straightforward,1 but it is achievable within the broader typology of framing effects. One way to manipulate quantifier characteristics orthogonally to attribute valence is by using synonyms of an attribute that are either positive or negative. This construction is most commonly used in goal framing, which targets whether an individual is persuaded to adopt or support a behavior by framing the same behavior (called a goal) with synonymous descriptors that are either positive or negative (e.g., Epley et al., 2006; Gamliel, 2013; Krishnamurthy et al., 2001; Levin et al., 1998; see also examples in message framing: Dardis & Shen, 2008). For instance, people were more positive about euthanasia when it was described as "not prolonging life" versus the same, but oppositely valenced "ending life" (Gamliel, 2013).
Goal framing and attribute framing differ on some conceptual and methodological points, although the types can overlap and the typology is not exhaustive (Levin et al., 1998). For example, goal frames target whether one is persuaded by the framing of a behavioral consequence (or goal), as opposed to whether characteristics (attributes) of an item affect one's evaluations of the item in attribute framing (Levin et al., 1998; see also Keren, 2011, for further discussion on framing classifications). Methodologically, attribute frames rely heavily on proportional quantifiers (i.e., percentages) to achieve logical equivalence (i.e., the same amount of the same attribute). In contrast, goal frames can, but do not regularly, invoke quantities of the synonymous descriptions (e.g., describing an equal monetary payout as being "withheld income" vs. "bonus income"; Epley et al., 2006). Attribute frames are also more likely to use antonyms and goal frames synonyms (see Krishnamurthy et al., 2001, for a discussion of why goal framing should use synonyms instead of complementary antonyms to avoid confounds with attribute framing). However, one rare example of synonym use in attribute framing showed that participants were more likely to select the same priced flight framed as including a carbon offset (positive frame) as compared with a carbon tax (negative frame)2 (Hardisty et al., 2010). Despite the synonymous attributes referring to the same value and consequence, people still had different attitudes toward them.
As our objective was to investigate the effect of quantifier characteristics, we took the approach of Hardisty et al. (2010) in order to manipulate quantifier characteristics (format and magnitude) orthogonally. To retain the aspects of attribute framing, we manipulated how attributes of a target were described and assessed participants' evaluations of the targets. We conducted five experiments with three scenarios (food labels: all five experiments; a fish farm and a company: final experiment) to test the framing effect with quantified synonyms, as moderated by quantifier format (i.e., numerical or verbal) and magnitude (e.g., small or large). We also tested whether two mediators (the affect of the frame and how much people inferred a speaker to be positive about a target) would explain the framing effects.
Open science statement
In line with recent scientific guidelines, the methods and analyses for all experiments were pre-registered and can be found on the Open Science Framework along with the materials and data (https://osf.io/zkmy7/).
EXPERIMENT 1
In Experiment 1, we had two pre-registered hypotheses.3 First, people would judge a quantity of the positive synonym of an attribute (energy) as healthier than when it was the same quantity of its negative synonym (calories). Second, the framing effect would be larger for verbal than numerical quantifiers because verbal quantifiers would increase participants' reliance on the affect associated with the attribute.
| Participants
One hundred and ninety participants (62% female; 71% White; ages 18-79 years, M = 32.51, SD = 14.14) were sourced from a university lab database. The sample size was determined a priori based on a stopping rule of reaching either the desired sample size (N = 187) or a specific date. We collected three more participants than planned due to the lab's recruitment process. No analyses were performed prior to the completion of data collection. Participants completed the experiment on Qualtrics at the end of an unrelated 20-min medical survey and were compensated with £8 for the entire study.
| Design
Participants made judgements about food in a 2 (synonym, between-subjects: positive or negative) × 2 (quantifier format, between-subjects: verbal or numerical) × 3 (quantifier magnitude, within-subjects: small, moderate, or large) mixed design. Participants also judged the affect of the synonyms (calories or energy) independently from the quantifier.
Participants read the following vignette about a food product that was labelled in one of the framing scenarios shown in Table 1. The quantifier was given in either verbal or numerical format. The attribute was either energy or calories. Participants read the same vignette with the small, medium, and large quantifier described in Table 1, presented in random order for each participant. Participants rated the healthiness of the food on a 7-point Likert scale (1: very unhealthy, 7: very healthy).
As a measure of affective associations with the attribute, participants next rated the terms "energy" and "calories" individually on a 7-point semantic differential scale with four sets of bipolar adjectives (e.g., bad-good; MacGregor et al., 2000). Participants also judged their affective associations with eight filler nutrients, such as "protein" and "sugar." Scale reliability was excellent, Cronbach's α = 0.95 (energy) and 0.94 (calories). We calculated the mean for the four adjectives on the scale as a measure of affect, with higher scores indicating a more positive affective association.
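For readers who want to mirror this scoring step, a minimal sketch is given below. It assumes a wide-format data frame with one column per bipolar-adjective item; the column names and example ratings are hypothetical, not the study's actual items or data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = participants)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 7-point ratings of "energy" on four bipolar adjective pairs.
energy_items = pd.DataFrame({
    "bad_good":              [6, 7, 5, 6, 7],
    "negative_positive":     [6, 6, 5, 7, 6],
    "unpleasant_pleasant":   [5, 7, 6, 6, 7],
    "unfavorable_favorable": [6, 6, 6, 7, 6],
})

alpha = cronbach_alpha(energy_items)
affect_score = energy_items.mean(axis=1)  # higher = more positive affect
print(f"alpha = {alpha:.2f}")
```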
Finally, participants reported socio-demographic information, motivation for healthy eating (Naughton et al., 2015), how frequently they used nutrition labels, and their estimated weight and height. On average, participants' BMI was in the slightly overweight category (M = 25.38, SD = 7.09), and participants had positive attitudes toward healthy eating (M = 5.14, SD = 1.02 on a 7-point scale, with higher scores indicating more positive attitudes). Sixty-seven percent used nutrition labels frequently.
| Attribute framing effect
Participants exhibited a framing effect with the attribute's synonyms, albeit not always in the predicted direction. As shown in Figure 1, participants' judgements of healthiness were more positive for moderate (medium/40%) and large (high/70%) energy than for these amounts of calories, β moderate-quantifiers = 0.19, p = .007, Cohen's d = 0.38; β large-quantifiers = 0.36, p < .001, Cohen's d = 0.78. However, we found unexpectedly that the framing effect reversed in direction with the small quantifiers (low/20%), with the calorie quantity judged as healthier than the energy quantity, β small-quantifiers = −0.20, p = .006, Cohen's d = −0.41.
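As a reference for the effect sizes reported here, the following is a minimal sketch of a pooled-SD Cohen's d for the between-subjects synonym contrast; the rating vectors are invented for illustration and are not the study's data.

```python
import numpy as np

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Cohen's d for two independent groups, using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Hypothetical healthiness ratings (1-7) in the large-quantifier condition.
energy_ratings = np.array([6, 5, 6, 7, 5, 6, 7, 6])
calorie_ratings = np.array([4, 5, 4, 5, 3, 4, 5, 4])
print(f"d = {cohens_d(energy_ratings, calorie_ratings):.2f}")  # positive: energy judged healthier
```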
| Testing for format moderation of the framing effect
The difference in magnitude of the framing effect between formats was in the expected direction (greater for verbal than numerical quantifiers), but the moderation was not statistically significant for any of the three quantifier magnitudes, β small-quantifiers = 0.12, p = .083; β moderate-quantifiers = 0.02, p = .750; β large-quantifiers = −0.03, p = .642.
| Does affect explain the framing effect?
We first confirmed that participants had more positive affect for the term "energy" than "calories," assessed independently from their quantifiers (M calories = 4.62, SD = 1.51; M energy = 6.00, SD = 1.04), β = 0.47, p < .001, Cohen's d = 1.07. We conducted a planned moderated mediation analysis to assess whether the affect associated with the attribute predicted variations in the magnitude of the framing effect for each of the quantifier magnitudes. This is a conditional process model (illustrated in Figure 2) using the PROCESS macro in SPSS (Model 15, using bias-corrected bootstrap confidence intervals with 5000 samples; Hayes, 2013)4 to estimate the framing effect on healthiness judgment (direct effect), as mediated by affect (indirect effect) and moderated by format. All variables were mean-centered prior to analysis.

TABLE 1  Orthogonal manipulation of quantifier format and attribute valence to construct frames with synonymous attributes across three quantifier magnitudes in Experiment 1

Positive attribute (energy):
  Small:    verbal: "Provides a low % of your daily energy."    numerical: "Provides 20% of your daily energy."
  Moderate: verbal: "Provides a medium % of your daily energy." numerical: "Provides 40% of your daily energy."
  Large:    verbal: "Provides a high % of your daily energy."   numerical: "Provides 70% of your daily energy."

Negative attribute (calories):
  Small:    verbal: "Provides a low % of your daily calories."    numerical: "Provides 20% of your daily calories."
  Moderate: verbal: "Provides a medium % of your daily calories." numerical: "Provides 40% of your daily calories."
  Large:    verbal: "Provides a high % of your daily calories."   numerical: "Provides 70% of your daily calories."
The path labelled b in Figure 2 illustrates the effect of affect on healthiness judgment. Affect predicted healthiness with small and large quantifiers, but not moderate ones, β small-quantities = −0.28, p < .001; β moderate-quantities = 0.12, p = .133; β large-quantities = 0.24, p = .001. This was not significantly moderated by format (all ps > .05, reported in the appendix, Table A1).
The path labelled c′ in Figure 2 illustrates the direct framing effect on healthiness judgment, after controlling for affect. The direct framing effect on healthiness judgements was significant for large quantifiers only, β small-quantifiers = −0.07, p = .392; β moderate-quantifiers = 0.13, p = .096; β large-quantifiers = 0.26, p < .001. This direct effect was not moderated by format (all ps > .05, reported in the appendix).
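To make the conditional process analysis concrete, here is a minimal sketch of a Model 15-style bootstrap in Python rather than the SPSS PROCESS macro the authors used. The column names (x = frame dummy, w = format dummy, m = affect, y = healthiness) are our assumptions, and it uses percentile rather than bias-corrected intervals for brevity.

```python
import numpy as np
import statsmodels.formula.api as smf

def conditional_indirect_effect(df, w_value, n_boot=5000, seed=0):
    """Bootstrap the conditional indirect effect a*(b1 + b3*w) in a
    PROCESS Model 15-style setup, where w moderates the m->y and
    x->y paths but not the x->m path. Expects mean-centered columns
    x (frame), w (format), m (affect), y (healthiness)."""
    rng = np.random.default_rng(seed)
    n = len(df)
    estimates = []
    for _ in range(n_boot):
        sample = df.iloc[rng.integers(0, n, n)]  # resample rows with replacement
        a = smf.ols("m ~ x", data=sample).fit().params["x"]
        outcome = smf.ols("y ~ x + m + w + x:w + m:w", data=sample).fit()
        b1, b3 = outcome.params["m"], outcome.params["m:w"]
        estimates.append(a * (b1 + b3 * w_value))
    lo, hi = np.percentile(estimates, [2.5, 97.5])  # percentile CI
    return float(np.mean(estimates)), (float(lo), float(hi))
```

An interval excluding zero at a given level of w would indicate significant mediation by affect in that format condition.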
| Discussion
In Experiment 1, we found that participants associated "energy" with significantly more positive affect than "calories," which suggested that these synonyms have differing valence. Using these synonyms, we tested whether quantifier format would moderate the attribute framing effect, as explained by the greater role of affect when quantifiers were verbal. We did not find the expected moderation by quantifier format, and the role of affect was complex: affect significantly mediated the framing effect for verbal and not numerical quantifiers only when they were large. Interestingly and unexpectedly, the effect for small quantifiers was opposite in direction from large quantifiers: people judged low calories (negative attribute) as healthier than low energy (positive attribute). In the next set of experiments, we sought to test the effects of quantifier format and magnitude further and investigate two mediators of the framing effect.

FIGURE 1  Effects of attribute (green circles vs. red squares) and format (x-axis) on healthiness judgements across three quantifier magnitudes in Experiment 1. Error bars reflect 95% confidence intervals. The difference between the green circles and red squares shows the magnitude and direction of the framing effect in each condition.

FIGURE 2  Moderated mediation model used in Experiments 1 and 2a, testing whether affect mediates the effect of the attribute on healthiness judgment, as moderated by format. The letters a, b, and c′ indicate the regression coefficients for the individual pathways in the model. Binary variables were dummy coded (calories = 0, numerical = 0). The hypothesis-testing coefficients are presented in the text and the full model coefficients are provided in the appendix (Tables A1 and A2).
| EXPERIMENTS 2a-2b
The goal of the next two experiments was to test again the role of quantifier format as a moderator of the attribute framing effect at different magnitude levels of the quantifier. Additionally, we tested in separate experiments two explanatory variables as mediators of the framing effect: affect (Experiment 2a) and whether participants inferred the speaker to be positive about the food (Experiment 2b). We hypothesized that for large quantifiers, the energy quantity would be judged healthier than the calorie one, but for small quantifiers, the calorie quantity would be judged healthier than the energy one (2a and 2b). We also hypothesized that verbal quantifiers would produce a larger framing effect because, compared with numerical quantifiers, they would increase participants' reliance on the affect associated with the attribute (2a), and that participants would be more likely to infer speaker positivity for verbal quantifiers (2b).5 In these experiments, we implemented an improved design to address a limitation of Experiment 1, where we had used verbal and numerical percentages for energy that are on average psychologically equivalent (Liu et al., 2019). Individual participants may have perceived the verbal quantifiers differently (e.g., one might believe low calories to be less than 20%), which would make the frames inequivalent between formats. To control for this individual variability, we had participants first translate the verbal percentages of energy or calories into a numerical percentage. These translations were later used to solicit participants' healthiness judgements in the numerical condition. We first describe the general method across the experiments, and then the results for each experiment.
| Participants
In Experiment 2a, participants were sourced from a survey panel. … percent of participants used nutrition labels frequently.
| Design
In both experiments, participants gave healthiness judgements in a 2 (synonym, between-subjects: energy or calories) × 2 (quantifier format, between-subjects: verbal or numerical) × 2 (quantifier magnitude, within-subjects: small or large) mixed design. Compared with Experiment 1, we manipulated only small and large quantifiers (not moderate) because moderate quantifiers had a similar pattern of results to large quantifiers in Experiment 1.
In Experiment 2a, participants made affect judgements for either energy or calories (independently of the quantifier). In Experiment 2b, participants inferred a speaker's positivity from each framed sentence (including both quantifier and attribute) using the same mixed design as for the healthiness judgements.
| Materials and procedure
In both experiments, participants first provided numerical percentages for a low % and high % of either calories or energy. 6 These translations were used as the within-subject quantifier magnitudes in the numerical condition. As part of the translation task, participants also provided filler translations of other quantifiers (e.g., translating 25% fat into a verbal equivalent) to avoid our research design being transparent to participants.
To distract participants from focusing on the translations they had provided, participants next completed a filler task similar to one used in Teigen et al. (2014). After this, participants completed the healthiness judgment task from Experiment 1 for the small and large quantifiers (verbal or numerical). For Experiment 2a, we also asked participants how much they would be willing to pay for a cereal bar with that energy (or calorie) description. 7 Participants also completed a measure of the mediator variables of interest. In Experiment 2a, this was the affect associated with the attribute (energy or calories), measured on a 7-point semantic differential scale identical to Experiment 1 (scale reliability was excellent, Cronbach's α energy = 0.90; Cronbach's α calories = 0.90). In Experiment 2b, the mediator variable tested was an inferred speaker's level of positivity. Participants rated how much they agreed with three statements about how positive the speaker felt about the food product: that the participant should buy the food, that the food was healthy, and that the product was good. Participants rated the statements on a 7-point Likert scale (1: strongly disagree, 7: strongly agree). Scale reliability was good, Cronbach's α = 0.89 (small quantifiers) and 0.86 (large quantifiers). We computed an inferred speaker positivity rating from the average of the measures.
Finally, participants provided the same socio-demographic information and measures collected in Experiment 1.
| Experiment 2a
Attribute framing effects and moderation by quantifier format

We found framing effects on healthiness judgements that were opposite in direction for small and large quantifiers, as shown in the top panel of Figure 3. Participants judged large quantities of energy more positively than the same quantities of calories. Quantifier format moderated the framing effect only for small quantifiers: the framing effect was found for verbal but not numerical quantifiers, β = 0.31, p < .001. However, quantifier format did not moderate the framing effect for large quantifiers, β = 0.03, p = .743.
Does affect explain the framing effect?
After accounting for affect, there remained a significant direct framing effect on healthiness judgements, β small-quantifiers = −0.31, p < .001; β large-quantifiers = 0.16, p = .042. This effect was greater for verbal than numerical formats for the small, but not large, quantifiers, β small-quantifiers = 0.29, p = .008; β large-quantifiers = 0.01, p = .898. The conditional indirect effect, testing the mediation by affect of the framing effect on healthiness, showed no significant mediation for small quantifiers, but affect did mediate the framing effect for large quantifiers.
| Experiment 2b
Attribute framing effects and moderation by format

As shown in the middle panel of Figure 3, we replicated the framing effect on healthiness judgements. Participants judged that large quantities of energy were healthier than the equivalent quantity in calories, β = 0.37, p < .001, Cohen's d = 0.80. Small quantifiers reversed the direction of the effect, with participants judging that calorie quantities were healthier than energy quantities, β = −0.35, p < .001, Cohen's d = −0.74. However, we did not find the predicted moderation by quantifier format, β small-quantifiers = 0.10, p = .078; β large-quantifiers = −0.04, p = .520.
Does inferred speaker positivity explain the framing effect?
We assessed participants' inferences about speaker positivity for the full quantified sentence (including the quantifier). These inferences matched the framing effect on healthiness judgements: participants inferred more speaker positivity for large quantities of energy versus calories, and for small quantities of calories versus energy, β small-quantifiers = 0.34, p < .001; β large-quantifiers = −0.34, p < .001. Using the PROCESS macro in SPSS (Model 8; Hayes, 2013), we conducted a planned moderated mediation analysis (illustrated in Figure 4) to assess whether inferences about the speaker's positivity predicted the framing effect for the two quantifier magnitudes. Because participants drew these inferences from the full frame (attribute and quantifier), we assessed this mediator for the entire quantified phrase (e.g., "low calories") instead of for the independent attribute (e.g., "calories").
Therefore, we expected quantifier format to moderate the a path instead of the b path in Figure 4.
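For reference, the Figure 4 model corresponds to the following regression equations (a sketch of the standard Model 8 specification; notation ours), with X the frame dummy, W the format dummy, M inferred speaker positivity, and Y the healthiness judgment:

```latex
\begin{aligned}
M &= a_0 + a_1 X + a_2 W + a_3 (X \times W) + e_M \\
Y &= c_0 + c'_1 X + c'_2 W + c'_3 (X \times W) + b\,M + e_Y
\end{aligned}
```

The conditional indirect effect of the frame at a given format is then (a1 + a3 W) · b, so moderation of the a path propagates to the indirect effect while the b path stays constant across formats.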
| Discussion
Our second set of experiments found mixed support for our prediction that quantifier format would moderate the framing effect. Only for small quantifiers in Experiment 2a did we find a framing effect for verbal but not numerical quantifiers (in line with Welkenhuysen et al., 2001). In contrast, there was a consistent framing effect for large quantifiers, whether verbal or numerical. Critically, the direction of the framing effect was consistent with the affect associated with the attribute for large quantifiers, but reversed from the attribute's affect for small quantifiers. In addition, the mediation analyses showed that affect mediated the framing effect for large, but not small, quantifiers, while inferred speaker positivity mediated the framing effect for both quantifier magnitudes. However, the mediation analyses differed because Experiment 2a tested the affect associated with the attribute separately from the quantifier (e.g., "energy"), while Experiment 2b tested the inferred speaker positivity from the full frame (e.g., "low % energy").

FIGURE 4  Moderated mediation model used in Experiment 2b, testing whether inferred speaker positivity mediates the effect of attribute on healthiness judgements, as moderated by quantifier format. The letters a, b, and c′ indicate the regression coefficients for the individual pathways in the model. Binary variables were dummy coded (calories = 0, numerical = 0). The hypothesis-testing coefficients are presented in the text and the full model coefficients are provided in the appendix (Table A3).
Given that the framing effect was in the opposite direction to the attribute's affect for small quantifiers, it is possible that the quantifier also modified the affect of the attribute, which was not captured by our measure. We sought to address this in Experiment 3 by measuring participants' affective associations with the full quantified phrase (i.e., "low % energy" as opposed to simply "energy").
| EXPERIMENT 3
In Experiment 3, we sought to replicate the framing effect with the synonyms energy and calories, expecting that large energy quantities would be judged healthier than large calorie quantities, and vice versa for the frames with small quantifiers. We also hypothesized that the framing effects for each quantifier magnitude would be mediated by both affect and inferences about speaker positivity. We also tested again the hypothesis that verbal quantifiers would produce a larger framing effect than numerical quantifiers. … Seventy-three percent of participants used nutrition labels frequently.
| Reversal of attribute framing effect between small and large quantifiers
We conducted a planned moderation analysis to formally test for the effect of quantifier magnitude on the framing effect with healthiness judgements. As expected, there was a significant interaction between attribute and quantifier magnitude. Participants judged large quantities of energy more positively than calories, but they judged small quantities of calories more positively than energy, β = 0.52, p < .001.
| How much do affect and inferred speaker positivity explain the framing effect?
The synonym and quantifier in the frame affected participants' levels of affect and inferred speaker positivity. Large quantities of energy elicited more positive affect and inferred speaker positivity than large quantities of calories, and vice versa for small quantities (affect: β small-quantifiers = −0.34, p < .001; β large-quantifiers = 0.44, p < .001; speaker positivity: β small-quantifiers = −0.37, p < .001; β large-quantifiers = 0.41, p < .001). We conducted a parallel mediation model testing affect and inferred speaker positivity as mediators of the framing effect on healthiness judgements using PROCESS for SPSS (Model 4, illustrated in Figure 5; Hayes, 2013).10 The model did not include quantifier format as a moderator since the hypothesis that format would moderate the framing effect on healthiness judgements was not supported. We opted for parallel mediators because the existing theoretical accounts for the explanatory mechanisms of interest had not specified an interactive role between mediators.
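In equation form, the Model 4 parallel mediation amounts to the following system (our notation), with M1 = affect and M2 = inferred speaker positivity:

```latex
\begin{aligned}
M_1 &= a_{01} + a_1 X + e_1, \qquad M_2 = a_{02} + a_2 X + e_2 \\
Y   &= c_0 + c' X + b_1 M_1 + b_2 M_2 + e_Y
\end{aligned}
```

The specific indirect effects are a1 · b1 and a2 · b2, and the total effect decomposes as c = c' + a1 · b1 + a2 · b2; because the mediators enter additively, no interactive role between them is assumed.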
Affect and inferred speaker positivity both significantly predicted healthiness judgements (the b paths in Figure 5).
| Discussion
Across our experiments thus far, we produced a framing effect with frames using the same quantity of two synonyms with different valence, but the direction of the effect was determined by the quantifier magnitude: large quantities of the positive synonym were judged more positively than those of the negative synonym (as expected with a framing effect), but small quantities of the negative synonym were judged more positively than small quantities of the positive synonym.
Regarding the role of format, Experiment 3 did not find that quantifier format significantly moderated the framing effect on healthiness judgment. This was in line with Experiments 1 and 2b but in contrast with Experiment 2a.
The use of synonyms was necessary to evaluate whether the characteristics of quantifiers moderate the effect of framing, as we could isolate effects for small versus large and verbal versus numerical quantifiers. However, our investigation thus far only focused on one set of synonyms (energy vs. calories). We thus sought to replicate and extend our findings from the food scenario to other synonyms in the next experiment.
| EXPERIMENT 4
The goal of Experiment 4 was to test whether the framing effects as moderated by quantifier magnitude and the explanatory variables of affect and inferred speaker positivity would generalize to other synonyms.
To do so, we tested the framing effect for the original energy versus calories pair as well as two different synonym pairs: utilized versus depleted and dismissed versus fired. We pre-tested the synonym pairs among 107 participants to check that they produced a perceived valence difference of at least a medium effect size (i.e., Cohen's d ≥ 0.50).
We hypothesized that the quantifier magnitude would moderate a framing effect between an attribute's positive and negative synonyms, with large quantities of positive synonyms judged more positively than the same negative synonyms, but small quantities of negative synonyms judged more positively than the same positive synonyms. Assuming the framing effects held, we planned to test if verbal quantifiers would have a larger framing effect than numerical ones, and if the framing effect would be mediated by participants' level of affect and inferred speaker positivity. 11
| Participants
We stopped data collection once we had 408 participants (targeted to match the sample of Experiment 3). Participants were 150 psychology undergraduate students, who received course credit, and 258 respondents from Prolific Academic, who received £1 for participation. Before combining the samples for analysis, we confirmed that the two samples did not differ significantly in their responses to the manipulations. Participants were aged 18-71 years (M = 29.05, SD = 10.74), 70% female, and 74% White. Following our pre-registered criterion, we excluded from analyses involving quantifier magnitude those cases where participants perceived the two numerical quantifiers to be verbally equivalent, as there would be no difference between the small and large quantifiers for these cases. This resulted in 7 (1.7%) exclusions for the food scenario, 17 (4.2%) for the fish farm scenario, and 24 (5.9%) for the business scenario. A robustness check of the same analyses including these cases did not change the nature of our results.

FIGURE 5  Parallel mediation model testing whether affect and inferred speaker positivity mediate the effect of the attribute on healthiness judgment. The letters a, b, and c′ indicate the regression coefficients for the respective pathways in the model. Binary variables were dummy coded (calories = 0). The hypothesis-testing coefficients are presented in the text and the full model coefficients are provided in the appendix (Table A4). Note: We tested this model for each quantifier magnitude (small and large) in Experiment 3. In Experiment 4, we added quantifier magnitude as a moderator of the a and c′ paths in this model.
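The pre-registered exclusion rule described above is simple to express in code; a sketch is below, where the data-frame layout and column names are hypothetical:

```python
import pandas as pd

# Hypothetical wide-format data: each row is a participant's verbal
# translations of the small (15%) and large (65%) numerical quantifiers.
translations = pd.DataFrame({
    "participant":      [1, 2, 3, 4],
    "verbal_for_15pct": ["low", "low", "moderate", "low"],
    "verbal_for_65pct": ["high", "high", "moderate", "moderate"],
})

# Exclude cases where both numerical quantifiers map to the same verbal
# quantifier, since no small/large contrast remains for those cases.
equivalent = translations["verbal_for_15pct"] == translations["verbal_for_65pct"]
analysed = translations.loc[~equivalent]
print(f"excluded {int(equivalent.sum())} of {len(translations)} cases")
```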
| Materials
Prior to the experiment, we pre-tested three synonym pairs with 107 participants to assess the difference in their valence: energy versus calories, utilized versus depleted, and dismissed versus fired.
Materials and results for this pre-test are summarized in Table 2, with a full report available as supporting information. All the synonym pairs produced a significant valence difference of d > 0.50. 12
| Procedure
Participants first completed a translation task, followed by a 3- to 5-min distractor task, and then the following tasks presented in random order to each participant: a favorability judgment task, an affective judgment task, and an inferred speaker positivity task. Finally, participants provided socio-demographic information.
Translation task
To extend our findings to numerical-verbal translations, we reversed the translation task of the previous experiments such that participants provided the verbal quantifiers that best expressed the numerical quantifiers of the framed attribute. For example, participants selected which of the verbal quantifiers low, moderate, or high best described "15% energy" (positive small condition) or "65% energy" (positive large condition). We used 15% and 65% because these were the average translations of low % and high % found in the previous four experiments reported here. Participants provided translations for 15% and 65% for all three of the framing scenarios (described in Table 2). Each participant's translations were subsequently used to construct the frames in the verbal quantifier condition.
Framing effect measure: Attitude favorability task
We assessed the framing effect by asking participants to rate their attitude toward the described target (the food, the fish farm, or the business) on a 7-point Likert scale (1: not at all favorable, 7: very favorable). This measure maintained a constant value judgment across the three scenarios and allowed us to test whether the framing effects observed in Experiments 1-3 would extend to a different type of judgment.
Mediator variable measure: Affect association task
For each scenario (featuring both attribute and quantifier), participants completed a semantic differential scale with four bipolar adjective anchors as in Experiment 3. Scale reliability was excellent for all scenarios and quantifiers, Cronbach's α ≥ 0.91. We calculated the mean of the four items as each participant's level of affect associated with the frame.
Mediator variable measure: Inferred speaker positivity task
For each scenario, participants inferred whether the speaker was positive about the target by rating their level of agreement with three statements reflecting the speaker's positive attitude (as in Experiments 2b and 3; for example: "The speaker believes the fish farm's practices are good"). Scale reliability was good for all scenarios and quantifiers, Cronbach's α ≥ 0.89. We calculated the mean of the three items for each participant, with a higher mean score indicating greater agreement that the speaker was positive toward the target.

Note (Table 2): The quantifier was either small or large (e.g., 15% vs. 65%) and verbal or numerical (e.g., a low % vs. 15%). **p < .01. ***p < .001.
| Results
We tested for three hypotheses. First, we expected a framing effect on participants' attitudes about the scenarios that was larger in the verbal than the numerical condition. Second, we expected the framing effect would be opposite in direction for the small versus large quantifiers. Third, we expected that affect and inferred speaker positivity would mediate the framing effect.
| Attribute framing effect and moderation by quantifier characteristics
We found that the synonym significantly affected participants' attitudes, moderated by the quantifier magnitude, in two out of three scenarios (food and fish farm, but not business): β small-food-quantities = −0.22, p < .001; β small-fish-quantities = −0.17, p < .001.
We performed planned moderated mediation analyses on participants' attitudes in the food and fish farm scenarios, using the model illustrated in Figure 5, to which we added quantifier magnitude as a moderator of the framing effects on judgements, affect, and inferred speaker positivity (the c′ and a paths in Figure 5). The analysis used PROCESS for SPSS (Model 8; Hayes, 2013).13 Affect and inferred speaker positivity both significantly predicted participants' attitudes in the food and fish farm scenarios (the b paths in Figure 5).

Thus, overall, Experiment 4 supported our hypotheses that the direction of the framing effect reversed between small and large quantifiers and that affect and inferred speaker positivity mediated the framing effect. The evidence was consistent across two scenarios with different synonym pairs. However, we did not find the expected greater framing effect for verbal than numerical quantifiers.
| GENERAL DISCUSSION
In five pre-registered experiments, three main findings emerged.
First, we consistently replicated a framing effect for the synonyms "energy" and "calories," and showed that it extended to another synonym pair ("utilized" vs. "depleted"), though not to the pair "dismissed" versus "fired." Second, we found that verbal quantifiers did not trigger larger framing effects than numerical quantifiers. Third, we found that the magnitude of quantifiers moderated the direction of the framing effect, with small quantifiers producing framing effects opposite in direction to the valence of the independent attribute. Affect and inferred speaker positivity mediated the framing effect more for large quantifiers than small ones, highlighting that the quantifier makes an important contribution to the framing effect and the mechanisms behind it. Here, we discuss these findings in terms of their implications for psychological mechanisms behind framing and considerations for future framing research.
| Framing effects with attribute synonyms
Our experiments differed from typical attribute framing studies in that we did not use numerical complements and antonyms. We departed from the usual approach as this allowed us to test the effect of quantifier magnitude orthogonally to that of attribute valence. Our approach using synonyms that differed in valence was atypical, but not without precedent: synonyms were used in studies classified under goal framing (e.g., Epley et al., 2006) and attribute framing (e.g., Hardisty et al., 2010). Given our departure from the typical attribute research design, it is important to state the implications of our findings for the attribute framing literature as well as for other forms of framing effect such as goal framing.
Our synonyms depicted the same proportion of the same attribute in both frames. For example, we depicted the same proportion of fish harvested as either 15% utilization or 15% depletion. This is in contrast to the traditional attribute frame, which might depict the scenario as 15% depletion versus 85% conservation. These frames convey the same proportions: in all of them, 15% of the fish are gone and 85% remain. However, psychologically, the frames are perceived differently, producing the framing effect. We found a robust effect with two pairs of synonyms (d = 0.3-0.8), comparable with the antonym frames in the literature. The last pair, which had the smallest valence difference between the less negative "dismissed" and the more negative "fired," did not produce a framing effect. This suggests that synonyms need to show a sufficient valence difference, enough to distinguish the positive and negative frames. This may apply as well to antonym frames, where certain scenarios that are more clearly positive or negative find smaller or no effects (e.g., Liu et al., 2020).
One may argue, however, that although the terms "energy" and "calories" describe an equivalent dietary contribution from the food, the frames do not convey the same meaning to the recipient. People may infer additional information from a speaker's choice of frame: for example, a glass described as half-full is believed to mean that it was empty before (Sher & McKenzie, 2008), or 25% fat may be inferred to be at least that much fat (as opposed to exactly so; Mandel, 2014). As a result, antonym frames would not be informationally equivalent, because one can infer there is more fat (or water) in the negative frame and less in the positive. With synonyms, because the quantities in both frames are the same, these inferences should act in the same direction: the food has at least 15% of the calories (or at least 15% of the energy) you need. However, the synonyms may result in informational non-equivalence through other reasoning about the speaker's choice of frame. One can infer that a speaker used a positive descriptor because they recommend a certain course of action (Hilton, 2008; van Buiten & Keren, 2009): thus, energy provided is meant to be consumed, whereas calories provided are meant to be avoided. These inferences about what a speaker might recommend overlap with the conceptual criteria for goal framing, which has a focus on persuading individuals to support or adopt behaviors (Levin et al., 1998). While our experiments do not strictly align with goal framing in terms of what is framed and the target response, the two types of framing often overlap and are not exclusive. An information leakage perspective on framing effects with synonym frames could thus extend to goal framing, where two goal frames have different meanings for individuals because they reflect a speaker's recommendations. Considering how attribute and goal frames are constructed, using synonyms or antonyms, could provide more insight into the mechanisms shared by the two types of framing and help explain studies with overlapping framing characteristics (e.g., Hardisty et al., 2010; Welkenhuysen et al., 2001).
| Quantifier characteristics and the framing effect
Quantifiers play an important role in modifying one's perception of an attribute (Kiss & Pafel, 2017). Using synonyms of relative positive or negative valence to hold the quantifier constant across frames, we found that the magnitude of quantifiers consistently interacted with the synonym's valence. This is consistent with other research in which people were sensitive to whether a scenario on the whole was positive or negative, rather than just the attribute (e.g., people evaluated "65% didn't fail" as better than "35% didn't pass"). Our experiments showed there was a reversal in judgements of the negative synonym when it had a small versus a large (or medium) quantifier. For example, we observed that judgements of calories tended to shift more depending on whether there were large or small amounts. When assessed on its own, the term "calories" received affect judgements around the center of the scale, so it was actually the less valenced term of the pair, but it was more negative than energy. It may be that negatively valenced terms are more sensitive to modification by quantifier magnitude, which could also explain why we did not find an effect with the fired versus dismissed synonyms, both of which are negative.
Compared with synonym attribute frames, research using antonym frames has not found consistent moderating effects of quantifier magnitude. Some studies found differences in the framing effect size (Liu et al., 2020; Sanford et al., 2002); others did not (Jin et al., 2017; Kim et al., 2014; Saad & Gill, 2014). Past studies often expected that a frame's valence would match the attribute's valence (e.g., any quantity of fat is more negatively valenced than the complementary percentage of "lean"). However, we provided here evidence that with some combinations of quantities, the smaller quantity could reverse the frame's valence, thereby reducing, or possibly even cancelling, the difference in valence perception. For example, a negative frame of 5% fat is in fact positive, raising the question: is it less positive than a 95% lean frame? If it is not, this could explain why the framing effect was smaller or disappeared for more extreme antonym frame combinations (Liu et al., 2020), while studies that use more moderate combinations (e.g., 40% vs. 60%) might not reflect an impact of the quantifier. Our findings highlight the need to check how the valence of the quantified attribute compares with the valence of the descriptor alone. This is important not just for attribute framing, but for other framing types (e.g., goal framing) that may use quantifiers in the frame.
Our experiments also found that quantifier magnitude affected the extent to which affect and inferred speaker positivity explained the framing effect. Both the mediators operated in parallel, but mediation effects were greater for the large quantifiers, especially when mediated by affect. From this, we suggest that quantifier characteristics could also moderate the mechanisms by which people reach a judgment. For instance, affect more consistently predicted judgements for large than small quantifiers (in Experiments 1, 2a, 3, and 4).
However, inferred speaker positivity consistently predicted judgements for both quantifier magnitudes (Experiments 2b, 3, and 4). This could indicate that although both factors contribute to people's judgements, people make more affective judgements with large than small quantifiers. These effects are consistent with both affect-encoding and pragmatic explanations. With large quantifiers, a valence-consistent shift in attitudes could be more prevalent, as the affect associated with the attribute matches the affect for the entire quantified frame. People can also integrate pragmatic inferences about how quantities are usually phrased: in terms of how much there is rather than how little (Clark & Clark, 1977). Thus, small quantifiers act as a pragmatic marker for people to consider other aspects of the information, such as what the speaker might believe. The affective and pragmatic accounts may also relate to each other: if people perceive more positivity from a speaker, that could contribute to positive feelings about the target (Hilton, 2008; Hilton et al., 2005). Conversely, one might also infer that the positive affect they associate with the quantified frame is why a speaker used positively valenced wording to describe the scenario. We do not know yet how the processes might interact, but having determined their contributory roles, we propose that future research examine the interplay of the two. Because our studies relied on mediation analyses, an important limitation is the reliance on correlations between dependent variables (affect, inference, and judgements) that cannot determine whether these variables have a causal effect on any of the others. One way to disentangle this in future work may be to manipulate each variable (e.g., the need to infer speaker positivity) by presenting participants with the speaker's actual opinions, and testing how this affects subsequent judgment.
A hypothesis we did not find support for was that larger framing effects would be found when words were used instead of numbers.
We found this only in one experiment (2a), and we found the opposite in another (4), where the framing effect was greater for numerical than verbal quantifiers; there were no significant moderation effects in the other experiments. Our results suggest there is no difference in framing effect size between verbal and numerical quantifiers, similar to recent work with typical antonym frames by Gamliel and Kreiner (2019) and Liu et al. (2020), but contrasting with the findings of Welkenhuysen et al. (2001). Verbal quantifiers may not be processed more intuitively than numerical ones, at least not to the extent that this would magnify the framing effect.
In our mediation tests, we also found that the quantifier format affected neither participants' affect for the attribute frame nor what they inferred about speaker positivity, meaning quantifier format failed to magnify the affect or pragmatic signal as we had predicted.
We had expected that verbal quantifiers would magnify these signals based on evidence that verbal quantifiers produce more deviation from normative judgment (Windschitl & Wells, 1996), increase attention on the context (Moxey, 2017), and possess more pragmatic signaling value (Teigen & Brun, 1995). A reason why our findings were not as expected may be that many of these studies investigated a specific type of verbal quantifier: probabilities (e.g., pragmatic signaling: Teigen & Brun, 1995; intuitive judgements: Windschitl & Wells, 1996), which suggests that verbal probabilities differ from other verbal quantifiers. Supporting this difference, recent evidence indicates that proportional verbal quantifiers, as used in our study, are similar to numerical quantifiers in the pragmatic focus they place on attributes in a frame (Liu et al., 2020). Another explanation to consider relates to the overlap between attribute and goal framing using antonyms or synonyms. Welkenhuysen et al.'s (2001) study with antonyms (25% chance of disease vs. 75% chance of no disease), which found a greater effect with verbal quantifiers, also targeted participants' support for a medical procedure. Their framing scenario thus overlaps with goal framing, and so it is possible that quantifier format affects goal framing mechanisms rather than attribute framing ones.
| Limitations and future directions
Our five experiments showed that synonyms with different valence produce framing effects comparable with attribute framing with complementary antonyms, and the quantifier (but not its format) modified the frame's valence to produce a framing effect different in direction for small versus large quantifiers. The quantifier's magnitude also determined participants' affect and inferred speaker positivity for the frame, both of which contributed to judgment differences. These findings suggest that first, it is important to check how quantifier magnitude modifies the valence of the overall phrase, as this could affect framing effects even for complementary antonyms. Second, existing accounts and evidence for framing effects could overlap and operate in tandem, so integrating different accounts and typologies could provide a more nuanced view of the mechanisms behind attribute framing.
A limitation of our work is that we did not test more quantifier magnitudes at steadily increasing intervals, because this was not practical with verbal quantifiers, which have less precise increments (Budescu & Wallsten, 1995). We therefore cannot identify whether the effect would shrink and disappear before reversing in direction. Given that medium quantifiers had similar effects to large quantifiers (and that complementary antonyms find an effect using 50%, e.g., full vs. empty; Ingram et al., 2014), the reversal may only occur at a relatively small value. From a pragmatic perspective, one would likely draw more inferences as the informational quantity becomes more ambiguous (Grodner & Sedivy, 2011; Liu et al., 2020). The less clearly positive or negative a set of frames, the more ambiguity there may be in the evaluation, which should produce larger framing effects. Testing the levels of ambiguity of different quantifier magnitudes would be another promising avenue to understand how quantifier magnitude modifies frame valence.
We also see merit in examining whether other posited explanations for attribute framing, such as fuzzy-trace theory (Reyna & Brainerd, 1991), can apply to synonym frames. Our findings do not suggest that participants used the "gist" of the information (e.g., the food has some energy) instead of the verbatim representations (i.e., the exact information: 15% energy) in their judgment. Sensitivity to quantifier magnitude is posited to be evidence of using verbatim representations in antonym frames, and our participants' evaluations consistently changed with the quantifier magnitude. However, because the quantities were identical in both frames, it is possible that people encode the modified valence in the gist of the information as well. We propose that future investigations target these different explanations to better understand how different mechanisms apply to different types of framing effects.
CONFLICT OF INTEREST
The authors report no conflicts of interest.
ETHICAL APPROVAL
Ethical approval for this study was obtained through the University of Essex ethical review committee.
DATA AVAILABILITY STATEMENT
Data and materials from the study are available on the Open Science Framework at https://osf.io/zkmy7.
…measure, perhaps because consumers do not necessarily prefer buying healthier products (Raghunathan et al., 2006).

Note (Tables A1 and A2): The PROCESS macro (Hayes, 2013) was used to generate unstandardized coefficients, 95% bootstrapped confidence intervals (based on 5000 simulations), and p values. Figure 2 illustrates the corresponding pathways for each of the effects in the models. The a path is the direct effect of frame on affect; the b path is the direct effect of affect on healthiness; the c′ path is the direct effect of frame on healthiness after accounting for the mediated pathway.
TABLE A3  Effect of frame and quantifier format on inferred speaker positivity and healthiness judgements in tests of moderated mediation for small and large quantifiers in Experiment 2b. Note: The PROCESS macro (Hayes, 2013) was used to generate unstandardized coefficients, 95% bootstrapped confidence intervals (based on 5000 simulations), and p values. Figure 4 illustrates the corresponding pathways for each of the effects in the model. The a path is the direct effect of frame on inferred speaker positivity, moderated by quantifier format; the b path is the direct effect of inferred speaker positivity on healthiness; the c′ path is the direct effect of frame (moderated by quantifier format) on healthiness after accounting for the mediated pathway.
TABLE A4  Effect of frame on affect, inferred speaker positivity, and healthiness judgements in tests of mediation for small and large quantifiers in Experiment 3. Note: The PROCESS macro (Hayes, 2013) was used to generate unstandardized coefficients, 95% bootstrapped confidence intervals (based on 5000 simulations), and p values. Figure 5 illustrates the corresponding pathways for each of the effects in the model. The a path is the direct effect of frame on the respective mediators (1: affect, 2: inferred speaker positivity); the b path is the direct effect of each of the mediators on healthiness; the c′ path is the direct effect of frame on healthiness after accounting for the mediated pathways.
TABLE A5  Effect of frame and quantifier magnitude on affect, inferred speaker positivity, and favorable attitude in tests of moderated mediation on two scenarios in Experiment 4. Note: The PROCESS macro (Hayes, 2013) was used to generate unstandardized coefficients, 95% bootstrapped confidence intervals (based on 5000 simulations), and p values. This model is similar to that illustrated in Figure 5, but with quantifier magnitude included as a moderator of the a and c′ paths. The a path is the direct moderated effect of frame on the respective mediators (affect and inferred speaker positivity); the b path is the direct effect of each of the mediators on favorable attitudes; the c′ path is the direct moderated effect of frame on favorable attitudes after accounting for the mediated pathways.
"year": 2021,
"sha1": "de0ed2f3038ce2034a075717a2454096a02e49a7",
"oa_license": "CCBY",
"oa_url": "http://repository.essex.ac.uk/30407/1/bdm.2251.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "830aba288dc2bc86ef2024fc23abd3822ce56f17",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Bubble continuous positive airway pressure in the treatment of severe paediatric pneumonia in Malawi: a cost-effectiveness analysis
Objectives: Pneumonia is the largest infectious cause of death in children under 5 years globally, and limited-resource settings bear an overwhelming proportion of this disease burden. Bubble continuous positive airway pressure (bCPAP), an accepted supportive therapy, is often thought of as cost-prohibitive in these settings. We hypothesise that bCPAP is a cost-effective intervention in a limited-resource setting, and this study aims to determine the cost-effectiveness of bCPAP, using Malawi as an example.
Design: Cost-effectiveness analysis.
Setting: District and central hospitals in Malawi.
Participants: Children aged 1 month–5 years with severe pneumonia, as defined by WHO criteria.
Interventions: Using a decision tree analysis, we compared standard of care (including low-flow oxygen and antibiotics) to standard of care plus bCPAP.
Primary and secondary outcome measures: For each treatment arm, we determined the costs, clinical outcomes and averted disability-adjusted life years (DALYs). We assigned input values from a review of the literature, including applicable clinical trials, and calculated an incremental cost-effectiveness ratio (ICER).
Results: In the base case analysis, the cost of bCPAP per patient was $15 per day and $41 per hospitalisation, with an incremental net cost of $64 per pneumonia episode. bCPAP averts 5.0 DALYs per child treated, with an ICER of $12.88 per DALY averted compared with standard of care. In one-way sensitivity analyses, the most influential uncertainties were case fatality rates (ICER range $9–32 per DALY averted). In a multi-way sensitivity analysis, the median ICER was $12.97 per DALY averted (90% CI, $12.77 to $12.99).
Conclusion: bCPAP is a cost-effective intervention for severe paediatric pneumonia in Malawi. These results may be used to inform policy decisions, including support for widespread use of bCPAP in similar settings.
INTRODUCTION
In 2015, over 5.9 million children worldwide died before their fifth birthday; the majority of these deaths were preventable or treatable with simple, inexpensive interventions. 1 The leading infectious cause of death in children under 5 years is pneumonia, accounting for 15% of paediatric deaths worldwide, and resource-limited settings bear a disproportionate share of mortality and disease burden. 2 Pneumonia frequently causes respiratory distress and hypoxia in children, which can lead to respiratory failure and cardiac arrest in severe or untreated cases. The highest case fatality rate (CFR) occurs in children with severe pneumonia (table 1). 3 4 Even a small improvement in the management of pneumonia could result in a significant decrease in childhood morbidity and mortality.
Strengths and limitations of this study
► Only cost-effectiveness analysis evaluating the use of bubble continuous positive airway pressure (bCPAP) for paediatric pneumonia.
► We chose an example low-income country (Malawi) where costing and outcomes data exist.
► In general, we used conservative estimates that would overestimate bCPAP costs and underestimate benefits, and the intervention was still cost-effective.
► Because of extensive sensitivity analyses, we are confident that our results are robust.
► Cost-effectiveness analyses are inherently limited by the data available.
► Most individual inputs are based on a single study, generally with a small sample size.
► The case fatality rates for standard of care and bCPAP came from a randomised controlled trial in Bangladesh and were determined using the proxy of treatment failure rates as opposed to reported mortality rates, given Malawi's more limited resources. The case fatality/treatment failure rates from the Bangladeshi trial are supported by results from prospective cohort studies conducted in Malawi.
► The cost of long-term sequelae is a rough estimate based on the cost of lifelong treatment, which likely overestimates the cost considerably.

Effective bubble continuous positive airway pressure (bCPAP) reduces the need for invasive methods of respiratory support (intubation and mechanical ventilation),5 6 and has been shown to improve clinical outcomes in several resource-limited settings: India, Malawi, Ghana, Vietnam and Bangladesh, to name a few.6-11 However, bCPAP is not universally available despite compelling evidence of its benefits, possibly because it is deemed too expensive for resource-limited settings.
Malawi is a low-income, HIV-endemic country in southern Africa with limited resources and a high burden of disease: 43 000 children under 5 years died in 2012 alone, 2 and pneumonia continues to be the leading cause of childhood death with a 24.3% annual incidence rate 4 and a CFR of 23.1% in children with very severe/severe pneumonia. 3 12 13 Our review of the literature yielded few cost-effectiveness analyses of bCPAP in the treatment of pneumonia in resource-limited settings, and no analyses of bCPAP in severe pneumonia in a paediatric, non-neonatal, population. This study addresses this gap in knowledge with the following aims: (1) to quantify the clinical benefits of bCPAP in the treatment of severe paediatric pneumonia in Malawi as measured by mortality rates and disability-adjusted life years (DALYs), (2) to assess the costs associated with implementation of bCPAP in Malawi, and (3) to determine the incremental cost-effectiveness ratio (ICER) of bCPAP as compared with standard of care.
Methods overview
The focus of this study is children under 5 years, excluding neonates, in Malawi with severe pneumonia by WHO criteria.14 We compared standard of care to standard of care plus bCPAP using a decision tree model (online supplementary figure 1A). The standard of care in Malawi for the treatment of severe paediatric pneumonia includes hospitalisation at a district or central hospital with a dedicated paediatric ward, antibiotic therapy, and oxygen therapy via an oxygen concentrator and nasal cannula in a high-dependency unit.12
Intervention
Treatment for severe paediatric pneumonia ideally includes six elements: provider knowledge to appropriately manage pneumonia; oxygen; antibiotics; non-invasive positive pressure ventilation (such as bCPAP); non-invasive monitoring (continuous pulse oximetry); and nasopharyngeal (NP) suctioning. The first three are part of standard of care in Malawi. For bCPAP delivery, we modelled our analysis on a basic, modified nasal prong and oxygen concentrator model, 15 a bCPAP system previously shown to be effective in treating severe pneumonia in children in resource-limited settings. 7 8 For bCPAP, we also included the costs of provider training, pulse oximetry and NP suction as these are integral to the intervention.
Analytic approach
We took the perspective of a Malawian government hospital, encompassing all (ie, societal) direct medical costs, with a lifelong horizon in terms of morbidity and mortality. The benefit of averted mortality is the discounted average life expectancy, while the cost of long-term sequelae is the discounted cost of lifelong therapy.
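In present-value terms, the discounted cost of lifelong therapy can be written as a standard annuity (our notation, assuming a constant annual therapy cost c, discount rate r, and L remaining years of life):

```latex
PV \;=\; \sum_{t=1}^{L} \frac{c}{(1+r)^{t}} \;=\; c \cdot \frac{1-(1+r)^{-L}}{r}
```

With r = 0.03, even a long horizon contributes a bounded multiple of the annual cost (the factor approaches 1/r ≈ 33 as L grows), which is why distant treatment years add relatively little to the total.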
Inputs and assumptions
Cost inputs came from published values in the literature or vendors (online supplementary table 1A). We identified resources required for bCPAP from prior microcosting studies in Malawi. 12 16 Specific indirect provider training costs are allocated for the implementation of bCPAP and based on published costs associated with the Child Lung Health Programme (CLHP) in Malawi. The CLHP trained providers in the diagnosis and treatment of pneumonia and the use of oxygen therapy. 3 13 The CLHP also supplied oxygen concentrators and essential supplies to 25 paediatric wards around the country. 12 We included the cost of essential capital equipment: an additional oxygen concentrator, pulse oximeter and NP suctioning device. We assumed the oxygen concentrator would be used for bCPAP for 90 days out of the year, and also assumed no additional benefit when not in use for bCPAP. The entire bCPAP system, including the concentrator, reusable components, pulse oximeter, NP suction device and spare parts, has a lifespan of 5 years.
We did not include extra personnel time in the bCPAP intervention as there are limited data on the extra time required, and based on conversations with providers from this setting, we assume it to be minimal. Prior analysts have made the same assumption.16 17 We used activity unit costs and relied on data from WHO-CHOICE (CHOosing Interventions that are Cost-Effective) to determine the average cost per bed day in a public teaching hospital in Malawi.18 In addition to bed-day costs, we included the cost of antibiotics, a chest radiograph and laboratory investigations in the cost of hospitalisation. The range for vendor costs used in sensitivity analysis was set at ±50%.
Survival and sequelae probabilities were determined through review of the literature. CFRs for both bCPAP and standard of care came from a single, randomised controlled trial (RCT) conducted in Bangladesh with three treatment arms: low-flow oxygen, high-flow oxygen and bCPAP. 7 In this RCT, patients who failed low-flow oxygen were then randomised to high-flow oxygen or bCPAP therapy, and those who failed bCPAP or high-flow oxygen were intubated and mechanically ventilated. 7 In Malawi, high-flow oxygen, bCPAP and mechanical ventilation are not routinely available as rescue therapies. For this reason, we chose to use treatment failure rates as a proxy for mortality. When reliable studies were unavailable, educated assumptions were made and noted as estimates. We used the WHO and Global Burden of Disease published disability weights for treated or untreated lower respiratory tract infection (LRTI) for children 19 and accounted for the risk of long-term sequelae in survivors. 20 Complication rates of bCPAP in prior studies have been reported as negligible or non-existent; therefore, we did not include an input for bCPAP-related complications. [21][22][23][24] All costs are reported in US$ adjusted for inflation based on the Consumer Price Index. We discounted health outcomes (death and DALYs) and costs by 3%.
We calculated DALYs following a patient from birth with an average age of onset of severe pneumonia of 1 year 5 and an average life expectancy, if one survives to age 5, of 65.4 years. 25 Long-term sequelae of pneumonia include: restrictive lung disease, obstructive lung disease, bronchiectasis, chronic bronchitis, asthma and abnormal pulmonary function or chronic respiratory disease not otherwise specified. 20 Most of these conditions are chronically controlled with a combination of an inhaled steroid and a β2-agonist. The Global Asthma Network recommends beclomethasone (steroid) and salbutamol (β2-agonist) in resource-limited settings, 26 and both are listed in the Malawian Standard Treatment Guidelines published by the Ministry of Health. 27 We assumed that sequelae are lifelong and non-progressive and that an affected person requires daily medications to control symptoms and prevent acute exacerbations. We used data from resource-limited settings for length of stay (LOS) for pneumonia survivors and non-survivors with bCPAP 1 9 and without it, 3 7 28 as well as for average duration of bCPAP therapy. 6 7

We assigned baseline values and ranges to each health outcome and cost input based on confidence intervals or plausible ranges as determined from review of the literature (table 2). Each input is an estimate based on the best sources available. We performed a series of deterministic one-way sensitivity analyses to test key inputs across the range of input values. Variation in costs associated with bCPAP and their effect on the ICER are shown in figure 2, while variations in the CFRs for standard of care and bCPAP are shown in figure 3.
We ranked inputs in order of effect on the median ICER; the inputs causing the greatest variability were CFRs for standard of care and bCPAP, cost per day for bCPAP, and bCPAP duration. All inputs, including those pertaining to the intervention (CFR for bCPAP, duration of bCPAP, cost of bCPAP per day and one-time costs for bCPAP), influenced the median ICER between $9 and $40 per DALY averted (figure 4). The multi-way probabilistic analysis resulted in a median ICER of $12.97 per DALY averted (90% CI, $12.77 to $12.99; online supplementary figure 2A).
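The structure of the multi-way probabilistic analysis can be sketched as a simple Monte Carlo loop in R; the distributions, ranges and point values below are hypothetical stand-ins, not the study's published inputs:

    set.seed(1)
    n_sim <- 10000

    # Hypothetical uniform draws over plausible input ranges.
    cfr_soc    <- runif(n_sim, 0.20, 0.30)  # case fatality rate, standard of care
    cfr_bcpap  <- runif(n_sim, 0.05, 0.15)  # case fatality rate, bCPAP
    extra_cost <- runif(n_sim, 20, 80)      # incremental cost of bCPAP per child (US$)
    lys_per_death <- 28.5                   # discounted life-years per death averted

    dalys_averted <- (cfr_soc - cfr_bcpap) * lys_per_death
    icer <- extra_cost / dalys_averted      # $ per DALY averted in each simulation

    median(icer)                            # median ICER across simulations
    quantile(icer, c(0.05, 0.95))           # 90% interval

Repeating the model over thousands of joint input draws and summarising the resulting ICER distribution is what yields a median ICER and 90% CI of the kind reported above.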
Discussion
Our base case analysis demonstrated an ICER of $12.88 per DALY averted, which is highly cost-effective by most standards. National immunisation programmes in resource-limited settings cost approximately $7-$438 per DALY averted. 29 Multi-way sensitivity analyses produced a median ICER close to the base case and a narrow CI. The inputs that caused the greatest median ICER variability were CFRs for standard of care and bCPAP, daily bCPAP costs and LOS. LOS directly impacted the cost of hospitalisation and indirectly affected the cost of bCPAP; bCPAP lengthened LOS through increased survival for children who would otherwise have died, which was accounted for in this model. bCPAP therapy would need to extend LOS considerably longer than standard of care to create an unfavourable ICER, and there is no evidence for this in the literature.
CFRs were highly influential in this model. We used treatment failure rates from Chisti et al as a surrogate for mortality. 7 The CFR for standard of care was consistent with data from Malawi reported by Enarson et al, though higher than reported in an observational study by Lazzerini et al (CFR for severe pneumonia by WHO criteria was 21.9%-23.1% and 11.8%, respectively). 3

Our findings are consistent with past studies of similar interventions. In Papua New Guinea, oxygen therapy was cost-effective with an ICER of $50 per DALY averted, 31 and in Malawi, bCPAP was cost-effective for neonates with an ICER of $4.20 per life year gained. 16 The latter study by Chen et al appears more favourable than our results, but there are several notable differences in cost inputs: we accounted for training costs, maintenance costs, the cost of pulse oximetry and the cost of NP suction. When these additional costs are taken into account, our results are consistent with those of Chen et al. 16

There are several limitations to this analysis. Most individual inputs are based on a single study, generally with a small sample size. The CFR for standard of care and bCPAP came from an RCT in Bangladesh 7 ; we chose to use failure rates as a proxy for mortality due to treatment arm crossover and a lack of rescue therapies, namely mechanical ventilation, in Malawi. It is possible that the failure rates overestimate the CFR in both arms; however, the standard of care CFR is supported by results from prospective cohort studies conducted in Malawi, 3 30 though similar corroborating results do not exist for the bCPAP CFR in Malawi. Our sensitivity analyses examined wide ranges for both mortality rates and included rates beyond what is currently published. The cost of long-term sequelae is a rough estimate based on the cost of lifelong treatment with a recommended inhaled steroid and a β2-agonist; however, our estimate likely overstates the cost, as not all patients with sequelae will need or be prescribed therapy, and overall access to affordable medications in Malawi is poor. 32 Extensive sensitivity analyses were performed in an attempt to account for the imprecision in the model, and our finding of excellent cost-effectiveness is robust.
In general, we used conservative estimates that would overestimate bCPAP costs and underestimate benefits. This includes the assumption that bCPAP would be used for 90 days out of the year and only for the treatment of pneumonia. bCPAP is also an effective supportive therapy for sepsis, anaemia, dengue and shock, 11 which are not accounted for in this model. Added use of bCPAP would disperse fixed costs more widely. We modelled the cost of training, but no additional benefit, though skilled providers identify and manage patients more effectively. 33 Much of the overall cost of bCPAP can be attributed to additional hospital costs and, in part, to long-term sequelae due to increased survival. Overall, we believe that bCPAP may be more cost-effective than our model shows.
It is far more meaningful to estimate costs and effectiveness within the local context of disease burden and available resources 34 than to assign an arbitrary cost-effectiveness threshold. This analysis indicates that bCPAP for severe paediatric pneumonia can be life saving and cost-effective in resource-limited settings similar to Malawi. An estimated 95% of all episodes of clinical pneumonia occur in resource-limited settings: if every child under 5 years with severe pneumonia had access to effective bCPAP, the worldwide pneumonia mortality rate would decrease by 33%. 2 7 When considering whether to introduce a new bCPAP device as compared with using an oxygen concentrator, 16 we were concerned about a possible unintended consequence: one oxygen concentrator with tubing can be 'split' to provide low-flow oxygen for up to four children at once. If the concentrator is used instead for bCPAP, which requires higher flow rates, only one patient can receive treatment per concentrator, leaving potentially three other patients without oxygen. We do not recommend that oxygen concentrators be used for bCPAP at the expense of children needing low-flow oxygen, as this would deny children standard of care. This is why we included the cost of an additional oxygen concentrator in our model, though we recognise that this does not completely eliminate this allocation dilemma in settings with an insufficient number of concentrators.
The cost-effectiveness analysis is an analytical tool that adds data, in this instance favourable data, regarding the value of the implementation of interventions in relevant settings (for bCPAP, resource-limited contexts similar to Malawi). Much of the current global health funding is devoted to the introduction of new technologies, as opposed to focusing on wide implementation of already available, effective and inexpensive therapies. We found that the existing bCPAP technology is not only appropriate, but also cost-effective and life saving for the treatment of severe pneumonia in resource-limited settings. Malawi is primed for a nationwide roll out of bCPAP with modest investment from a donor or the Ministry of Health, given the existing equipment, training and infrastructure. bCPAP applicability in other countries will need to be assessed, and implementation tailored to available resources and priorities. The results of this study support widespread implementation of bCPAP in Malawi, and potentially in similar resource-limited settings, which could greatly decrease childhood morbidity and mortality globally.
"year": 2017,
"sha1": "e9aaf06107b412d67567a60e243d68e86935c3c8",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/7/7/e015344.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "3c50133ab87d4b925d7dc1abc1e5d70d4e3e2be5",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comprehensive geriatric assessment pilot of a randomized control study in a Swedish acute hospital: a feasibility study
Background Comprehensive geriatric assessment (CGA) represents an important component of geriatric acute hospital care for frail older people, secured by a multidisciplinary team that addresses the multiple needs of physical health, functional ability, psychological state, cognition and social status. The primary objective of the pilot study was to determine feasibility for recruitment and retention rates. Secondary objectives were to establish proof of principle that CGA has the potential to increase patient safety. Methods The CGA pilot took place at a university hospital in Western Sweden, from March to November 2016, with data analyses in March 2017. Participants were frail people aged 75 and older who required an acute admission to hospital. Participants were recruited and randomized in the emergency room. The intervention group received CGA, with a person-centered multidisciplinary team addressing health, participation, and safety. The control group received usual care. The main objective measured the recruitment procedure and retention rates. Data on secondary objectives were also collected regarding services received on the ward, including discharge plan, care plan meeting and hospital risk assessments for falls, nutrition, decubitus ulcers, and activities of daily living status. Results Participants were recruited from the emergency department over 32 weeks. Thirty participants were approached, 100% (30/30) were included and randomized, and 100% (30/30) met the inclusion criteria. Sixteen participants were included in the intervention and 14 participants were included in the control. At baseline, 100% (16/16) of the intervention and 100% (14/14) of the control group completed the data collection. A positive propensity towards the secondary objectives for the intervention was also evidenced, as this group received more care assessments. There was an average difference between the intervention and control in occupational therapy assessment −0.80 [95% CI −1.06, −0.57], occupational therapy assistive devices −0.73 [95% CI −1.00, −0.47], discharge planning −0.21 [95% CI −0.43, 0.00] and care planning meeting −0.36 [95% CI −1.70, −0.02]. For documented risk assessments, the average differences were: falls −0.94 [95% CI −1.08, −0.08], nutrition −0.87 [95% CI −1.06, −0.67], decubitus ulcers −0.94 [95% CI −1.08, −0.80], and ADL status −0.80 [95% CI −1.04, −0.57]. Conclusion The CGA pilot was feasible, and proof that the intervention increased safety justifies carrying it forward to a large-scale study. Trial registration Clinical Trials ID: NCT02773914. Registered 16 May 2016.
Background
The examination of frail older patients presenting to the hospital is multifaceted, as the acute medical problem combined with other ailments makes it challenging to find the reason behind the problem requiring admission [1]. This population often requires thorough assessment, continuity, and follow-up [2]. Older people have great confidence in the care and competences that they believe hospitals have [3]. Despite this, health care for older people is often fragmented, focusing on single health complaints and not adequately acknowledging their needs and well-being [4]. A prediction exists that a major focus in the future of in-patient acute medical care is going to be the care of older people [5]. As the population ages and grows, admissions to the hospital and emergency department (ED) will also continue to increase [6] for older people with chronic and complex illnesses. Adapting health care services by fine-tuning the tasks and roles of health care professionals is a supported method that reduces adverse risks during hospitalization while addressing the needs of frail older people [7].
Frailty includes weakness, fatigue, weight loss, low physical activity, poor balance, slow gait speed, poor motor processing and impaired cognition [8]. Frailty is an age-related central concept, where the reserves and ability to resist stressors presented to the body's physiological systems are severely limited [9]. Vulnerability in the population with three or more of the frailty factors was found to carry a significantly increased risk of hospitalization, disability, and death [8].
Older people who frequently visit the hospital often have social and health care needs [6], and the majority are unable to perform at least one activity of daily living (ADL), which often goes unidentified [10]. Hospital programs often do not emphasize the need for geriatric skilled services and staff, despite the increase in geriatric patients requiring hospital admission [11]. Historically, hospital care on acute medical wards has been poorly adapted to address the multiple needs of frail older people and is insufficient in providing health care services for problems which could have been readily recognized and treated [12]. This population is frequently exposed to dangers which can exacerbate their loss of function, resulting in unjustified social and health care dependence and even death [3,13]. A focus on the right approach to health care for older people, stressing the empowerment of older people as rights holders, without discrimination and with participation in relation to their health and well-being, is an important part of accountability in health [14].
The World Report on Aging and Health [15] has reported that, for older people, the maintenance of functional ability has the highest importance. Moreover, the World Health Organization (WHO) [15] has identified that meeting the needs of older people must include a transformation of health systems away from disease-based curative models, towards the provision of integrated care. Evidence-based strategies that deliver comprehensive person-centered services to older populations [15] and optimize opportunities for health, participation, and security can improve life as people age [16].
A person-centered method designed for managing the frail population is the Comprehensive Geriatric Assessment (CGA) [7,17-20], which is a coordinated, multidimensional, interdisciplinary diagnostic process used to determine medical, psychological, social, and functional capabilities. The CGA differs from a standard medical evaluation by including diverse domains, with a focus on restoring healthy function and independence where possible [21,22]. By aiding in the development of a tailored treatment and follow-up plan, coordination of care management, and evaluation of long-term care needs, enhanced communication and optimal living conditions [7,13,23] are attainable. Finally, the use of a CGA multidisciplinary team on an acute medical ward could produce significant improvements in outcomes for frail older people, including increased survival, improved functioning, and a decreased need for elderly care facility placement [2,18].
Sweden is a country with mainly publicly funded health care, and despite the availability of highly specialized acute care services, Swedish health care is facing challenges related to access, quality, efficacy, and funding [24]. Services are not adapted to address frail older people requiring hospital admission, as older patients are marginalized, evidenced by a lack of geriatric competence [25], thus limiting their capabilities and freedom [26]. Frail older people need a combination of approaches which are person-centered in practice and can allow for their well-being. By identifying and acknowledging the needs unique to the person, the team can better understand how to assist and recognize what is required for each person to age actively. In spite of this knowledge, studies are lacking and the literature provides little guidance about the practicalities of implementing and evaluating a CGA program in an acute care setting. An urgent need for assessing the development and implementation of such an intervention in acute hospital care that optimizes the care outcomes of frail older people has yet to be realized [27-30].
This Comprehensive Geriatric Assessment study is designed to examine whether frail older people who receive a CGA intervention demonstrate improved outcomes in a variety of areas. Specifically, the study will scrutinize independence in activities of daily living, physical function, self-rated health, life satisfaction, quality of care, and health care consumption compared to the control group. Prior to this exploration, a pilot study was carried out to determine the feasibility of the CGA ward's clinical methods and of the research procedures used in the randomized controlled trial (RCT), before determining if a full-scale study should proceed. This article outlines the original study protocol and explores the feasibility of the pilot implementation of the CGA RCT in a Swedish acute care setting.
Pilot study objectives

1. The primary objective was to examine the feasibility of the research procedures of the RCT by evaluating the study procedures related to the rates of recruitment, consent, randomization, eligibility, and retention. In addition, the feasibility of the data collection form and of the questions in the CGA research assessment tools was determined by observing completion rates, missing data, and the time required to administer the form.
2. The secondary objectives were to examine and identify the CGA ward's clinical methods, evidenced through documentation and chart reviews, by establishing that the intervention participants were assessed in accordance with the CGA domains of functional ability, physical health, psychological state, and social environment. Finally, proof of principle was established for the intervention participants through the surrogate outcomes that they received hospital risk assessments, care assessments, and discharge planning.
Pilot sample size
Justifying the pilot trial sample size so as to decrease the total sample size of the pilot and the main trial together is described as the most appropriate means of sample size calculation, and recognition must be paid to the pilot trial as part of a larger clinical study. Since the purpose of this pilot study was to determine recruitment and retention rates and proof of principle related to patient safety, a single point estimate of 30 representative participants was determined [31]. Furthermore, it is the intention of the researchers to include the pilot data in the main study, if the sampling and methodologies are the same and feasible [32].
Methods
As per the CONSORT extension for reporting randomized pilot and feasibility trials [33], a pilot study should be conducted to determine the feasibility of the study's protocol, research procedures, assessment tools, and clinical methods prior to proceeding with a full-scale RCT. The processes used on the CGA intervention ward were examined for their adherence to the recommendations for evidence-based key objectives for designing pilot and feasibility studies, to ensure the methodological design approach taken in the pilot was robust and feasible [33-35]. In order to test that the randomization worked, the intervention and control groups from the pilot were compared at baseline on the participants' demographics. Furthermore, all baseline data collection questionnaires were reviewed for missing responses, refusals, and data entry anomalies.
Main study protocol
Design and setting for main study protocol

The main study is a two-armed design with participants being randomized into a CGA acute geriatric medical intervention group and control acute medical wards in a university hospital in Western Sweden. Eligible to participate in the CGA study are persons 75 or older who present to the ED and require an acute hospital admission, are screened as frail using the FRESH-screen [36], and are not admitted via fast track (for stroke, coronary infarct, or a hip fracture).
Intervention group
The participants randomized to the intervention group receive CGA, which involves specialized treatment on a geriatric acute medical ward. Assessments are both comprehensive and person-centered, and are provided by a multidisciplinary team to address the frail older people's multiple needs as they relate to physical health, functional ability, psychological state, cognition, and social environmental circumstances [2,7]. The focus of person-centeredness is to consider the social world of the person with regard to their everyday life, relationships with others, and belief system as it relates to personal values, goals, and motivations [37]. The person-centered approach is tailored by the operational CGA team, which consists of a medical doctor, nurse and nurse assistant (NA), occupational therapist (OT), and physical therapist (PT). When appropriate, the team is extended to include a social worker and a nutritionist.
The key task of the CGA team is essentially to optimize the care and well-being of the frail older patients after identifying their multidimensional and rehabilitation needs, while putting forth a discharge plan, in partnership with the frail older person, which includes recommendations for long-term follow-up. The assessments used on the intervention ward to safeguard a comprehensive understanding of all health domains, enabling the frail older person's problems to be identified and coordinated by the multidisciplinary team, are described in Table 1.
Control group
The participants randomized to the control group receive treatment on the acute medical wards. Several of the nursing and PT staff working on the control wards have geriatric competence and training from previous professional experience. Treatments and services such as those provided by OT, PT, social work, and the nutritionist are not automatically included on the acute medical wards; rather, referrals from physicians or nursing are required if a participant needs a consultation, assessment, and/or treatment from these disciplines.
Recruitment, consent, and randomization
Eligible candidates for the study are identified in the ED by the care coordinator (a nurse assistant). This individual is responsible for recruitment and the randomized inclusion, which is possible only if beds are available on both the inclusion and control wards. Prior to inclusion in the study, potential participants are invited to join. They are informed about the study, how it is conducted, what is expected of them, and that participation is voluntary. Information is provided both in writing and verbally. An opportunity to ask questions is offered. If they agree to participate, a consent form is signed by the participant. Following consent, randomization is done by computer-generated numbers and assigned by the case coordinator, with allocation concealment by means of a sequentially numbered opaque sealed envelope (SNOSE). Due to the complexity of the ED, including staff turnover, shifts, and the high pace, it is not viable for an additional person to be assigned to safeguard the randomization.
Study sample size and calculated power for the main study
A power calculation has been created based on the primary outcome variable, dependence in activities of daily living (range 0-9), with an assumed difference between the intervention and control groups of one dependence (i.e., dependent in one more activity of daily living, a clinically relevant difference of importance to the individual as well as the caregiver), and a standard deviation of 2 in both groups. To detect a difference between the intervention and control groups with a two-sided test and with a significance level of α = 0.05 and 80% power, at least 64 participants in each group are needed. To account for loss to follow-up, a total of 156 persons (78 in the control group and 78 in the intervention group) will be included in the study. The power calculation and the assumed loss to follow-up, of 22%, are based on previous research on frail older people [38]. Furthermore, this study intends to pool the pilot and main study data, if the methods, procedures, and data collection remain the same following the pilot study.
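This calculation can be reproduced directly with base R's power.t.test; the multiplicative inflation for the assumed 22% attrition shown below is one convention that recovers the protocol's 78 per group:

    # n per group for detecting a 1-point ADL difference (SD = 2),
    # two-sided alpha = 0.05, power = 0.80.
    pc <- power.t.test(delta = 1, sd = 2, sig.level = 0.05, power = 0.80)
    ceiling(pc$n)      # 64 participants per group

    round(64 * 1.22)   # 78 per group after inflating for 22% loss to follow-up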
Data collection
Once the participant has been transferred to the ward, and prior to the baseline interview, the researcher does a chart review and then completes the data collection with the participants on their respective wards (intervention and control) using the research assessment tools; see Table 2.
The participants are systematically followed up per study protocol in the person's home (or place of discharge) 1 month, 6 months, and 12 months after discharge from hospital. When possible, researchers are blinded during the 1-month data collection for both the intervention and control groups. Furthermore, the participants in the study are also blinded during the RCT and do not know if they are on a control or intervention ward. Finally, the staff is not informed that the patients they are treating are included in the study. The interviews and instruments employed are the same during all phases. Participants are contacted via telephone by the researchers and home visits are scheduled. In cases where participants are unable to complete the study, proxy questions are used, if approved by the participants. All data collected during the RCT are entered into a password-protected database for baseline, 1, 6, and 12 months. The original paper documents are filed in binders and stored in a locked study office.

Key research outcomes, domains, and assessments for the main study

The primary outcome measure is dependence in activities of daily living. Secondary outcome measures are capability, self-determination, physical function, self-rated health, life satisfaction, morbidity, symptoms, depression, cognition, satisfaction with quality of care, health care consumption, formal care, and mortality. More information on the assessments used for the primary and secondary outcome measures can be found in Table 2. Furthermore, the frailty phenotype comprises eight factors, which are assessed separately using instruments and questions addressing weakness: hand dynamometer [39]; vision: KM visual acuity chart at 1 m [40]; gait speed: walking four meters at a speed of 6.7 s or slower [41]; balance: Berg's balance test [42]; cognition: MMS-E [43]; weight loss: a question on whether they have lost weight in the past 3 months; fatigue: a question on whether they have suffered any general fatigue/tiredness over the last 3 months; and physical activity: defined as one to two walks per week or less [44].
Statistical analysis for the main study
Both descriptive and analytical statistics will be used in order to compare groups and to analyze changes over time, using IBM SPSS Statistics for Windows, Version 24.0, 2016, Armonk, NY: IBM Corp. Non-parametric statistics will be used when ordinal data are analyzed; otherwise, parametric statistics will be used. Besides descriptive statistics, the chi-square and Fisher's two-tailed exact tests will be used to test differences in the proportions between the groups. A value of p ≤ 0.05 (two-sided) will be considered significant. Analyses will be made on the basis of the intention-to-treat principle, meaning that participants will be analyzed on the basis of the group to which they were initially randomized. Given the old age of the frail participants, a relatively high drop-out and/or death rate is inevitable. The pattern of 'missingness' is described as non-random, since the likelihood of a missing response is directly related to data that were collected or requested [54]. Simply analyzing complete cases will therefore not be appropriate and might lead to biased conclusions about treatment effects. Therefore, a model addressing imputation of missing data will be employed, which replaces missing values with a value based on the median change of deterioration (MCD) between baseline and follow-up [55]. The reasons for this imputation method are (1) the study sample (frail older people) is expected to deteriorate over time as a natural course of the aging process, and (2) the reasons for not fulfilling the follow-ups are often deteriorated health. Worst-case changes will be used for those who have died before follow-up.
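Although the protocol specifies SPSS, the MCD imputation itself is simple enough to sketch in a few lines; the version below, written in R for illustration, assumes the ADL staircase score (range 0-9) is being imputed, with 9 taken here as the worst case, and all object names are hypothetical:

    # Median change of deterioration (MCD) imputation: a missing follow-up
    # score is replaced by baseline plus the median change among completers.
    impute_mcd <- function(baseline, followup, died, worst = 9) {
      mcd <- median(followup - baseline, na.rm = TRUE)  # typical deterioration
      out <- ifelse(is.na(followup), baseline + mcd, followup)
      out[died] <- worst                                # worst case for deaths
      out
    }

    # Example: the third participant is missing, the fourth died before follow-up.
    impute_mcd(baseline = c(2, 4, 3, 5),
               followup = c(3, 6, NA, NA),
               died     = c(FALSE, FALSE, FALSE, TRUE))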
Statistical analysis for the pilot study
Both descriptive and analytical statistics were used to compare the intervention and control groups. Independent samples t tests were performed using IBM SPSS Statistics for Windows, Version 24.0, 2016, Armonk, NY: IBM Corp, and are reported as mean differences with 95% confidence intervals (CI).
Pilot baseline characteristics
Of the 30 participants asked to participate, all consented. See Fig. 1 for the CONSORT 2010 flow chart showing the inclusion and randomization.
The ages ranged from 77 to 96 and 60% were female, 57% lived alone and 10% had higher education. Forty-three percent had decreased cognitive status and 100% were screened as frail. Additional information related to the participants can be found in Table 3.
Clinical methods
Common elements unique to the CGA practices and the team approach encompassed tailored treatment, a focus on discharge planning and follow-up, and good communication (both written and verbal) with fellow team members, during daily rounds, and with the participants on the ward. On a weekly basis, rounds on the intervention ward comprised two additional team members addressing the comprehensive needs of those requiring nutrition and social work services.
Proof of principle
Chart reviews were performed for all 30 participants, comparing the 16 on the CGA intervention ward with the 14 on the medical control wards, to further examine the "proof of principle". It was found that structured risk assessments were documented to a greater extent in the intervention group, which was statistically significant compared to the control group for addressing and documenting the risk for falls, nutrition, decubitus ulcers, and ADL status. See Table 4. Furthermore, it was noted following the chart reviews that the staff working with the control group occasionally addressed safety, but were not consistent or systematic in how they documented this information in the charts. Despite demographic similarities between the intervention group and the control group (see Table 3), additional chart reviews revealed variability amongst the participants regarding the health care services received on the wards. In part, the pilot study intended to confirm that the intervention ward was practicing and documenting in line with the CGA proof of principle. Following chart reviews of all 30 pilot participants, occupational therapy services were more often received in the intervention group compared with the control, and this difference was statistically significant. See Table 5.
Recruitment, consent, and randomization during pilot study
Inclusion for the study was estimated at three per week, and the pilot was projected to take 10-12 weeks. However, the pilot study took 32 weeks to complete the inclusion. During portions of the pilot, admission to hospital wards had an estimated wait time of up to 48 h, and in some cases, patients were treated and discharged from the ED without ever receiving a bed or reaching a ward. The inclusion process was frequently delayed due to the lack of available beds on the acute medical wards. Additional control wards were opened during the pilot study to increase inclusion rates. One hundred percent of participants were screened for frailty; however, the majority (29/30) were screened after inclusion and randomization, and 100% (30/30) met the FRESH-screen [36] criteria for frailty; see Table 3. In the early stages of the pilot, it was identified that not all participants were screened with the FRESH-screen [36] in the ED prior to inclusion in the study. Furthermore, it was confirmed by ED staff that the FRESH-screen [36] was used more routinely by staff responsible for discharging patients from the ED, but not by those responsible for admitting patients to the medical wards. Understanding and accepting that the tool's use was not yet routine among staff, this problem was rectified by adding the FRESH-screen [36] to the baseline questionnaire used by researchers to confirm that participants met the criteria for frailty and the inclusion criteria for the study.
Consent was 100%, as all participants who were asked to participate agreed before randomization and transfer to a medical ward. Randomization was used to ensure that each arm of the study would be equal, employing simple randomization to balance the treatment groups with respect to the various combinations of prognostic variables.
Retention rates
When the pilot study closed after 32 weeks, four (13%) participants had quit the study, two from the intervention, and two from the control group.
Research data collection
The baseline interviews took between 25 and 160 min, with a median time of 102 min. While it was understood that the clinical assessment tools employed by the researchers were numerous, the comprehensive nature of the baseline assessment was maintained. Despite the wide range in time required to complete the baseline, researchers, when necessary, completed the data collection as piecework, because participants were too tired or too ill, or because higher-priority hospital procedures took precedence. Therefore, data collection often required two, three, and on occasion four visits to the participants to complete the assessments.
Following the pilot, data were examined by performing chart reviews and comparing the data forms with what was entered into the database. Clarifications explaining "incomplete data and refusals" were documented as comments on the baseline forms by the researchers for participants who were too ill, too tired, too occupied with others (i.e., medical personnel, lab tests, X-rays), or who simply refused to participate. On one occasion during the pilot, a participant was discharged prior to the completion of their baseline. The research team discussed this dilemma and, rather than losing the individual from the study, the baseline was completed in the home as soon as possible after discharge. Throughout the pilot, the research team discussed their observations about how the participants acted during the assessment, often describing them as fatiguing quickly, refusing certain tests, and/or not wanting to continue with the study after the baseline or 1-month follow-up. As a remedy, researchers reorganized the order of assessments to stagger the physical tests, allowing for recovery following physically exerting assessments, and at the 1- and 6-month follow-ups the number of assessments and questions was reduced. Furthermore, during the pilot not all participants could speak Swedish, and it was discovered that two tests were culturally biased, as the participant was required to read the Latin alphabet. This was addressed and rectified in subsequent follow-ups using universal symbols (for the KM vision test), and the translator writing out the text in the native language (for the MMSE).

[Table 4: Structured safety risk assessments documented on wards. *Assistive devices entails: needs assessment, training with, and arranging for use on the ward and/or at discharge. **95% confidence intervals (CI). ***Care planning meeting: ward staff, municipality home health services, and patient planning of services required after discharge.]
The study was originally planned with the intention to use proxy questions if participants decided that they could not complete the study; however, proxy questions were never utilized during the pilot. Rather, it was decided that interviews for those with decreased cognitive status could be completed with the support of a next of kin, rather than having incomplete data or losing the participant. Furthermore, during the pilot the research team identified the need for flexibility in the data collection procedures as a means to keep people in the study. It was agreed that a shortened version of the questionnaire would be permitted over the telephone for those not permitting a home visit, and the primary outcome measure for the main study, the ADL staircase, was prioritized. In addition, several secondary outcome measures addressing physical health, the fatigue scale, psychological state, the self-rated health question, and social environment were prioritized, as well as whether there had been changes to home help/home health services. This method was applied on two occasions during the pilot study, once at the first-month follow-up and once at the sixth-month follow-up. The telephone interviews lasted 10 and 8 min, respectively. See Table 6 for a summary of pilot adjustments.
Discussion
As hypothesized, the preliminary evidence from the pilot showed a positive predisposition towards those receiving the intervention provided on the CGA ward. This was confirmed by chart reviews and documentation, which showed that they were receiving health care services with an increased focus on safety and were assessed and treated by a multidisciplinary team, with a statistically significant difference in the inclusion of an OT. Furthermore, they had a greater tendency to receive a care plan and to leave the hospital with a discharge plan compared to the control group. The non-significant difference in PT services on the wards is thought to be attributable to geriatrically competent and trained staff who delivered services to the frail older people regardless of their ward or intervention intent. In this framework, noteworthy attention is warranted to the organization and working procedures on an acute medical ward for geriatrics. Examining the process of screening for frailty, safety, clinical assessments, and the common elements related to communication and planning crucial to the CGA team intervention is essential. The process of working comprehensively to assess all health domains, regardless of the reason for admission to the acute medical ward [7], together with the increasing interest in optimizing strategies for delivering comprehensive person-centered health care services to the older population, makes this approach viable.
The pilot study provided an indication of the rates of recruitment, refusal, eligibility, and retention, and of the sample size which should be expected in a full-scale RCT. Despite the pilot's success, a noteworthy limitation was the slow inclusion rate, predominantly complicated by the reduction in the number of beds at the hospital [56], budget cutbacks [24,57], and a systemic shortage of nursing staff [58], which decreased the possibility to randomize people eligible for inclusion. While the pilot study highlights this weakness, the organizational and operational malfunction of the hospital could not be controlled or adjusted by the researchers. Politicians and hospital administrators have, through negotiations, addressed and attempted to resolve this predicament stymieing the hospital's organization. Efforts to amend the overcrowding in the ED were addressed by making more beds available on the hospital wards. The research team was proactive and added additional control wards, increasing the likelihood of randomization and expediting the inclusion rate. Although the issue remains unresolved, it was evaluated following the pilot, and a new estimate of two inclusions per week has been deemed feasible for this RCT going forward. Furthermore, several limitations were discovered in the pilot related to the screening, recruitment, and randomization procedures. Specifically, the majority of FRESH-screens (securing that participants were frail) were done by researchers after inclusion and randomization. The pilot was designed so that people (75 or older) would be screened for frailty when presenting to the ED [10,15,36,59]; however, this proved not to be the case in current practice procedures. Fortunately, all of the RCT participants later screened met the criteria as frail, but this was a concern during the pilot. To safeguard the FRESH-screen frailty inclusion criterion, screening was added during the pilot to the researchers' protocol prior to the baseline data collection. Furthermore, a limitation might be that the pilot study results were not used to calculate the sample size for the main study. However, the pilot study's aim was not to assess the ADL outcome, but rather to ensure randomization and retention and their surrogates regarding risk assessments and safety.
Another limitation identified in the pilot was related to the recruitment situation, as the responsibility to minimize bias was identified. Specifically, the person with access to the SNOSE should be distinct from those recruiting the participants into the study [60]. However, an exception to this rule, as in the case of this pilot's procedure for recruitment and randomization, which is centered on emergency medicine and the emergency department, can be deemed feasible when employing the SNOSE [60]. Initial estimates of the demographics obtained in this pilot study were confirmed during the randomization process, and the retention rates are within the forecasted range set for the full RCT.
Regarding the assessment, it was observed that completing the baseline was time-consuming due to the comprehensive nature of this study. However, researchers made adaptations by reorganizing the sequence of questions, prioritizing by domain, allowing for rest between physically exerting tests, and completing the data collection as piecework if necessary. The baseline questions themselves remained unchanged. Consequently, the 1- and 6-month follow-up forms were amended, reducing the number of clinical assessments/questions, to better accommodate the frail older participants. The 12-month follow-up, however, was unchanged and mirrors the baseline, with measures in place for missing data using MCD if necessary.
Finally, when examining and identifying the CGA ward's clinical methods, proof of principle was demonstrated with good confidence by the practices on the intervention ward. With regard to the research procedures, in determining feasibility it was found that the pilot study data collection had incomplete data and refusals. Many of these issues were explained by the ill and/or frail state of the participants, as assessments were omitted or not completed in their entirety during the data collection. Other issues related to the procedures are clarified in Table 6.
Conclusions
The pilot has proven to be feasible in its methods, procedures, and data collection processes, and is thus deemed secure to carry forward to a large-scale study. The proof of principle supporting the CGA intervention was statistically significantly associated with risk assessments and occupational therapy services, with a positive tendency towards receiving a care planning meeting and a discharge plan compared to the control group. The pilot data are considered valid and will be pooled with the large-scale study data. This CGA randomized controlled trial is a precursor which may identify a gap in Swedish acute medical wards' treatment of the frail older population. Identifying and measuring what matters, and sharing the information in a comprehensive manner, supports health care workers' competence and behavior towards people and helps to build trust. Furthermore, the values and goals unique to the frail older person are the basis for the approach securing active aging. Finally, by using the CGA model, opportunities for health, participation, and security are optimized, enhancing life as people age and focusing on what people themselves value. Refining and transforming the health system's methods, standards, and policies away from disease-based curative models towards the establishment of person-centered integrated care, as displayed in the pilot, could have significant implications for the future of frail older people's health care.
"year": 2018,
"sha1": "443baa30f443d8f73b71c085940a95606d2e1aec",
"oa_license": "CCBY",
"oa_url": "https://pilotfeasibilitystudies.biomedcentral.com/track/pdf/10.1186/s40814-018-0228-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "443baa30f443d8f73b71c085940a95606d2e1aec",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Age, quality of life and mental well-being in adolescent population: a network model tree analysis
This study presents the results of a network-based analysis of health-related quality of life (HRQoL) among Slovenian adolescents. The study aimed to examine the relationship between HRQoL and mental well-being among adolescents of different age and gender groups. A cross-sectional study was conducted from November 2019 to January 2020 in 16 primary and 9 secondary schools in Slovenia. The KIDSCREEN-27 scale was used to collect the data on HRQoL, and the Warwick–Edinburgh Mental Well-being Scale to collect data on mental well-being. We used network model trees to demonstrate differences in the psychometric network structure, measuring correlations between different concepts in adolescent HRQoL. A total of 2972 students aged 10–19 years participated in the study. A significant split in the network tree (p < 0.001) indicated differences in the relations between HRQoL subscale scores and the mental well-being score among adolescents younger than 12 years old. In comparison to older adolescents, the correlation between mental well-being and mood scores was significantly weaker in this group of the youngest participants (p < 0.001). A network model tree analysis also uncovered an interesting pattern based on gender and age (p < 0.013), where the correlation between mood and family support became weaker for females at the age of 12 and for males at the age of 16. Data mining techniques have recently been used by healthcare researchers and professionals. Network-based analysis is an innovative alternative to classical approaches in HRQoL research. In this study, we demonstrate the significant differences in the perceptions of HRQoL and mental well-being among adolescents in different age and gender groups that were discovered using tree-based network analysis.
Settings and participants
The main criterion for inclusion in the study was adolescent age between 10 and 19. The population was selected based on a systematic review, analysis, and synthesis of the literature. The research involved primary school (from the 5th to the 9th grade) and secondary school students (from the 1st to the 4th grade). Students who are under 10 years old or above 19 years old, and those who are not included in the education system, were excluded.
According to Ref. 21, there are 454 elementary and 182 secondary schools in Slovenia. Random sampling was used, based on which the expected sample included all available students from 22 primary schools (5.0% of all elementary schools) and 12 secondary schools (5.0% of all secondary schools). Random sampling was chosen so that student characteristics would represent the wider student population. The minimum number of participants was determined by the size of the total population of students, the degree of confidence, and the margin of error 22, which amounted to 384 students. The sample size was increased to avoid the risk of attrition and dropout during the study, and to enable generalization of the findings. Survey questionnaires were distributed to 3860 students in primary and 3107 students from secondary schools who consented to participate, in the classroom where the classes took place. A total of 2972 students returned completed questionnaires and participated in the study (Table 1).
Instruments
The KIDSCREEN-27 measures physical well-being, psychological well-being, autonomy and parental relations, social support and peers, and the school environment. The questionnaire was developed under the KIDSCREEN-27 project 23,24. It consists of five segments: physical activity and health; general well-being and emotions about oneself; family and leisure; friends; and school and learning. Each item was scored on a five-point Likert scale ranging from 1, meaning "not at all", to 5, meaning "very much". The KIDSCREEN-27 was validated by the authors of this study in a pilot study among adolescents attending primary and secondary schools in Slovenia, using a six-step analysis of the psychometric properties of the scale 25.
The Warwick-Edinburgh Mental Wellbeing Scale (WEMWBS) questionnaire was developed in Scotland in 2006. The questionnaire includes 14 items and measures positive mental health and mental well-being over the past two weeks. Answers are given on a five-point Likert scale ranging from "none of the time" to "all of the time". A translated questionnaire was used 26. The sum of all answers gives a total WEMWBS score, which can be interpreted as poor (scores between 14 and 41), moderate (scores between 42 and 59), or excellent (scores above 60). The minimum possible score is 14 and the maximum is 70 27. The questionnaire was validated in a pilot study following the above-mentioned six-step analysis by Dima 25. The Slovenian version of the WEMWBS achieved good validity and reliability in a sample of nursing students and can be recommended for future usage 28.
The reliability of both instruments was assessed on the item level using Cronbach alpha measurements. Cronbach alpha levels for KIDSCREEN-27 subscales ranged from 0.778 to 0.869, with WEMWBS reaching a Cronbach alpha of 0.899. Tables 2 and 3 present the Cronbach alpha values for scenarios of a single dropped item, including mean and standard deviation values for all items in the KIDSCREEN-27 and WEMWBS questionnaires.
Item-level alpha values for the scenario of dropping a single item in the KIDSCREEN-27 ranged from 0.717 to 0.866. There were no cases where removal of an item would improve the overall Cronbach alpha value of the subscale (Table 2). In the case of the WEMWBS scale, the item-level alpha values when removing a single item ranged from 0.886 to 0.900. There was only one item whose removal would increase the overall alpha, and only by 0.001 (Table 3).
Data analysis
Network model trees 29 were used to demonstrate differences in psychometric network structure, measuring correlations between different concepts in adolescent HRQoL. Initially, all participants with more than 50% of missing data were removed from the dataset. This step was followed by data imputation in the remaining sample using the MissForest approach 30. MissForest is an R package that implements the random forest imputation approach for missing data. MissForest first imputes all missing data using the mean/mode, and then fits a random forest on the present values and predicts the missing values for each variable with missing values. In this study, networks were built by splitting the sample into groups of participants by their gender and age to detect significant differences in the network structure, allowing a high level of model interpretability. All data analysis, including visualization of the results, was conducted using the R statistical programming language 31. The Shapiro-Wilk test was used to test the data for normality of distribution. Based on the results from the test of normality, we used the non-parametric Mann-Whitney test to check the statistical significance of the differences.
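A condensed version of this pipeline might look as follows; the data frame and column names are hypothetical, and the calls to missForest and networktree follow those packages' documented interfaces as we understand them:

    library(missForest)   # random forest imputation
    library(networktree)  # model-based recursive partitioning of networks

    score_cols <- c("wellbeing", "physical", "mood", "family", "peers", "school")

    # Remove participants with more than 50% missing data, then impute the rest.
    keep <- rowMeans(is.na(dat[, score_cols])) <= 0.5
    dat  <- dat[keep, ]
    dat[, score_cols] <- missForest(dat[, score_cols])$ximp

    # Partition the correlation network of the six scores by gender and age.
    nt <- networktree(nodevars  = dat[, score_cols],
                      splitvars = dat[, c("gender", "age")])
    plot(nt)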
Ethical aspects
Before conducting the study, ethical approval was obtained from the Slovenian National Medical Ethics Committee (no. 0120-313/2019/13). The research was performed in accordance with relevant guidelines and regulations, and in accordance with the Declaration of Helsinki. Before permission to participate in the study was requested, parents and adolescents read a research information form in which the study aims, participants' rights, and ethical aspects were explained. Adolescents and their parents gave their written permission to participate in this study.
Results
A total of 2972 students aged 10-19 years participated in the study: 768 (54.6%) females in primary school (PS) and 969 (66.6%) in secondary school (SS), and 638 (45.4%) males in primary and 487 (33.4%) in secondary school. The median age of students in primary school was 12 (IQR = 2) and 16 (IQR = 2) for students in secondary school. Other sample characteristics are shown in Table 1.
To conduct further analyses, the data distribution for the KIDSCREEN-27 (Fig. 1) and the WEMWBS (Fig. 2) scales by primary and secondary school was explored. We also conducted statistical tests of normality (Shapiro-Wilk), where deviation from the normal distribution was confirmed for the KIDSCREEN-27 (p PS < 0.001, p SS < 0.001) and WEMWBS (p PS < 0.001, p SS < 0.001) data. We also calculated the skewness of the distribution for all six variables, which ranged from −1.258 to −0.396. Additionally, we calculated kurtosis, which ranged from 2.627 to 4.415. Therefore, we used nonparametric tests in the statistical testing of the findings from the exploratory analysis.
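These distributional checks correspond to base-R diagnostics along the following lines, with x standing in for any one of the six score vectors (the moment formulas below use one common convention, and R's shapiro.test accepts samples of up to 5000 observations):

    shapiro.test(x)   # Shapiro-Wilk test of normality (n must be <= 5000)

    z <- (x - mean(x)) / sd(x)
    mean(z^3)         # skewness: negative values indicate a left skew
    mean(z^4)         # kurtosis: equals 3 under normality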
As presented in Fig. 2, the data are distributed approximately normally. There is also an evident difference in the mean values of mental well-being and HRQoL between primary and secondary school students: primary school students have higher HRQoL and better mental well-being than secondary school students.
The median value of mental well-being among primary school students was 55 (IQR = 12) and the median value among secondary school students was 51 (IQR = 13), showing that younger students have better mental well-being than older students, which was also confirmed by a Mann-Whitney test (p < 0.001).
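In R, this comparison amounts to a Wilcoxon rank-sum (Mann-Whitney) test on the WEMWBS totals, for example (column names hypothetical):

    tapply(dat$wemwbs, dat$school, median)   # 55 vs 51 in this sample
    tapply(dat$wemwbs, dat$school, IQR)      # 12 vs 13
    wilcox.test(wemwbs ~ school, data = dat) # two-sided rank-sum test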
Moreover, using network model trees we demonstrate the differences in psychometric network structure, measuring correlations between different concepts in adolescent HRQoL. The first significant split in the network tree (p < 0.001) indicated differences in the relations between HRQoL subscale scores and the mental well-being score among students younger than 12 years old (Fig. 3). In comparison to all other groups, the correlation between mental well-being and mood scores was significantly weaker in this group of the youngest participants. This may suggest that the WEMWBS is not the most appropriate tool to measure mental well-being in the young adolescent population. A network model tree analysis also uncovered an interesting pattern based on gender and age (p < 0.013), where the correlation between mood and family support became weaker for females at the age of 12 and for males at the age of 16. This might correspond to puberty-related changes in adolescence starting earlier in girls than in boys, which is supported by the literature.
The correlations between HRQoL concepts differ among younger and older adolescents. Among younger adolescents, correlations are strong between mood and general health (r = 0.27, p < 0.001), mood and support from school (r = 0.24, p < 0.001), and mood and family (r = 0.23, p < 0.001). On the other hand, among older adolescents, correlations are strong between mood and mental well-being (r = 0.30, p < 0.001), mood and general health (r = 0.28, p < 0.001), and mood and family support (r = 0.21, p < 0.001). Older adolescents' mental well-being is strongly correlated with their general health (r = 0.20, p < 0.001), mood (r = 0.30, p < 0.001), and school support (r = 0.24, p < 0.001) (Fig. 4). Due to the relatively low correlation coefficients, we also performed a calculation of some centrality measures in the network.
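One simple centrality summary for a correlation network is node strength, the sum of the absolute edge weights attached to each node; a minimal base-R sketch, with scores assumed to hold the six subscale columns, is:

    R <- cor(scores, use = "pairwise.complete.obs")  # correlation (edge) matrix
    diag(R) <- 0                                     # ignore self-correlations
    strength <- rowSums(abs(R))                      # strength centrality per node
    sort(strength, decreasing = TRUE)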
Discussion
The link between adolescents' mental well-being and quality of life is now well established in the literature, and research into these dimensions is essential for the development of preventive and promotional measures aimed at creating a foundation for good mental health in adulthood [32]. Poor mental well-being in adolescence can decrease satisfaction with life and can lead to various mental health problems in adulthood [33]. Although the correlation between adolescents' mental well-being and quality of life is known, there is a gap in explanations of those correlations. It is not known which factors influence the mental well-being of adolescents in relation to their HRQoL.
The main findings of this study show differences in the relations between KIDSCREEN-27 subscale scores and WEMWBS scores among adolescents younger than 12 years old. This correlation has not yet been explored; thus, the results cannot be directly compared to other studies. The correlation between mental well-being and psychological well-being was significantly weaker in the group of the youngest participants. It is also evident that the correlation between psychological well-being and family support became weaker for females at age 12 and for males at age 16. It is known that family support plays the biggest role in maintaining mental well-being in younger adolescents, while friends and peers play a bigger role for older adolescents. Giannakopoulos et al. [34] found that the mental health of parents is correlated with adolescents' better mental well-being, moods, and emotions; in addition, male gender, younger age, absence of chronic health care needs, high social support, and higher family income were positively associated with better HRQoL. Similar to our results, Meade and Dowswell [1] reported lower HRQoL scores among female students, with scores also declining over time across two of the five HRQoL dimensions (social support and peers, and school environment). Age differences were found across all but one dimension (autonomy and parent relations). HRQoL, the teacher's opinion of performance, and the perception of health status are better among adolescents with better family support [35]. When caring for adolescents' mental well-being, both social support and quality of life should be considered. Future research should focus on exploring other factors that may influence adolescents' mental well-being.
Limitations
Although this study revealed interesting and new findings, there are a few limitations that should be considered when interpreting the results. First, a convenience sample was used, so the generalization of the findings to the entire population of adolescents might be limited. Also, the WEMWBS and the KIDSCREEN-27 are self-report scales, which may introduce various biases and limitations: participants may give socially acceptable answers, they may not be able to assess themselves accurately, and for younger adolescents the wording of the questions may be confusing or carry a different meaning. Younger adolescents may also not understand the concept of mental well-being. From the technical perspective, one should be aware that decision tree splits always introduce a certain degree of instability; in practice, a small change in the data could produce a different tree and consequently different networks [25]. However, this problem can in most cases be mitigated by a large sample size. It should also be noted that, although statistically significant, the correlation coefficients are relatively small. In such cases additional network analysis metrics might be used; however, because our approach uses fully connected unweighted networks, this would not provide any additional information on the strength of connections. On the other hand, the large sample should alleviate some of the concerns in this regard. Additionally, for an exploratory study it would be very useful to obtain information about potential confounders such as socio-economic status, health and disease conditions, or early life stress.
Conclusions
The correlation between adolescents' mental well-being and their HRQoL is weaker among the youngest participants. This might suggest that the WEMWBS is not the most appropriate tool to measure mental well-being in the young adolescent population; the concept of mental well-being is probably also not well understood among young adolescents. This should be further explored and researched. The correlation between mood and family support is weaker for females at age 12 and for males at age 16. This might be explained by puberty-related changes in adolescence starting earlier in girls than in boys: the average age for girls to begin puberty is 11, while for boys it is 12 or later. These gender differences in pubertal changes in relation to mental well-being are known but poorly understood and researched. All important persons should be involved in decisions about adolescents' care to provide high-quality, person-directed care. It is also important to involve nurses in schools to ensure holistic and continuous care for adolescents who need help and may develop mental health problems. More practical options must be provided for adolescents and parents to get the help they need when faced with mental health issues. Healthcare and educational institutions could provide professionals to help adolescents who struggle with financial problems, poverty, bad interpersonal relationships, social exclusion, trouble with learning, or other factors that may contribute to worsening mental well-being and mental health.
Figure 1. The KIDSCREEN-27 score distribution. PS = primary school students, SS = secondary school students.
Figure 2. The WEMWBS score distribution. PS = primary school students, SS = secondary school students.
Figure 3. Network model tree comparing network structure based on gender and age. WEM = Warwick-Edinburgh Mental Well-being Scale, KFr = social support and peers subscale, KFm = parent relations & autonomy subscale, KSc = school environment subscale, KMd = psychological well-being subscale, KHI = physical well-being subscale.
Table 1. Sample characteristics. PS = primary school students, SS = secondary school students, M = mean, SD = standard deviation, n = number of participants. | 2023-10-19T06:18:16.087Z | 2023-10-17T00:00:00.000 | {
"year": 2023,
"sha1": "d3d65f54e65e2bc0ff9af96dd0317e714579a378",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-023-44493-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d9f3c4b8349162d62d4c4a3c6470e4b233a63c4",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234392848 | pes2o/s2orc | v3-fos-license | SCIENCE EDUCATION IN THE FAMILY ENVIRONMENT WITH THE EXPERIMENTAL METHOD OF FACIAL COSMETICS PLANT FERTILIZATION IN THE COVID-19 PANDEMIC ERA
The physical and non-physical family environment is one of the most critical factors in student learning outcomes. This research used a mixed-methods design, combining descriptive qualitative and quantitative approaches, to see how the family environment, both physical and non-physical, supports experimental science learning; it was conducted with 60 Junior High School students, comprising 30 open school students and 30 non-open school students. The results showed that: (a) family involvement motivates students when conducting experiments; (b) the feasibility of the family's physical environment allows students to complete the experiment, from making simple liquid fertilizers to applying these fertilizers to cosmetic plants; and (c) students can complete the experiment when directions are given online, because of the high involvement of the family in the learning process. Other findings indicate that the family environment, in the form of parents' expectations of students' future careers in science, is the main reason for high family participation in the experimental process, which maximizes students' experimental results. In conclusion, the family's physical and non-physical environment strongly determines and encourages students to make the most of experimental science learning methods, so that the science education applied during the Covid-19 pandemic through studying at home can be continued and carried out experimentally. The New Normal education model, combining online and offline methods for science learning, can still be implemented at home with the involvement of physical and non-physical family environments that support students in completing experiment-based science learning projects. Independent learning for junior high school students can also be supported by direct communication between educators and the students' non-physical environment, namely parents, so that students remain motivated. © 2020 Science Education Study Program FMIPA UNNES Semarang
INTRODUCTION
The world faces a Coronavirus (Severe Acute Respiratory Syndrome Coronavirus 2/SARS-CoV-2/Covid-19) outbreak. Many routines have changed in the current state of emergency, and one area that needs to adapt is education. Formal schools are required to carry out learning at home to break the chain of disease transmission. To make it easier for students to receive educational materials, learning is designed with an online system and various techniques so that the teaching and learning process continues without obstacles.
Students have considerable difficulty absorbing the material, especially when the material requires practical work. In science subjects in particular, students must understand the theory and also need practical learning methods to absorb the material.
Student achievement in Indonesia, especially in science learning according to PISA (OECD, 2019), is at level 2, which means that students are better able to absorb science lessons through the scientific exploration of surrounding phenomena, so that they can use their knowledge to draw valid conclusions from certain facts.
Science is a field that students find difficult, so misconceptions often occur (Soeharto et al., 2019). Science education is generally concerned with scientific competence and knowledge consciously acquired about the environment, used to solve natural problems scientifically, satisfy students' curiosity, and build student awareness (Jack et al., 2017).
Science learning today is no longer only possible at school, but students can also study science outside of school, such as at home, in museums, and at other science learning centers (Karim & Roslan, 2020). The trend of science education is beginning to shift. The environment, including the family environment, can be used as a science learning media.
Previous researchers have stated that the environment contributes substantially to students' academic abilities in science and is highly relevant to students' science learning outcomes (Haworth et al., 2009). Naturally, a pleasant environment arouses students' desire to learn science and provides them with deeper understanding.
Students' perceptions of their learning environment will show how the quality of education students receive. The educational climate positively affects student motivation, happiness, achievement, success, and satisfaction (Wach et al., 2016).
One of the obstacles in science education is the lack of student involvement in scientific activities. Previous research has shown that science education delivered through a home-assignment strategy can significantly improve junior high school students' achievement (Iksan et al., 2018).
Social factors have a strong influence on students' academic outcomes. These factors relate to the student's social network and the social capital the student has. The support students get from their families, the surrounding culture, their religious teachings, their peers, and the academic climate all play a role in students' academic success (Mishra, 2020).
The consistently high scientific achievement of students in East Asia is related to family environment factors such as the ease with which students can access literacy through the availability of books at home, parental education, gender, and students' attitude towards school. On the other hand, students' science learning achievement appears to be negatively correlated with the school and teacher environment, meaning that the school and teacher environment is not what drives students' science learning achievement in East Asia (Hu et al., 2018).
The learning environment at home, including within the family, also supports students in better understanding literacy, numeracy, and social development (Niklas et al., 2016). The family is the primary educational institution for individuals. The class environment, family, and peers are the most critical factors affecting students' science learning, and families motivate students to learn science better. Study results have indicated a significant positive relationship between science learning and the family, such as learning science with parents and siblings, and science learning that adopts a family environment-based approach can improve science learning outcomes (Soltani, 2018; Sari & Islami, 2020).
Apart from supporting the research results, other studies have also found that the family environment can increase or even reduce school involvement with various lessons. Family support in student involvement in the learning process includes expectations, attribution, disciplinary orientation, family environment, parental participation, and family support systems to support student involvement, to family partnerships with schools (Reschly & Christenson, 2019).
One of the science learning techniques that can be implemented in a family environment is experimental learning. Experiments can be applied to various types of education, in science as well as other subjects. The experimental approach is based on John Dewey's educational psychology, which prioritizes students learning and acting independently.
Experimental learning is the most effective approach to achieving high academic achievement and improving scientific process skills. This learning model provides scientific experiences to students so that in the science learning process, students can more easily absorb learning (Alkan, 2016).
Methods that teach through experimentation can prompt students to be more active than traditional lecture and discussion methods. Experiments provide the space and time for students to make decisions about the right way to gain knowledge for themselves and to engage in social interactions (Egbert & Mertins, 2010).
Experiments can satisfy and provide answers to students' curiosity by feeling, seeing, and touching the object of knowledge being studied. Students are more enthusiastic about carrying out learning activities. In general, experiments are independent methods that students can choose according to their convenience. Science by experiment offers closer and more tangible learning. Students no longer only imagine abstract concepts.
Experiments, through direct experience, are also deemed suitable for adults to make certain subjects easier to understand. The function of science-based experimental learning is to hone metacognition and understanding of science; when experimental learning is applied to two groups with different scientific study backgrounds, it gives different results (Aini et al., 2019).
In this study, experimental learning involved students in making simple liquid organic fertilizers, starting with experiments at school and then continuing with experiments at home involving the family environment; the fertilizer was used to fertilize and grow seedlings of cosmetic plants such as ginger, lemongrass, brotowali, and turmeric.
Experimental learning is more widely used in science classes, and with this breadth of application, experimental learning is now also beginning to develop in social studies classes. However, PISA 2019 data (OECD, 2019) indicate that only 7% of students in Indonesia are proficient achievers in science who are creative and independent in applying science-based knowledge in various areas of life. This figure is still relatively low compared to other countries. Therefore, other methods, techniques, and designs for science education are needed to foster students' motivation to learn science and, in turn, raise their level of achievement in science.
This study discusses the family environment as the basis for experimental science education. The density of learning at home also requires strong support from the physical and non-physical environment of the family. This research is considered essential because, as described above, the physical and non-physical environments play a role in the academic climate and in student learning outcomes. The Ministry of Education and Culture (Kemdikbud) plans to expand and change the online-based school system, so a reference is also needed for the extent to which the family environment, both physical and non-physical, is ready to support learning design in the new era.
Referring to the explanation above, this study aims to examine the effectiveness and success rate of learning science using experiments applied in a family environment. The novelty of this research, compared with previous research, lies in the family environment being used as a learning support factor, especially in science learning, and in the experimental technique of fertilizing and planting a more specific category of plants, namely cosmetic plants.
This study aims to describe the physical environment of the family and the non-physical environment of the family, in the form of family social and emotional support, in experimental science-based learning through fertilizing cosmetic plants in the Covid-19 era, which requires students to study entirely in their respective homes.
METHODS
Research Design
This research uses a mixed-methods design combining descriptive qualitative and quantitative analysis.
Research Respondents
The population of this research is the students of Junior High School 138 East Jakarta who have an intact family and live in one house. The research sample was students of Junior High School 138 drawn from open schools and non-open schools: 30 open school students and 30 non-open school students.
Instrument
The questionnaire was administered and filled out by family members online using Google Forms, with items adapted from the 2015 PISA Parents Questionnaire and several items modified to suit this study. The questionnaire was piloted first and yielded a reliability coefficient of 0.89; it can therefore be considered reliable, since it meets the study's criterion of α greater than 0.5, so the questionnaire results have an adequate level of reliability and can be trusted.
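A reliability coefficient of this kind (Cronbach's α) can be computed directly from the item responses; in the sketch below, items is a hypothetical (respondents × items) array of numeric answers:

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = questionnaire items
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# e.g. cronbach_alpha(responses) should be about 0.89 for this questionnaire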
In this questionnaire, respondents' answers are divided into several types of choices: (a) Yes and No; (b) N = Never (the behavior in question has never occurred), OC = Once (it has been done once), ST = Sometimes (it is done, but not intensively), Often = it is done every day and repeatedly; (c) N = Never done at all, TWY = done only once/twice a year, TWM = done with a frequency of once/twice a month, ED = almost every day.
Data Analysis Technique
Processing in this study used mixed methods: descriptive statistics and difference tests, with values reported as percentages.
The two groups received different treatments. Open school students were given direct experimental instruction through to the fertilization process, whereas non-open school students were only given the experimental material via a YouTube link and tried it themselves on plants they already had at home, on the condition that they involve their parents in the experimental process. After the experiment, they were required to fill out a questionnaire through Google Forms, the same one completed by the open school respondents, who live in the village environment.
Data analysis was conducted to see how the family environment plays a role in students' science education through student experiments ranging from fertilizers to planting cosmetic plants.
The analysis results are presented in figures, tables, and graphs produced with SPSS version 24 and Microsoft Excel. To make interpretation easier, the results were then categorized descriptively as very high (80-100%), moderate (50-79%), and low (0-49%).
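The descriptive banding can be expressed as a small helper function; the cut-offs follow the bands stated above, and the function name is ours:

def category(pct):
    # map a percentage to the study's descriptive bands
    if pct >= 80.0:
        return "very high"
    if pct >= 50.0:
        return "moderate"
    return "low"

print(category(72.3))  # -> "moderate"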
Operational Definition of Variables
To limit the research scope, this study defines each variable operationally. Family environment-based science education is science education carried out by students involving their families in their respective environments, after first receiving experimental-method science education at school before the Covid-19 pandemic, in December 2019 - February 2020.
The family environment is the physical and non-physical environment of students involved as respondents in this study.
This research experiment uses bioremediation techniques to make a simple liquid fertilizer, then apply it to cosmetic type plants and see whether the plants given liquid bioremediation can grow well or not while the students are studying at home. Experiments were carried out with family in the home environment.
Student Characteristics
The results showed that nearly three-quarters of respondents (72.2%) were female and more than a quarter (27.8%) were male (Figure 1). This study did not examine the influence or relationship of gender with the learning outcomes of experimental science methods based on the family's physical and non-physical environments.
Figure 1. Respondent Gender
Although this study does not look at the influence or relationship of gender in the learning process, several relevant studies state that gender affects the teaching and learning process using multimedia methods. Men are shown to have advantages in various types of multimedia learning. Still, men are lower in retention ability (Heo & Toomey, 2020). Thus, at least, gender will affect the online learning process that uses various multimedia learning types.
Other research also states that in STEM (Science, Technology, Engineering, and Mathematics) learning, gender has an intrinsic relationship with this learning. Girls tend to be moderately motivated in learning mathematics, but in the learning process and learning aspirations, girls have a higher motivation than boys (Oppermann et al., 2020).
Figure 2. Order of Birth of Respondents
Based on the study results, more than a quarter of the respondents (27.3%) were firstborn children, two-fifths (40.0%) were middle children, and nearly a third (32.7%) were the youngest (see Figure 2).
The results showed that parents' involvement, through the family's physical and non-physical environment, in the experimental science learning in this study was high. Previous researchers have suggested that the effect of birth order on educational outcomes varies; birth order does not entirely determine students' academic performance (Esposito et al., 2020).
Another finding is that birth order and the non-physical environment, in the form of family and parental education during middle school, affect parents' expectations, children's attitudes, academic achievement, and children's IQ; notably, the first child gains a greater advantage in educational support (Kim, 2020). However, these data do not demonstrate any influence or relationship here; they only provide a descriptive account of the respondents' birth order.
Family Characteristics
Family is an essential factor in determining students' academic achievement, including achievement and cognitive level in science learning. In this study, family characteristics include the socio-economic conditions of the family. Based on Table 1, nearly three-quarters of the respondents' fathers (74.6%) have a high school education or equivalent, less than one-fifth (15.4%) have a diploma degree, and one-tenth (10.0%) have a bachelor's degree. Meanwhile, more than four-fifths of mothers (80.4%) are high school graduates, less than one-fifth (17.3%) have a diploma degree, and less than one-tenth (2.3%) graduated with a bachelor's degree.
Regarding the respondents' parents' occupations, Table 1 shows that the largest proportion of fathers work as employees in the private sector or in State-Owned Enterprises, while the respondents' mothers more often work as entrepreneurs with more flexible, non-binding working hours.
Based on the research results, the most common family income bracket is between Rp 3,000,000 and Rp 4,000,000. In this study, family income and the time parents can spend with their children are inferred from the type of work, which determines the father's and mother's working hours. Previous research has found that parents' moral support for adolescent education and family-provided career guidance have a significant positive effect on academic achievement (Ion et al., 2020).
Physical and Non-Physical Environment Characteristics
The physical environment is assessed through land ownership and the suitability of the family's physical environment for planting cosmetic plant seeds. The feasibility of the family's physical environment is considered in this study in order to assess the experimental results: if the family's physical environment is adequate, there are no obstacles to planting cosmetic plants.
Based on the research results (see Table 2), more than two-thirds (68.0%) of family physical environments have an acceptable occupancy density, more than four-fifths (81.0%) have sufficient housing space, nearly four-fifths (78.0%) have land available for planting, almost three-quarters of the houses (74.2%) have proper lighting, and more than seven-tenths (72.3%) have adequate environmental humidity.
Viewed as a whole, the family's physical condition is in the very high category, which can be interpreted as meaning that the students' family physical environment supports them in conducting the experiment of making simple organic liquid fertilizer and planting cosmetic plants in the family's physical environment.
Cosmetic Plant Fertilization Experiments
The first experiments were carried out at school: students learned experimentally and received material about making simple liquid bioremediation fertilizer from kitchen waste. After that, students practiced at home by themselves. The visible result is the application of the bioremediation fertilizer in planting cosmetic plants.
In group 2, respondents were given the treatment that teachers currently often carry out in online learning, namely providing a YouTube link and short material through a WhatsApp group. Students learn to understand independently through an audio-visual process and practice at home with their families. The results are reported in the same Google Form as for the first group. The cosmetic plants referred to in this study are plants that can be grown at home but have a cosmetological function, can be applied to the face, and can be used for basic facial treatments, such as kencur, ginger, and turmeric. These plants are readily available and easily planted.
Cosmetological properties in this study mean that, when harvested or picked, these plants can be used for daily facial care by applying them topically, mixed with other simple ingredients, or simply consumed as a drink that gives a beauty effect.
Turmeric, for example, can be used cosmetically to make face masks. Making a turmeric mask is relatively easy: the turmeric only needs to be cleaned, pounded or finely blended, and then mixed with rice and/or milk. Such a face mask can provide a relaxing effect, especially during the pandemic, when many people cannot get professional facial treatments because they are difficult to access. Besides, cosmetic plants cultivated at home are safer for adolescents' skin, especially in junior high school: at this age, adolescents undergo hormonal changes resulting in acne, dullness, and oiliness, and rhizome-type plants are generally used in cosmetics to address these problems.
Simple organic liquid fertilizers that students learn through the experimental learning method are made using kitchen waste materials such as kitchen spices, unused vegetables, and fruit peels.
With the experimental method in this research, based on the interview results, science education is more encouraging for students and makes them enthusiastic about learning science. Experiments involving the physical and non-physical environments of the family, as in this study, can be carried out and practiced by teachers, especially in the field of science; the scientific material included in this experiment is not limited to one subject. The process of making the fertilizer is as follows: (a) separate organic and non-organic waste, set aside the non-organic waste, and use only the organic waste; (b) reduce the organic waste by cutting it into pieces; (c) put the organic waste into a can/bucket that has a lid; (d) add approximately 1.5 liters of water and a bio-activator; (e) cover the barrel/bucket; (f) do not open it during processing for up to 2 weeks; (g) the liquid fertilizer can then be used (see Figure 4).
Students' understanding of making liquid fertilizer is relatively high, because the activity attracts students' attention and fosters curiosity. However, based on the interview results, both students and parents in group 1 experienced problems with the long settling process of the liquid fertilizer.
In group 2, by contrast, students did not receive the material directly first, which resulted in lower motivation, because YouTube content is considered difficult to understand if it is not paired with clear instructions. The limited time available to group 2 for the fertilization process was also an obstacle: in group 2, the bioremediation fertilizer was fermented for only 1 x 24 hours before being applied to the cosmetic plants for facial care, so the fertilization results were not visible. These data indicate that giving students material only as a YouTube link is not effective; teachers still have to interact with students, even online, for example through WhatsApp group video calls, Zoom, Google Meet, and other online teleconferencing media.
In science learning itself, fully online learning methods during Covid-19 without reciprocity and interaction between teachers and the family's non-physical environment can reduce students' interest in science learning, especially if the science material presented is one students consider difficult to solve.
Non-Physical Family Environment in Student Experimental Learning Process
During the Covid-19 pandemic, students repeated and continued the learning process of the experiment of making simple organic liquid fertilizer until they applied the fertilizer to cosmetic plants in their respective homes. In experimenting, students must involve family members in making fertilizer and planting cosmetic plant seeds.
After the cosmetic plants started to shoot or grow, group 1 students and parents were given a questionnaire through Google Forms to determine the non-physical family environment's role and to gauge family involvement in the results of this experimental science learning.
Group 2 students were only given short learning materials on making organic fertilizer via YouTube content, online material through Zoom, and a discussion space through a WhatsApp group. For the most part, this has been the most common method used in the education process since the Covid-19 pandemic began.
The results of the study (Table 3) indicate that students received scientific stimulation from their non-physical environment before the age of ten. Nine out of twenty respondent families (45.0%) often stimulated their children when young to foster motivation to learn science by frequently watching science-themed TV; a quarter of the families (25.0%) often read science-themed books; more than one-eighth (13.3%) often traveled to museums or vacation spots related to science; nearly three-tenths (28.3%) often visited science-themed sites; less than one-tenth (3.3%) often introduced science to children through a science-related community; less than one-tenth (8.3%) often played Lego with their children when young to introduce science; three-fifths (60.0%) often played with simple household appliances to introduce children to science; more than three-tenths (31.7%) often invited children to help repair damaged electronics; less than one-tenth (8.3%) often conducted science experiments using readily available tools such as magnifying glasses and matches; and nearly one-eighth (11.7%) often played science games on gadgets when the respondents were still children.
Parents' parenting styles, including how parents educate their children when they are young, also affect parents' stress in this pandemic era, because parenting is related to life satisfaction. During the Covid-19 pandemic, parents of children attending junior and senior high school admitted to being more depressed than parents whose children were already at the college level. This stress is triggered by anxiety, which in turn affects family support for children at home (Wu et al., 2020).
Parenting styles and the scientific stimulation provided to children have a significant impact. Families who support children's science education encourage students to become accustomed to science and to enjoy it.
Information: N = Never, OC = Once, ST = Sometimes, Often = Often.
Table 4 shows that more than three-tenths of the respondents' families (31.7%) communicated with their children almost every day about their achievement at school; three-fifths (60.0%) ate with their children almost every day; seven-tenths (70.0%) told stories and had positive interactions with their children almost every day; nearly three-fifths (58.3%) helped their children with science assignments almost every day; seven-tenths (70.0%) asked their children about grades and achievements in science; seven-tenths (70.0%) readily invited their children to learn science; and more than five-eighths (63.3%) discussed science as used in everyday life.
Information: N = Never, TWY = Once/Twice a year, TWM = Once/Twice a month, ED = Almost every day.
Table 5 states that more than two-fifths of the respondents' families (41.7%) work in science; more than half (51.7%) think that their children are interested in working in science; nearly seven-eighths (85.0%) expect their children to work in science, for example as researchers, science teachers, or doctors; more than seven-tenths (71.7%) see that their children are interested in continuing their education in science at Senior High School; and four-fifths (80.0%) hope for, or take, the best steps so that their children can continue their education in the science field.
Other results show that more than seven-eighths of the respondents' families (88.3%) consider it essential that their children understand the world and the universe through the context of science; more than four-fifths (81.7%) state that science is valuable and useful in social life; nearly seven-eighths (85.0%) state that science has a strong correlation with human life; nine-tenths (90.0%) agree that science can encourage people to know and understand many things in life and the environment; and nearly four-fifths (78.3%) think that human social life will advance when science develops more rapidly (see Table 6).
Table 7 shows that almost all of the respondents' families (96.7%) did not feel bothered when participating in the child's experiment; nearly seven-eighths (86.7%) encouraged their children to complete this experimental project promptly; nine-tenths (90.0%) provided material support while the children worked on the project; nearly seven-eighths (86.7%) were happy to be involved in this experimental project; and almost all (93.3%) tried to accompany their children, cooperate, and provide their best so that the children could get the most out of this science experiment.
Learning online during the Covid-19 pandemic poses many challenges; previous studies note that some of these problems can be overcome by intense communication, which is quite time-consuming for parents, who ultimately need to help their children in the learning process (Putri et al., 2020). The Covid-19 pandemic has placed a greater share of the responsibility for education on students' parents: besides coordinating with the teacher, parents need to supervise their children's learning discipline (Suryaman et al., 2020).
Teachers and schools are ultimately encouraged to continue learning online, even though the risk is relatively high (Lederman, 2020). Nevertheless, online education at the primary and secondary levels can be effective: this research has shown that when parents are involved in children's science education, the child carries out the given projects and tasks well.
Difference tests on open school and non-open school students showed no significant difference in the family's understanding of science. The non-physical environment of the family gave encouragement and motivation to children to excel in science. Overall, based on the students' final experimental results, students were able to make simple liquid fertilizer and had no difficulty involving their families. The liquid fertilizers the students made could also be applied to cosmetic plants. The role of the family's physical and non-physical environment also increased students' motivation to complete the experiments as well as possible.
The results showed that family stimulation of students' science education from childhood was moderate, which also contributed to the outcome of the student experiments. According to previous researchers, from childhood humans understand the physical world using intuitions from everyday life, from which they construct a scientific framework for understanding science (Vosniadou, 2019).
Family support in stimulating science from childhood impacts student success, especially when given their families' experimental assignments. If the family environment, both physical and non-physical, supports students in the science learning process, it will produce exemplary achievements. This complements previous research, which states that when students in primary schools do not get the right learning methods, lack parental attention, and are negatively influenced by mass media, children lack concentration and motivation to learn (Maryani et al., 2018).
The non-physical family environment's habits regarding science learning in this study are classified as high. The experiences students gain with their families regarding science can encourage students to understand science better; according to previous research, knowledge that comes from family experiences can motivate students, build student self-efficacy, make students more active, and facilitate science learning goals (Schulze & Lemmer, 2017).
According to the students, the family's non-physical environment supported them in making liquid organic fertilizer and reminded them to watch the development of the planted cosmetic plants every day. Parents' high hopes that students would complete their experiments also encouraged students to be more thorough in their science learning, because they did not want to disappoint their families.
In addition to individual dependence on the environment, previous researchers conducted an interview study which revealed that parents are a positive factor in students' science learning. Parents provide support, academic hope, and various kinds of assistance, and are involved in students' science education. When parental involvement is high and sustained, and parents take an interest in and hold hopes for their children's scientific achievement, it fosters students' interest in learning science and improves their science learning achievement (Halim et al., 2018).
High expectations from the non-physical family environment, indicated by the finding that family expectations for students' future careers in science are high, spur students to do science assignments consistently and produce quite good experiments. The values and expectations of students' social environment also matter: students will choose a career in science in the future in line with socio-cognitive theory, which states that social factors influence a person's cognition when making life choices (Lent et al., 2008).
Students' success in conducting experiments in this study is also determined by personal encouragement from the family environment, both physical and non-physical, in the form of social motivation. Students are drawn to science learning by intrinsic factors, namely pleasure, satisfaction, and the desire to try science; extrinsic factors, in the form of the assumption that science can provide benefits, help realize this interest.
This fact corroborates previous research suggesting that students' future engagement in science is driven by the scientific thought and activity being studied. Family factors that are vital in encouraging students toward future careers in science can make students more diligent in completing experiments. In self-determination theory, external motivation can create internal reasons, so that when this motivation becomes intrinsic it gives students more positive encouragement in learning science (Ryan & Deci, 2009).
High family perceptions of science enable students to complete their experiments through collaboration with their families. Their family preferences and involvement also mediate students' perceptions of science learning. Family perceptions of science learning can encourage student interest and independence better to understand science (Sha et al., 2016).
Students' success in the science experiment of making simple organic liquid fertilizer and planting cosmetic plants can be concluded because of the family environment's high involvement in the feasibility of the family's physical environment and the support provided by the non-physical environmental factors of the family.
The family environment's involvement makes students see themselves as worthy of love, helps them develop their competence, and fosters self-efficacy in science learning. Family involvement supports students' psychological development, motivates learning, and increases academic grades and student achievement (Bowlby, 1969; Pomerantz et al., 2012).
This study proves that the family's physical environment and non-physical factors motivate students to conduct experiments. The family environment is also responsible for student experimentation when family involvement is high. Increasing family participation in student science education will positively impact student learning outcomes, carried out experimentally.
CONCLUSION
The spread of the Covid-19 virus has changed Indonesia's educational system, as can be seen from the results of this study of an experimental learning process. This study concludes that students, especially those in junior high school, cannot learn science only online using Zoom, Google Meet, and other virtual media: students become bored and unmotivated if the teacher's method is solely online face-to-face instruction. Moreover, junior high school students are not yet fully able to learn independently.
Science experiment activities are one of the methods that can be conducted remotely, namely by (1) the teacher giving instructions beforehand through online meetings, or providing educational video links taken from YouTube or made by the teacher; (2) giving students the opportunity to practice on their own, assisted by their families, in carrying out the experiment; (3) having students report what they find by filling in a Google Form; (4) as an additional way to interact and communicate with students during the experimental process, having the teacher create a study group via WhatsApp to monitor students and provide a channel for discussing the experiments; (5) to be effective, also giving questionnaires or questions to the students' parents, because parental involvement gives students higher motivation and responsibility to complete their schoolwork; and (6) checking the questionnaire results again; if the students' work is not optimal, allowing another attempt and communicating with the family to participate as school partners during distance learning in the era of the Covid-19 pandemic.
There was no significant difference in non-physical family support for student learning between open and non-open school students. Families did not mind participating in students' school activities (96.7%), and families encouraged students to complete the project (100.0%). The extent of non-physical family support is also explained by parents feeling happy to participate in their children's experiments (86.7%) and trying their best to accompany them (93.3%).
The strong physical and non-physical support of the family is one of the students' success factors, from making simple liquid fertilizers to applying these fertilizers to cosmetic plants. This research shows that during Covid-19, when most learning is done at home, education units should design for family participation, continue and normalize education that involves families, and help families create physical and non-physical environments that are conducive to student learning, so that learning can be maximized.
This research also needs to be continued by looking at the physical and non-physical family environments in student respondents' learning outcomes in academic units and other subjects, with learning methods besides experiments. | 2021-01-19T14:02:03.325Z | 2020-12-31T00:00:00.000 | {
"year": 2020,
"sha1": "5d1297af4b70d4cb4d5b39138432bd2ea65e7a8d",
"oa_license": "CCBY",
"oa_url": "https://journal.unnes.ac.id/nju/index.php/jpii/article/download/26563/11144",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8df16d2cf3ffbbc092ad6b4a2d5b80a63dce2af3",
"s2fieldsofstudy": [
"Education",
"Environmental Science"
],
"extfieldsofstudy": [
"Psychology"
]
} |
115532667 | pes2o/s2orc | v3-fos-license | Robust and Reliable Transfer of a Quantum State Through a Spin Chain
We present several protocols for reliable quantum state transfer through a spin chain. We use a simple two-spin encoding to achieve a remarkably high-fidelity transfer of an arbitrary quantum state. The fidelity of the transfer also decreases very slowly with increasing chain length. We find that we can further increase the reliability by taking advantage of a local memory and/or by confirming the transfer using a second spin chain.
I. INTRODUCTION
One of DiVincenzo's seven requirements for a quantum computer is the transmission of flying qubits, meaning we must be able to transmit quantum information between components of a quantum computing device. In the interior of such a device, where short-distance communication is required, a spin chain is a promising candidate for the transfer of information.
Reliable transfer through a spin chain has been studied extensively since the original proposal by Bose [1]. In that proposal, a spin state at one end of the chain is allowed to evolve freely under constant couplings until it arrives after some time at the other end of the chain. Such a system is simple and does not require couplings to be precisely tuned or to be switched on and off. This is desirable for experimental considerations where controllability can be a definite problem.
It is indeed somewhat surprising that a spin state can be reliably transferred through a chain. However, it certainly can be done when particular conditions are met. For example, with finely-tuned, yet fixed couplings, a variety of networks will allow for perfect transfer [2,3]. There are also methods using two chains which allow for perfect transfer [4,5]. In this case the perfect transfer is conditioned on the outcome of two measurements, one from each chain. Other methods require a wave packet to be constructed at the beginning of the chain so that the state can be transferred reliably [6,7].
In each of these scenarios, the state is transferred using an always-on Heisenberg exchange interaction with nearest-neighbor couplings between the spins. The Heisenberg exchange interaction is readily available in many different experimental systems, so its use is well motivated. However, its experimental viability is important for another reason: it enables universal quantum computation on decoherence-free subspaces and noiseless subsystems (DFS/NSs) without the need for individual control over physical qubits (see [8] and references therein). There are several documented instances of this. One promising proposal uses two spins, or qubits, to encode one logical qubit [9]; the interaction is also known to be universal for DFS/NSs using three or four physical qubits to encode one logical qubit. In our case, the universality condition prompted the following question: can a decoherence-free, or noiseless, encoding be used to enable the reliable propagation of a spin state through a spin chain? In this article, we answer this question and show that a particular state can be used to reliably transfer quantum information over long distances through a spin chain. Our proposal uses encoded states which provide reliable state transfer over relatively long distances through an unmodulated spin chain. This makes our proposal a prime candidate for use in experimental systems where this sort of state transfer is required, namely within a solid-state quantum computing device.
II. THE HAMILTONIAN AND THE CALCULATION OF THE FIDELITY
The Hamiltonian of a one-dimensional anisotropic Heisenberg XY model can be described by

H = -\frac{J}{2}\sum_{i=1}^{N-1}\left(\sigma^x_i\sigma^x_{i+1} + \sigma^y_i\sigma^y_{i+1}\right) - \frac{J_z}{2}\sum_{i=1}^{N-1}\sigma^z_i\sigma^z_{i+1} + h\sum_{i=1}^{N}\sigma^z_i,

where J is the exchange constant for the xy components, J_z is the exchange constant for the z component, and h is the external magnetic field along the z direction; this form reproduces the single-excitation dispersion E_m = 2h − 2J cos q_m quoted below. The σ^{x,y,z}_i are the Pauli operators acting on the i-th spin. We will use a ferromagnetic coupling and take J = 1.0 throughout this paper. Furthermore, we consider only nearest-neighbor interactions and an open-ended chain, which is the most natural and practical geometry for this system.
Note that for this Hamiltonian the z-component of the total spin, σ^z_tot = Σ_i σ^z_i, is a conserved quantity, which indicates that the system preserves a fixed number of magnon excitations. When there is only one magnon excitation, the time evolution of the initial state is not affected by the σ^z-σ^z interaction, whereas for two-magnon excitations it is; the latter case can be quite complicated [10] due to the magnon interactions arising from a nonzero J_z. For these reasons, we take J ≫ J_z.
The Hamiltonian can be diagonalized by means of the Jordan-Wigner transformation that maps spins to one-dimensional spinless fermions, with the creation operator defined by c^\dagger_l = \left(\prod_{s=1}^{l-1}(-\sigma^z_s)\right)\sigma^+_l, where \sigma^+_l = \frac{1}{2}(\sigma^x_l + i\sigma^y_l) denotes the spin raising operator at site l. The action of c^\dagger_l is to flip the spin at site l from down to up, and c_l, c^\dagger_m satisfy the anticommutation relations \{c_l, c^\dagger_m\} = \delta_{lm}. The creation operator evolves as [11]

c^\dagger_j(t) = \sum_{l=1}^{N} f_{j,l}(t)\, c^\dagger_l, \qquad (1)

f_{j,l}(t) = \frac{2}{N+1}\sum_{m=1}^{N}\sin(q_m j)\sin(q_m l)\, e^{-iE_m t}, \qquad (2)

where N is the number of spins, E_m = 2h − 2J\cos q_m, and q_m = \pi m/(N+1). Eq. (1) indicates that an excitation initially created at site j is generally distributed over all the sites. At time t, the probability of the excitation being at site l is |f_{j,l}(t)|^2, with the normalization condition \sum_{l=1}^{N}|f_{j,l}(t)|^2 = 1. When the number of magnon excitations is more than one, the time evolution of the creation operators is given (in the free-fermion limit J_z → 0) by

c^\dagger_j(t)\, c^\dagger_k(t) = \sum_{l,m} f_{j,l}(t)\, f_{k,m}(t)\, c^\dagger_l c^\dagger_m. \qquad (3)

Our procedure is as follows. First we cool the system to the ferromagnetic ground state |0⟩, where all spins are down. Then we encode the state |ϕ(0)⟩ = α|0_L⟩ + β|1_L⟩ at one end of the chain. The initial state of the whole system is then |Φ(0)⟩ = (α|0_L⟩ + β|1_L⟩) ⊗ |0⟩. Note that we use |i_L⟩ to denote a logical basis state, emphasizing that our physical spins are encoded into a logical qubit.
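As a numerical aid, the following minimal Python sketch evaluates the propagator f_{j,l}(t) of Eq. (2) and checks the normalization Σ_l |f_{j,l}|² = 1; it is our illustration of the formulas above, not code from the paper.

import numpy as np

def propagator(N, t, J=1.0, h=1.0):
    # f[j-1, l-1] = f_{j,l}(t): amplitude for an excitation created at
    # site j to be found at site l after time t (open XY chain, Eq. (2)).
    m = np.arange(1, N + 1)
    q = np.pi * m / (N + 1)
    E = 2 * h - 2 * J * np.cos(q)                     # single-magnon energies
    S = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(m, q))
    return (S * np.exp(-1j * E * t)) @ S.T

f = propagator(N=48, t=25.0)
print(np.allclose((np.abs(f) ** 2).sum(axis=1), 1.0))  # unitarity check: True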
III. DFS ENCODINGS
We will consider several different encodings with the potential for reliable information transfer and will provide the best overall solution at the end of our analysis. In each case we are motivated to consider a state encoded in a DFS, given the universality properties of such states with respect to the Heisenberg exchange interaction. We will consider two-, three-, and four-qubit DFSs, which encode one logical qubit into two physical qubits (a subspace) or into a subsystem of three or four physical qubits, respectively.
The fidelity between the received state and the ideal state is F(t) = ⟨ϕ(0)|ρ_out(t)|ϕ(0)⟩, where ρ_out(t) is the reduced density matrix at the receiving end. (We let our ideal final state be represented by the same vector as our initial state, although it is located at the end, rather than the beginning, of the chain.) For the two-qubit encoding, for example, the fidelity can be written in closed form in terms of the transition amplitudes f_{j,l}(t); similarly we can calculate the fidelity for the three- and four-qubit encodings, although the expressions are understandably much more complicated.
From the initial state |Φ(0)⟩, the system undergoes the time evolution given by Eqs. (1)-(3). The transmission amplitude f_{j,l}(t) describes the propagation, and its dispersion characterizes the state transfer. Note also that if the initial state involves a fixed excitation number, the magnetic field will only produce a global phase, which does not affect the fidelity (see, for example, the fidelity of the two-spin encoding above). So, for a fixed excitation number, we neglect the effect of the magnetic field and fix h = 1.0.
IV. DFS-ENCODING RESULTS
In Fig. 1 we compare the maximum fidelity for logical state transfer with |Φ(0)⟩ = (|0_L⟩ + |1_L⟩)/√2 for the different logical/encoded states. Throughout this article the maximum fidelity is found in the time interval [0, 100], since the first peak, which is the maximum, is obtained in this interval for N ≤ 80. For the 3-qubit encoding, we use 3-qubit(1) and 3-qubit(2) to signify one or two excitations in the chain; for example, a state in 3-qubit(1) has α_0 = α_1 = 1.
For the two-spin encoding with N = 4, 5, F_max ≈ 1; thus near-perfect state transfer can be obtained for these values of N. However, for these and almost all other logical states using two, three, or four spins, the maximum decreases quickly with increasing N. There is one exception in the space of the 3-qubit encoding: for a particular state, the fidelity decreases very slowly with increasing N. We will explore this particular case, which stands out strikingly in Fig. 1, and show how it can be used to reliably transfer an arbitrary qubit state through the chain.
V. THREE QUBITS
It turns out that two excitations in the chain do not allow for reliable transfer, so we will consider the 3-qubit(1) DFS. In Fig. 2 we plot the maximal fidelity, F_max, as a function of N and θ for φ = 0 and t ∈ [0, 100], for an arbitrary initial state cos(θ/2)|0_L⟩ + sin(θ/2)e^{iφ}|1_L⟩. For N from 6 to 50, a maximum is achieved at θ = 2π/3. Surprisingly, F_max decreases very slowly with increasing N for a wide range of θ. For example, in the range θ ∈ [0.5π, 0.8π], F_max > 0.8 for any N ≤ 50. So if the state is encoded in this range, the fidelity is exceptionally large for a quite long chain; in fact, we have found that F ≈ 0.7 after traversing a spin chain of two hundred spins. Therefore, we can achieve a very reliable state transfer, since the fidelity is large and provides a significant robustness to errors in the initial encoding, or during transport: a variation in the encoded state does not significantly affect the long-distance trend in the fidelity, which still decreases very slowly over long distances. We next show how to take advantage of this remarkable state. (In Fig. 3 below, the fidelity at site i denotes receiving the state at sites i−2, i−1, i.)
VI. EFFICIENT ENCODING FOR ENCODED QUBITS
When φ = 0, θ = 2π/3, the 3-qubit(1) encoding can be written as |Ψ⟩ = (1/√2)(|001⟩ − |100⟩) = (1/√2)(|01⟩₁₃ − |10⟩₁₃) ⊗ |0⟩₂, i.e. the first and third spins are in a singlet state. In order to show the high-fidelity transfer of this state, in Fig. 3 we plot the time evolution of the fidelity at every site of a chain of length N = 48. At t = 0 the initial state is encoded at the sites of the first and third spins; it then evolves freely under the Heisenberg Hamiltonian. At time t, the fidelity at sites i−2, i−1, i in the interior of the chain is F ≈ 0.5, implying that only partial information is located at these sites. However, at the end of the chain (i = 48), the fidelity shows a peak (F_1 ≈ 0.86, t ≈ 25). After this, the wave is reflected by the boundary and starts to propagate back. This behavior can be interpreted as a wave which broadens inside the chain but becomes narrower when arriving at the boundary, which enhances the fidelity; thus we have an end-effect of the chain. From Fig. 3, the oscillation period of the state between boundaries is T ≈ 50, and at times t_k ≈ 25 + (k − 1)T a maximum is achieved at the end of the chain N = 48, where k denotes the kth peak. For example, at time t = 75 the state has traveled to the other end once more (the second peak, F_2 ≈ 0.76); F_2 < F_1 shows some reduction of fidelity with each pass, but the fidelity remains relatively high.
We have shown that the state (1/√2)(|001⟩ − |100⟩) can be transferred through the chain with high fidelity even when the chain is quite long. For quantum communication, we need to transfer an arbitrary state with high fidelity. In this case it is important to realize that an encoding which uses |0_L⟩ = |000⟩ and |1_L⟩ = (1/√2)(|001⟩ − |100⟩) can reliably transfer the state, since the vacuum state is fixed throughout. Using this encoding we can fully utilize the extremely reliable state identified above. In Fig. 4 we compare two different encoding schemes for state transfer: circles for the original single-spin encoding [1] and squares for ours. The average fidelity is F_av = (1/4π) ∫₀^π dθ ∫₀^{2π} dφ F(θ, φ) sin θ, i.e. the fidelity averaged over all pure input states on the Bloch sphere. The results are maximized over the time interval [0,100] and magnetic field h ∈ [0, 2]. (Unlike transferring the |1_L⟩ state alone, here the magnetic field can be adjusted to enhance the fidelity, since the relative phase is no longer negligible.) For the single-spin encoding, F_av decreases relatively quickly with increasing N. However, using our scheme, F_av decreases very slowly with increasing N and remains relatively high even in a long chain (N ≤ 80). For example, at N ≈ 70, F_av is still greater than 0.9.
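As a companion sketch (again illustrative, reusing heisenberg() and logical_one() from the snippet above), the Bloch-sphere average can be estimated by Monte Carlo sampling of input states; the uniform-sphere sampling, chain length, and evolution time are arbitrary choices made here, not values from the paper.

```python
import numpy as np

# |0_L> = |000> carries zero excitations, so the field h contributes a
# relative phase between the two logical states, as noted in the text.

def state_fidelity(theta, phi, n, t, eig=None):
    """Encode cos(t/2)|0_L> + sin(t/2)e^{i phi}|1_L> on sites 0-2, evolve, score."""
    v3 = (np.cos(theta / 2) * np.eye(1, 8, 0).ravel()
          + np.sin(theta / 2) * np.exp(1j * phi) * logical_one())
    psi0 = np.kron(v3, np.eye(1, 2 ** (n - 3), 0).ravel()).astype(complex)
    w, V = np.linalg.eigh(heisenberg(n)) if eig is None else eig
    psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    m = psi_t.reshape(2 ** (n - 3), 8)
    rho = m.T @ m.conj()                       # reduced state of the last 3 spins
    return float(np.real(v3.conj() @ rho @ v3))

def average_fidelity(n, t, samples=400, seed=0):
    rng = np.random.default_rng(seed)
    eig = np.linalg.eigh(heisenberg(n))        # diagonalize once, reuse below
    total = 0.0
    for _ in range(samples):
        theta = np.arccos(rng.uniform(-1, 1))  # uniform measure on the sphere
        phi = rng.uniform(0, 2 * np.pi)
        total += state_fidelity(theta, phi, n, t, eig)
    return total / samples

print(average_fidelity(n=8, t=10.0))
```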
VII. PROTOCOLS FOR IMPROVING RELIABILITY
Here we present two protocols for increasing the reliability of state transfer using our encoded state. One is due to Giovannetti et al. [15,16]; the other generalizes a protocol by Burgarth and Bose [4].
In the first protocol, Giovannetti et al. [15,16] showed that the reliability of the one-spin encoding can be enhanced using a memory. In this protocol, the receiver swaps the state at the end of the chain into a quantum memory for decoding at a later time. This process is repeated at later times, with each swap and storage increasing the overall chance of success. Fig. 3 shows the variation of the fidelity, from which we may infer the chance of success for our protocol. If we perform the swap operation as in Ref. [15] at t = 25, the probability that the |1_L⟩ state has been swapped to the first memory is η = |₄₆,₄₇,₄₈⟨1_L|⟨0| e^{−iHt₁} |1_L⟩₁,₂,₃|0⟩|² ≈ 0.86², which corresponds to the square of the first peak value at N = 48. Performing additional swap operations at later optimal times will increase our already large probability of success, just as it does in the original protocol for the single-spin encoding.
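As a rough illustration of why repetition helps, suppose (a strong simplification, optimistic here since later peaks are lower) that successive swap rounds succeed independently with the same probability η; the cumulative success probability then grows as 1 − (1 − η)^k:

```python
# Back-of-envelope sketch only; independence across rounds is an assumption,
# and in reality later peaks are lower (F_2 ~ 0.76 above).
eta = 0.86 ** 2                       # square of the first peak at N = 48
for k in range(1, 6):
    print(k, round(1 - (1 - eta) ** k, 4))
```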
We next provide a protocol which can confirm whether a state was indeed transferred appropriately; it is a generalization of that presented in Ref. [4]. We begin with two spin chains which are initially decoupled and proceed as follows. First, the logical state is encoded into the first and third sites of the first chain. Then a logical X gate is performed on spins one and three of the second chain, conditioned on the logical state of the first chain being zero [17] (using the standard ordered basis). On the code space the logical X takes the form X_L = |0_L⟩⟨1_L| + |1_L⟩⟨0_L|. (From this it is straightforward to obtain the logical CNOT, which is also performed on spins one and three.) The state of the two chains is then |Ψ⟩ = α|0_L⟩ ⊗ |1_L⟩ + β|1_L⟩ ⊗ |0_L⟩, where the first factor refers to the first chain and the second to the second chain. We now let both chains evolve freely. After the same amount of time as above, we perform a logical CNOT operation to decode. Performing a measurement on the last three spins of the second chain and finding |000⟩ confirms that the state has been reliably sent through the chain. This confirmation, combined with the very high fidelity of our protocol, provides a high probability of reliable transfer, along with confirmation.
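For concreteness, a small sketch (ours, not the authors') constructing this logical X from the code words assumed above and checking that it flips them:

```python
import numpy as np

# Code words: |0_L> = |000>, |1_L> = (|001> - |100>)/sqrt(2). Outside the
# code space this construction acts as zero; any unitary completion of it
# on the full 8-dimensional space would serve equally well.
zero_L = np.eye(1, 8, 0).ravel().astype(complex)        # |000>
one_L = np.zeros(8, dtype=complex)
one_L[0b001], one_L[0b100] = 1 / np.sqrt(2), -1 / np.sqrt(2)

X_L = np.outer(zero_L, one_L.conj()) + np.outer(one_L, zero_L.conj())
print(np.allclose(X_L @ zero_L, one_L), np.allclose(X_L @ one_L, zero_L))
```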
VIII. CONCLUSIONS
We have presented several results for a decoherence-free/noiseless-subspace encoded qubit transferred through an unmodulated spin chain. Although many of the encoded states were not more reliable than the single-spin encoding, we have found remarkable results. Most strikingly, when a 3-qubit(1) state cos(θ/2)|0_L⟩ + sin(θ/2)e^{iφ}|1_L⟩ with θ near 2π/3 and φ near zero is sent through the spin chain, the fidelity is incredibly high, surpassing any known result so far. Even out to two hundred spins, the fidelity remains quite high (≈ 0.7). Building on these results, we have also found very high fidelity for transferring an arbitrary state; for example, when N ≤ 70 the fidelity is F ≥ 0.9. This is a remarkable result for a simple two-spin encoded state, and it is experimentally viable due to its Heisenberg-mediated transfer, with control assumed only on two of the three encoding spins at each end of the chain.
Furthermore, we have shown that our protocol can be combined with a protocol using a local memory to enhance the fidelity beyond an already impressive value. We have also presented a protocol for confirming receipt of the state at the other end. We therefore believe this is by far the best protocol to date for the transfer of a quantum state through an unmodulated spin chain.
"year": 2008,
"sha1": "100885577b6389fab0608a51c9ba9b51bada96fa",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0812.4578",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "100885577b6389fab0608a51c9ba9b51bada96fa",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
262381452 | pes2o/s2orc | v3-fos-license | RNA amplification for successful gene profiling analysis
The study of clinical samples is often limited by the amount of material available. While proteins cannot be multiplied in their natural form, DNA and RNA can be amplified from small specimens and used for high-throughput analyses. Therefore, genetic studies offer the best opportunity to screen for novel insights into human pathology when little material is available. Precise estimates of DNA copy numbers in a given specimen are rarely necessary, however, since most studies investigate static variables such as the genetic background of patients or mutations within pathological specimens, without a need to assess proportionality of expression among different genes throughout the genome. Comparative genomic hybridization of DNA samples represents a crude exception to this rule, since genomic amplification or deletion is compared among different specimens directly. For gene expression analysis, however, it is critical to accurately estimate the proportional expression of distinct RNA transcripts, since such proportions directly govern cell function by modulating protein expression. Furthermore, comparative estimates of relative RNA expression at different time points portray the response of cells to environmental stimuli, indirectly informing about broader biological events affecting a particular tissue in physiological or pathological conditions. This cognitive reaction of cells is similar to the detection of electroencephalographic patterns, which inform about the status of the brain in response to external stimuli. As our need to understand human pathophysiology at the global level has increased, the development and refinement of technologies for high-fidelity messenger RNA amplification have become the focus of increasing interest during the past decade. The need to increase the abundance of RNA has been met not only for gene-specific amplification but, most importantly, for global, transcriptome-wide, unbiased amplification. Unbiased, transcriptome-wide amplification now accurately maintains proportionality among all RNA species within a given specimen. This allows the utilization of clinical material obtained with minimally invasive methods, such as fine needle aspirates (FNA) or cytological washings, for high-throughput functional genomics studies. This review provides a comprehensive and updated discussion of the literature on the subject, critically discusses the main approaches and their pitfalls, and provides practical suggestions for successful unbiased amplification of the whole transcriptome in clinical samples.
Introduction
Quantification of gene expression is a powerful tool for the global understanding of the biology underlying complex pathophysiological conditions. Advances in gene profiling analysis using cDNA- or oligo-based microarray systems have uncovered genes critically important in disease development, progression, and response to treatment [1-12]. While the expression of a single or a limited number of genes can be readily estimated using minimal amounts of total or messenger RNA (mRNA) from experimental or clinical samples, gene profiling requires large amounts of RNA, which, given the often limited amount of clinical material, can only be generated by global RNA amplification. Conventionally, at least 50-100 µg of total RNA (T-RNA) or 2-5 µg of poly(A)+ RNA are necessary for global transcript analysis, although efforts to enhance signal intensity and fluorochrome incorporation have reduced the amount of total RNA needed for array analysis to 1-5 µg [13]. Such large amounts of RNA are not usually obtainable from clinical specimens; they pertain to experimental settings where cultured cell lines or tissues from pooled experimental models are used, and only occasionally can they be obtained from large excisional biopsies [14]. Most biological specimens obtained directly ex vivo for diagnostic or prognostic purposes, or for clinical monitoring of treatment, are too scarce to yield enough RNA for high-throughput gene expression analysis. Needle or punch biopsies provide the opportunity to serially sample lesions during treatment, or to sample a lesion to identify predictors of treatment outcome by observing the fate of the lesion left in place. In addition, the simplicity of the storage procedure associated with the collection of small samples, which can be performed at the bedside, provides superior RNA quality with minimal degradation [15]. Finally, the hypoxia which follows ligation of tumor-feeding vessels before excision is avoided with these minimally invasive methods, yielding a true snapshot of the in vivo transcriptional program. These minimally invasive sampling techniques generally yield a few micrograms of total RNA, and most often even less [15,16]. Similarly, breast and nasal lavages and cervical brush biopsies, routinely used for pathological diagnosis, generate insufficient material, far below the detection limit of most assays. Acquisition of cell subsets by fluorescent or magnetic sorting or laser capture micro-dissection (LCM), for a more accurate portrayal of individual cell interactions in a pathological process, generates even less material, in most cases nanograms of total RNA [17-20].
Efforts have been made to broaden the utilization of cDNA microarrays using two main strategies: intensifying the fluorescence signal [13,21-24] or amplifying RNA. Signal intensification approaches have reduced the RNA requirement a few fold but cannot extend the utilization of microarrays to sub-microgram levels. RNA amplification, in turn, has gained great popularity based on its amplification efficiency, linearity, and reproducibility, lowering the amount of total RNA needed for microarray analysis to nanograms without introducing significant biases. Methods aimed at the amplification of poly(A) RNA [25] via in vitro transcription (IVT) [26] or cDNA amplification via polymerase chain reaction (PCR) [27] have reduced the material needed for cDNA microarray applications and extended the spectrum of clinical samples that can be studied. Nanograms of total RNA have been successfully amplified into micrograms of pure mRNA for screening of the entire transcriptome, without losing the proportionality of gene expression displayed by the source material. Curiously, the most important advances were made by Eberwine, whose main goal was not to use clinical material for high-throughput studies but rather to amplify enough material from single cells for the analysis of individual or few genes [28,29]. His revolutionary contribution has, however, provided a striking opportunity to explore the function of the human genome ex vivo and has exponentially expanded the frontiers of clinical investigation. Modifications, optimizations, and validations of RNA amplification technology based on Eberwine's pioneering work are still being actively explored.
In this review, we summarize efforts to optimize RNA amplification and describe in detail current amplification procedures that have been validated and applied to cDNA microarray analysis.
Collection of source material and RNA isolation
Samples used for RNA isolation and amplification should always be collected fresh and immediately processed. Excisional biopsies should be handled within 20 min and stored at -80°C (for instance in RNAlater™, Ambion, Austin, TX) if RNA isolation cannot be performed right away. Material from FNA should be collected in 5 ml of ice-cold 1× PBS, or another serum-free collection medium, at the patient's bedside to minimize RNA metabolism or degradation. After spinning at 1,500 rpm for 5 minutes at 4°C, 2.5 ml of ACK lysing buffer should be added with 2.5 ml of 1× PBS and incubated for 5 minutes on ice to lyse red blood cells (RBC) in case of excessive contamination. Cell pellets should be washed in 10 ml 1× PBS and then re-suspended in small volumes of RNAlater followed by snap freezing, or lysed first in 350 µl of RLT buffer with fresh addition of 2-mercaptoethanol (2-ME) (RNeasy mini kit, QIAGEN Inc, Valencia, CA, USA) before snap freezing at -80°C. For LCM, good results can be obtained by lysing cells directly in 50 µl RLT buffer with 2-ME. Total RNA (T-RNA) and poly(A) RNA can both be used as starting material for RNA amplification.
The RNA isolation method strongly affects the quality and quantity of RNA. T-RNA can be isolated using commercially available RNA isolation kits. The T-RNA content per mammalian cell ranges from 20 to 40 pg, of which only 0.5-1.0 pg is messenger RNA (mRNA) [30,31]. Sample condition, viability, and the functional status and phenotype of the cells are the major reasons for differential yields of T-RNA. Sample handling with precautions against RNase contamination always improves the quality and quantity of the RNA obtained. T-RNA concentration can be measured with a spectrophotometer at OD260; an OD260/280 ratio above 1.8 is to be expected. When a very limited number of cells is available, such as from LCM or FNA, very low or even negative OD readings may be observed; in this case, the OD reading can be omitted. When RNA is isolated from archived samples, or from samples whose collection and storage conditions were not controlled and optimized, it is preferable to estimate RNA quality and quantity using an Agilent Bioanalyzer (Agilent Technologies Inc., Palo Alto, CA) or RNA gels. Clear 28S and 18S ribosomal RNA bands indicate good RNA quality. Since 28S rRNA degradation occurs earlier than that of 18S rRNA, and mRNA degradation in most cases correlates with 28S rRNA degradation, the 28S/18S rRNA ratio is a good indicator of mRNA quality [32]; ratios equal or close to 2 suggest good RNA quality.
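For routine bookkeeping, the readings described above reduce to one-line calculations. The sketch below uses the standard conversion factor (an A260 of 1.0 corresponds to roughly 40 µg/ml of RNA), which is general laboratory knowledge rather than a value given in this protocol, and hypothetical readings:

```python
# Quick-check sketch for spectrophotometric and gel readings; all input
# values below are hypothetical examples.
def rna_conc_ug_per_ml(a260, dilution=1.0):
    """A260 of 1.0 ~ 40 ug/ml for RNA (standard factor, assumed here)."""
    return a260 * 40.0 * dilution

def rna_quality(a260, a280, band_28s, band_18s):
    return {
        "od_260_280": round(a260 / a280, 2),             # > 1.8 expected
        "ratio_28s_18s": round(band_28s / band_18s, 2),  # ~2 = intact RNA
    }

print(rna_conc_ug_per_ml(0.25, dilution=50))             # 500.0 ug/ml
print(rna_quality(a260=0.25, a280=0.13, band_28s=2.1, band_18s=1.05))
```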
Single strand cDNA synthesis
A critical step in RNA or cDNA amplification is the generation of double-stranded cDNA (ds-cDNA) templates. First strand cDNAs are reverse transcribed from mRNA using oligo dT or random primers. In order to generate full-length first strand cDNA, oligo dT (15-24 nt) with an attached bacteriophage T7 promoter sequence is commonly used to initiate cDNA synthesis [25,29,33-36]. In the case of degraded RNA [37], random primers with an attached T3 RNA polymerase promoter (T3N9) have been used for first and second strand cDNA synthesis [38]. To prevent RNA degradation during denaturation and the reverse transcription (RT) reaction, it is useful to denature the RNA (65°C for 5 minutes or 70°C for 3 minutes) in the presence of RNasin® Plus RNase Inhibitor (Promega, Madison, WI), which forms a stable complex with RNases and inactivates them at temperatures up to 70°C for at least 15 minutes.
To enhance the efficiency of the RT reaction and reduce incorporation errors, the temperature of the RT reaction can be maintained at 50°C [39,40] instead of 42°C to avoid the formation of secondary mRNA structures. This can be done by using a thermo-stable reverse transcriptase (ThermoScript™ RNase H- Reverse Transcriptase, Invitrogen, Carlsbad, CA) or regular RTase [41] in the presence of the disaccharide trehalose [42-44]. Trehalose not only enhances the thermo-stability of RTase but also possesses thermo-activation functions. This modification greatly enhances the accuracy and efficiency of RT with minimal impact on the DNA polymerase activity [39]. The utilization of the DNA binding protein T4gp32 (USB, Cleveland) in RT reactions also improves cDNA synthesis [40,41,45,46]. T4gp32 likely contributes to the qualitative and quantitative efficiency of the RT reaction by reducing higher-order structures of RNA molecules and hence the number of pause sites during cDNA synthesis.
In Van Gelder and Eberwine's T7-based RNA amplification [28], the amount of oligo dT-T7 primer used in the first strand cDNA synthesis can affect the quantity and quality of the amplified RNA. Excessive oligo dT-T7 in the RT reaction can lead to template-independent amplification [47]. This phenomenon is not observed when the template switch approach is combined with in vitro transcription (Wang, E., unpublished data).
Double stranded cDNA (ds-cDNA) synthesis
RNA amplification methods differ according to the strategies used for the generation of ds-cDNA as templates for in vitro transcription or PCR amplification. Two basic strategies have been extensively validated and applied for high-throughput transcriptional analysis. The first is based on Gubler and Hoffman's [48] ds-cDNA synthesis, subsequently optimized by Van Gelder and Eberwine [28,29]. This technology utilizes RNase H digestion to create short RNA fragments that serve as primers to initiate second strand cDNA elongation by DNA polymerase I. Fragments of second strand cDNA are then ligated to each other sequentially by E. coli DNA ligase, followed by polishing with T4 DNA polymerase to eliminate loops and form blunt ends. Amplifications based on this method have been widely used for samples obtained in physiological or pathological conditions and have been extensively validated for fidelity, reproducibility, and linearity compared to un-amplified RNA from the same source materials [29,33,47,49-52].
The alternative ds-cDNA synthesis approach utilizes retroviral RNA recombination as a mechanism for template switching to generate full-length ds-cDNA. The method was initially invented for full-length cDNA cloning and, therefore, its main targets are non-degraded transcripts. Gubler-Hoffman ds-cDNA synthesis has the potential to introduce amplification biases because of a possible 5' under-representation. In addition, the low stringency of the temperature at which ds-cDNA synthesis occurs may introduce additional biases [33]. Although 5' under-representation could, in theory, be overcome by hairpin-loop second-strand synthesis [53], the four enzymes used in that reaction could in turn introduce errors.
To ensure generation of full-length ds-cDNA [54], synthesis is performed taking advantage of the intrinsic terminal transferase activity and template switching ability of Moloney Murine Leukemia Virus RTase [55]. This enzyme adds non-template nucleotides, preferentially dCTP, at the 3' end of the first strand cDNA. A template-switch oligonucleotide (TS primer) containing a short string of dG residues at the 3' end is added to the reaction to anneal to the dC string of the newly synthesized cDNA. This produces an overhang that allows the RTase to switch template and extend the cDNA beyond the dC string to create a short ds-cDNA duplex. After treatment with RNase H to remove the original mRNA, the TS primer initiates second strand cDNA synthesis by PCR. Since the terminal transferase activity of the RTase is triggered only when cDNA synthesis is complete, only full-length single-stranded cDNA will be tailed with the TS primer and converted into ds-cDNA. Using the TS primer, second strand cDNA synthesis is carried out at 75°C after a 95°C denaturing and a 65°C annealing step, in the presence of a single DNA polymerase [35]. This technique, in theory, overcomes the bias generated by amplification methods that depend only on 3' nucleotide synthesis and hence should be superior to Gubler-Hoffman ds-cDNA synthesis. However, no significant differences in the correlation coefficients of amplified versus non-amplified RNA were observed when the Gubler-Hoffman ds-cDNA method was compared with TS ds-cDNA amplification using high-throughput analysis [40,56]. The fidelity of template switch-based amplification methods has been assessed by numerous gene profiling analyses on different types of microarray platforms, by real-time PCR, and by sophisticated statistical analyses, and it is well accepted for high-throughput transcriptome studies.
RNA amplifications
Linear amplification

Amplification of mRNA without skewing relative transcript abundance remains a focus of research. Linear amplification methods have been developed that, in theory, should maintain the proportionality of each RNA species present in the original sample. IVT using ds-cDNA equipped with a bacteriophage T7 promoter [28] provides an efficient way to amplify mRNA sequences and thereby generate templates for synthesis of fluorescently-labeled single-stranded cDNA [25,26,28,29,33,53]. Depending upon the position of the T7 or other (T3 or SP6) promoter sequence on the ds-cDNA, amplified RNA can be in either sense or antisense orientation. Oligo dT attached to the promoter sequence (for example, oligo dT-T7) primes first strand cDNA synthesis, positions the promoter at the 3' end of genes (5' end of the cDNA) and, therefore, leads to the amplification of antisense RNA (aRNA), also called complementary RNA (cRNA). Promoters positioned at the 5' end of genes by random [57] or TS primers (Wang E, unpublished observation) generate sense RNA (sRNA). Amplified sRNA can also be produced by tailing oligo dT onto the 3' end of the cDNA, followed by oligo dA-T7 priming to generate a double-stranded T7 promoter at the 5' end of genes [58]. The distinctive feature of this approach is the utilization of a DNA polymerase blocker at the 3' end of the oligo dA-T7 primer, which prevents elongation of second strand cDNA synthesis while priming elongation of the double-stranded promoter. In this fashion, only sense amplification can be achieved, via the 5' ds-T7 promoter followed by single-stranded cDNA templates.
IVT using DNA-dependent RNA polymerase is an isothermal reaction with linear kinetics. The input ds-cDNA templates are the only source of template throughout the amplification and, therefore, any errors created on the newly synthesized RNA will not be carried over or amplified in subsequent reactions. Overall, RNA polymerase makes an error at a frequency of about once in 10,000 nucleotides, corresponding to about once per RNA strand created (http://www.rcsb.org/pdb/molecules/pdb40_1.html). This contrasts with DNA-dependent DNA polymerase, which incorporates an error once in every 400 nucleotides; most importantly, these errors are exponentially amplified in subsequent reactions, since the amplicons serve as templates. Thus, RNA polymerase catalyzes transcription robustly and efficiently, without sequence-dependent bias. Recombinant RNA polymerases have been engineered to enhance the stability of the enzyme-template interaction and reduce the abortive tendency [59] of the wild-type RNA polymerase, which in turn improves the elongation phase and results in complete mRNA transcripts. The length of amplified RNA ranges from 200 to 6,000 nucleotides for the first round of amplification, and from 100 to 3,000 nucleotides for the second round when random primers are used [36,60]. The amplification efficiency is greater than 2,000-fold in the first round and 100,000-fold in the second round [35,60]. Two rounds of IVT are commonly required when sub-microgram amounts of input total RNA are used. It has been estimated that after two rounds of amplification the frequency of only 10% of the genes in a specimen is reduced [61], and more than two rounds of amplification may still retain, at least in part, the proportionality of gene expression among different RNA populations [35]. However, we generally do not recommend going beyond two rounds of amplification unless necessary for extremely scant specimens, such as single- or few-cell specimens, to avoid unnecessary amplification-related biases. The fidelity of IVT has been extensively assessed by gene profiling analysis, quantitative real-time PCR, and statistical testing comparing estimates of gene expression in amplified versus non-amplified RNA [35].
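To put these figures in perspective, a back-of-envelope yield estimate can be made from the fold-amplification values quoted above. Note that the ~2% mRNA fraction of total RNA is an assumed typical value, and the 100,000-fold figure is read here as the cumulative amplification over two rounds; both are illustrative choices, not claims from the protocol:

```python
# Illustrative yield estimate only; see the assumptions stated above.
def arna_yield_ug(total_rna_ng, rounds, mrna_fraction=0.02,
                  fold_one=2000, fold_two=100000):
    mrna_ng = total_rna_ng * mrna_fraction
    fold = fold_one if rounds == 1 else fold_two
    return mrna_ng * fold / 1000.0            # ng -> ug

print(arna_yield_ug(100, rounds=1))   # from 100 ng total RNA, one round: ~4 ug
print(arna_yield_ug(10, rounds=2))    # from 10 ng total RNA, two rounds: ~20 ug
```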
Pitfalls have also been associated with IVT. The fidelity of the first round of amplification decreases when the input starting material is less than 100 ng, because of the intrinsic low abundance of transcripts (particularly those under-represented in the biological specimen). This can be rescued by two rounds of IVT if sufficient RNA species are present in the input material [35]. In addition, two rounds of amplification tend to introduce a 3' bias, due to the use of random primers in the cDNA synthesis for ds-cDNA template creation. This should not affect the usefulness of the technique for high-throughput gene profiling analysis, since cloned cDNA arrays are 3' biased and even oligo arrays are designed to target the 3' end of each gene. Sequence-specific biases introduced during amplification are generally reproducible and, although negligible, could mislead data interpretation when amplified RNA is directly compared with non-amplified RNA on the same array platform. This type of error can easily be circumvented by comparing only samples processed in identical conditions. Degradation of amplified RNA during prolonged (more than 5 hours) IVT may result in a lower average size of aRNA and decreased yields [37]. This results from residual RNase in the enzyme mixture used for the IVT reaction and can be prevented by adding RNase inhibitor to the reaction if prolonged amplification is needed.
PCR-based exponential amplification
IVT is burdensome and time-consuming, and may, theoretically, produce a 3' bias, especially when two rounds of amplification are employed. Exponential (PCR-based) amplification may avoid these drawbacks and has shown promise since, in contrast to IVT, it is simple and efficient. However, PCR-based amplification has its own drawbacks.
The limitations of PCR-based amplification stem from the characteristics of the DNA-dependent DNA polymerase enzymatic function. The enzyme amplifies GC-rich sequences less efficiently than AT-rich sequences. In addition, as previously discussed, it not only creates errors more frequently than RNA polymerase but also amplifies these mistakes, because the reaction utilizes the amplicons as templates for subsequent amplification [62]. Furthermore, due to the exponential amplification, the reaction can reach saturation when excess input template is used or when the substrate becomes exhausted. This favors the amplification of high-abundance transcripts, which compete more efficiently for substrate in the earlier cycles of the amplification process, resulting in loss of proportionality. Optimization of the PCR cycle number to avoid reaching the saturation cycle, and adjustment of the amount of template input, can overcome these problems [63]. The utilization of a DNA polymerase with proofreading function can eliminate errors created during cDNA amplification [64]. This approach preserves the relative abundance of transcripts [65] and may outperform IVT when less than 50 ng of input RNA is available as starting material [66,67].
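A toy simulation makes the loss of proportionality tangible. All numbers below are invented for illustration: a per-cycle efficiency below 1 that is lower for GC-rich templates, plus a crude headroom factor standing in for substrate exhaustion:

```python
# Toy model of PCR bias; efficiencies and the saturation cap are made up.
def pcr(at_rich, gc_rich, cycles, e_at=0.95, e_gc=0.80, cap=1e12):
    a, g = float(at_rich), float(gc_rich)
    for _ in range(cycles):
        headroom = max(0.0, 1.0 - (a + g) / cap)  # saturation near the cap
        a += a * e_at * headroom
        g += g * e_gc * headroom
    return a, g

for cycles in (15, 25, 35):
    a, g = pcr(1e4, 1e4, cycles)
    print(cycles, f"AT/GC ratio = {a / g:.1f}")   # drifts away from 1.0
```

Keeping the cycle number below the saturation regime, as suggested above, limits the drift but does not remove the efficiency-driven component of the bias.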
PCR-based cDNA amplification can be categorized as template switching (TS)-PCR [52,68,69], random PCR [70], or 3' tailing with 5' adaptor ligation PCR [71], according to how the 5' anchor sequence that provides a platform for 5' primer annealing is generated. TS-PCR employs the same template switch mechanism for ds-cDNA generation and amplifies the ds-cDNA using a 5' TS primer II (truncated TS primer) and a 3' oligo dT or dT-T7 primer (depending upon the primer used in the first strand cDNA synthesis). Random PCR utilizes modified oligo dT primers (dT-T7 or dT-TAS (Target Amplification Sequence)) or random primers with an adaptor sequence for first strand cDNA initiation, and random primers carrying the same adaptor, for example dN10-TAS [70], for second strand cDNA synthesis. The attached sequence, such as TAS, generates a 5' anchor on the cDNA for subsequent PCR amplification with a single TAS-PCR primer. This approach is more suitable for partially degraded RNA, which carries the risk of under-representation of the 5' end. The third exponential amplification strategy utilizes terminal deoxynucleotidyl transferase to add a homopolymer tail, for example poly(dA), to the 5' end of the gene. The tailed poly(dA) provides an annealing position for the oligo dT primer, which primes second strand cDNA synthesis. The ds-cDNA can then be amplified with a single oligo dT primer, or a dT-adaptor primer if an adaptor sequence is attached [66]. Direct adaptor ligation is an alternative way to generate ds-cDNA with a known anchor sequence at the 5' end [71]. In this approach, single-stranded cDNA is generated using oligo dT primers immobilized onto magnetic beads, and second strand cDNA is completed by Van Gelder and Eberwine's ds-cDNA generation method. A ds-T7 promoter-linker is then unidirectionally ligated to the blunted ds-cDNA at the 5' end. PCR amplification can then be performed using the 5' promoter primer and the 3' oligo dT or dT-adapter primer, if an adapter is attached. PCR-amplified ds-cDNA is suitable for either sense or antisense probe arrays.
The combination of PCR amplification to generate sufficient ds-cDNA template, followed by IVT [70,71], is an attractive strategy to amplify minimal starting material, since it takes advantage of the efficiency of the PCR reaction and the linear kinetics of IVT while minimizing the disadvantages discussed above. Validations of PCR-based RNA amplification methods are fewer than those for IVT but have so far been persuasive, in spite of prevalent expectations to the contrary. Skepticism concerning reproducibility and linearity remains one of the key factors preventing extensive application of this approach.
Target labeling for cDNA microarray using amplified RNA
The generation of high-quality cDNA microarray data depends not only on a sufficient amount of highly representative amplified target, but also on the efficacy and reproducibility of target labeling. Steps involved in target preparation, such as RNA amplification, target labeling, pre-hybridization, hybridization, and slide washing, are imperative for enhancing foreground-signal to background-noise ratios. A linear spectrum of signal intensity that correlates with gene copy number, without the need to compensate detection sensitivity, is one of the key factors for high-quality cDNA analysis. Therefore, target labeling is a critical step in consistently achieving high-quality signal images.
Typically, fluorescently-labeled cDNA is generated by incorporation of conjugated nucleotide analogs during the reverse transcription process. Depending upon the detection system, labels can be radioactive, colorimetric, or fluorescent. Fluorescence labeling outperforms the other labeling methods because of its versatile excitation and emission wavelengths; in addition, it has the advantage of not being hazardous. Among the fluorochromes, Cy3 (N,N8-(dipropyl)-tetramethylindocarbocyanine) and Cy5 (N,N8-(dipropyl)-tetramethylindodicarbocyanine) are most commonly used in cDNA microarray applications because of their distinct emissions (510 and 664 nm, respectively). Cy5-labeled dUTP and dCTP are incorporated less efficiently during the labeling reaction than Cy3-labeled dUTP or dCTP, and they are more sensitive to photobleaching because of their chemical structure. Therefore, labeling bias needs to be accurately analyzed, and results should be normalized according to standard normalization procedures.
Target labeling can be divided into two major categories: direct fluorescence incorporation and indirect fluorescence incorporation. The first category utilizes fluorescence-labeled dUTP or dCTP to partially substitute for unlabeled dTTP or dCTP in the RT reaction to generate Cy-dye-labeled cDNA. This label incorporation method is suitable for cDNA clone microarrays using amplified aRNA as template, or for oligo arrays using amplified sRNA as template.
A limitation of direct labeling is that fluorescent nucleotides are not the normal substrates for polymerases, and some polymerases may be particularly sensitive to the structural diversity of these artificial nucleotides. The fluorescent moieties associated with these nucleotides are often quite bulky and, therefore, the efficiency of their incorporation by polymerases tends to be much lower than that of natural substrates. An alternative is to incorporate, either synthetically or enzymatically, a nucleotide analog similar in structure to the natural nucleotide but featuring a chemically reactive group, such as 5-(3-aminoallyl)-2'-deoxyuridine 5'-triphosphate (aa-dUTP), to which a fluorescent dye, such as a Cy dye, may then be attached [72]. The reactive amine of aa-dUTP can be incorporated by a variety of RNA-dependent and DNA-dependent DNA polymerases. After removing free nucleotides, the aminoallyl-labeled samples can be coupled to the dye, purified again, and then applied to a microarray [73]. The optimal ratio of aa-dUTP to dTTP in the labeling reaction is 2:3.
In theory, indirect labeling outperforms direct labeling by reducing cost and maximizing signal intensity, through increased incorporation of fluorochrome or through signal amplification using fluorescence-labeled antibodies or biotin-streptavidin complexes. However, more steps are involved in purifying the labeled target prior to hybridization, which makes this strategy less frequently used.
RNA amplification protocols
The protocols presented here are routinely used in our laboratory and are provided in response to several inquiries by interested investigators. They are based on a combination of the strategies discussed in the previous sections, which we have applied to optimize TS-IVT following Eberwine's original RNA amplification protocols.
Material and reagents
Dilute stock solution to the appropriate working concentration.
Primers
Oligo dT-T7 primer (5' AAA CGA CGG CCA GTG AAT TGT AAT ACG ACT CAC TAT AGG CGC T(15) 3') (0.125-0.25 µg/µl for the first round of amplification, depending on the amount of input total RNA, and 0.5 µg/µl for the second round) in RNase-free water. The synthesized primer should be SDS-PAGE purified to ensure full length. The primer concentration is varied according to the starting material used. This promoter sequence is much longer than the consensus sequence defined by Dunn and Studier (1983) and can be purchased from New England Biolabs and Stratagene Inc. In the extended sequence shown here, the consensus sequence is embedded between a 5'-flanking region that provides space for the T7 RNA polymerase to bind and a 3'-flanking trinucleotide that stimulates transcription catalyzed by the enzyme.
TS primer (5' AAG CAG TGG TAA CAA CGC AGA GTA CGC GGG 3') (0.25 µg/µl), SDS-PAGE purified. According to Chenchik's data [74], ribonucleotide GGG at the 3' end should give a better TS effect than deoxynucleotide GGG. We have used the TS primer with dGGG at the 3' end in multiple experiments and achieved satisfying results. The amount of TS primer used in second strand synthesis can be varied according to the amount of starting material: we generally use 0.25 µg/µl when 3-6 µg of total RNA is used, and 0.125 µg/µl when less total RNA is used.
Columns
Note: Since the TS primer which initiates the second strand cDNA synthesis is already present in the first strand cDNA synthesis reaction and has already annealed to the extended portion of the cDNA, no additional primer is required in this step.
Stop reaction with 7.5 µl 1 M NaOH solution containing 2 mM EDTA and incubate at 65°C for 10 min. to inactivate enzymes.
(The reaction can be stopped after this step and the reaction tube stored at -20°C.)

Double stranded cDNA cleanup

(This step is designed to prevent carry-over of non-incorporated dNTPs, primers, and inactivated enzymes into the subsequent in vitro transcription. Keep in mind that although the double stranded cDNAs are stable and will not be affected by RNase contamination, they will be used as templates in the IVT reaction, which must be RNase free.) According to Ambion, the incubation can be interrupted by storing the reaction tube at -20°C and resuming the incubation later without losing efficiency.
Purification of amplified RNA
Any commercial RNA isolation kit can be applied. Monophasic reagents such as TRIzol reagent from Gibco-BRL (Cat#15596) are used here based on the efficient recovery of aRNA. (The RNeasy mini kit could be used for aRNA purification instead of TRIzol but, in our experience, RNA recovery is about 50% of that recovered with the TRIzol method.)
a. Add 0.5 ml of TRIzol solution to the transcription reaction. Mix the reagents well by pipetting or gentle vortexing.
b. Add 100 µl chloroform. Mix the reagents by inverting the tube for 15 seconds. Allow the tube to stand at room temperature for 2 -3 minutes.
c. Centrifuge the tube at 10,000 g for 15 min at 4°C.

d. Transfer the aqueous phase to a fresh tube and add 250 µl of isopropanol.
e. Store the sample on ice for 5 minutes and then centrifuge at 10,000 g for 15 minutes.
f. Wash the pellet twice with 800 µl 70% EtOH.

g. Allow the pellet to air-dry on ice and then dissolve it in 20 µl DEPC H2O.

h. Measure the RNA concentration spectrophotometrically.
Second round of amplification
Mix amplified aRNA (0.5-1 µg) in 9 µl DEPC H2O with 1 µl (2 µg/µl) random hexamer (i.e. dN6), heat to 70°C for 3 min, and cool to room temperature. Then add the following reagents. (Note: more than 1 µg of aRNA is not suggested; too much template in the IVT reaction can cause the amplification to reach a plateau, with loss of linearity. Because a random primer is used here, 42°C instead of 50°C is used.) From here, follow the previously described procedures for second strand cDNA synthesis and double stranded cDNA cleanup. In the second IVT, 40 µl of IVT reaction mixture is suggested instead of 20 µl. RNA isolation then follows. Note: the amount of aRNA used for labeling depends on the size of the array. For an array with 2,000-8,000 genes, 3 µg aRNA will be sufficient, while a larger chip, such as a 16-20 k array, will need 6 µg of aRNA. The labeling reaction components do not need to be changed.
Target clean up
Prepare a Bio-6 column and run the target solution through it. Collect the flow-through and add 250 µl 1× TE to it. Concentrate the target to ~20 µl using a Microcon YM-30 column. | 2014-10-01T00:00:00.000Z | 2005-07-25T00:00:00.000 | {
"year": 2005,
"sha1": "6e3d682f3b0f349cbc377b12396533e334a7e6fc",
"oa_license": "CCBY",
"oa_url": "https://translational-medicine.biomedcentral.com/counter/pdf/10.1186/1479-5876-3-28",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1edfa3b6072d4d5156a86ee63c4d34f33460a176",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
18240145 | pes2o/s2orc | v3-fos-license | Tip110 interacts with YB-1 and regulates each other’s function
Background Tip110 plays important roles in tumor immunobiology, pre-mRNA splicing, expression regulation of viral and host genes, and possibly protein turnover. It is clear that our understanding of Tip110 biological function remains incomplete. Results Herein, we employed an immunoaffinity-based enrichment approach combined with protein mass spectrometry in an attempt to identify Tip110-interacting cellular proteins. A total of 13 major proteins were identified in complex with Tip110. Among them was Y-box binding protein 1 (YB-1). The interaction of Tip110 with YB-1 was further dissected and confirmed to be specific and to involve the N-termini of both Tip110 and YB-1. An HIV-1 LTR promoter-driven reporter gene assay and a CD44 minigene in vivo splicing assay were chosen to evaluate the functional relevance of the Tip110/YB-1 interaction. We showed that YB-1 potentiates the Tip110/Tat-mediated transactivation of the HIV-1 LTR promoter, while Tip110 promotes the inclusion of exon 5 in CD44 minigene alternative splicing. Conclusions Tip110 and YB-1 interact to form a complex and mutually regulate each other's biological functions.
Background
HIV-1 Tat-interacting protein of 110 kDa (Tip110), also known as squamous cell carcinoma antigen recognized by T cells 3 (SART3), was initially identified from a human myeloid cell line KG-1 cDNA library in 1995 [1]. Several important biological functions have been attributed to this gene/protein since its identification. A considerably elevated level of Tip110 is detected in a variety of human cancers [2-9], and it has been proposed as a tumor antigen for immunotherapy. In addition, Tip110 binds to small nuclear RNA U6 and regulates eukaryotic pre-mRNA splicing ([10,11] and our unpublished data). It also preferentially regulates the inclusion of exon 1a and skipping of exon 1b of the OCT4 gene [12]. Alteration of splicing components induced by a mutation in early grey, a Tip110 orthologue in zebrafish, leads to organ-specific defects and embryonic death [13], suggesting an important role of Tip110 in development. Furthermore, Tip110 is directly involved in expression regulation of viral and host genes, including HIV-1, the androgen receptor, and the stem cell factors c-MYC, GATA-2, NANOG, and SOX2 [14-17]. Lastly, Tip110 has been shown to interact with ubiquitin-specific peptidases (USP) such as USP4 and to regulate protein degradation [18]. It is clear that our understanding of Tip110 biological function is still rapidly evolving.
To further understand the biological function of Tip110, we began by identifying cellular proteins that interact with Tip110, using an immunoaffinity-based enrichment approach followed by protein mass spectrometry. We isolated a total of 13 Tip110-binding proteins. Among them was Y-box binding protein 1 (YB-1). In this study, we characterized the Tip110/YB-1 interaction and its impact on each protein's function.
Methods
Cell culture and transfection

293T cells were purchased from the American Tissue Culture Collection (ATCC) and grown in Dulbecco's modified Eagle's medium containing 10% fetal bovine serum. The cell line was maintained in 100 IU/ml penicillin-100 μg/ml streptomycin and incubated at 37°C in 5% CO2. Cells were transfected using the standard calcium phosphate precipitation method. The chloramphenicol acetyltransferase (CAT) reporter gene assay was performed as described previously [14].
Mass spectrometry

293T cells were mock transfected or transfected with pTip110-HA. At 72 hr post-transfection, the cells were lysed with WCEB buffer (50 mM Tris.HCl, pH 8.0, 280 mM NaCl, 0.5% NP-40, 0.2 mM EDTA, 2 mM EGTA, 10% glycerol, 2 mM PMSF, and protease inhibitors) and applied to an anti-HA affinity matrix column (Roche). The column was washed with 20 bed volumes of washing buffer (20 mM Tris.HCl, pH 7.5, 0.1 M NaCl, 0.1 mM EDTA) and then incubated at 37°C with elution buffer containing HA peptide (1 mg/ml) for 15 min. A portion of the eluates was mixed with 4× SDS-PAGE loading buffer and resolved by 10% SDS-PAGE; the remaining eluates were analyzed by LC-MS/MS on a Waters Q-Tof Ultima mass spectrometer at the Yale Cancer Center Mass Spectrometry and W.M. Keck Foundation Biotechnology resource, followed by an automated Mascot algorithm search against the NCBI database.
Immunoprecipitation and Western blot analysis

293T cells were transfected with plasmids as indicated and harvested 72 hr post-transfection. The cells were lysed in WCEB lysis buffer. Lysates were cleared of cell debris by centrifugation. Antibodies (1-2 μg/mg protein) were added to the lysates and incubated at 4°C on a rotating device for 2 hr, and protein A agarose beads (Upstate, Temecula, CA) were added to precipitate the complexes. Pelleted beads were washed, suspended in 4× SDS loading buffer, boiled, and used for SDS-PAGE. The proteins were transferred onto a HyBond-P membrane (Amersham, UK). The membrane was probed with primary antibodies and the appropriate peroxidase-labeled secondary antibody, then visualized with an ECL system.

CD44 minigene splicing assay

293T cells were plated on 6-well plates, grown to 60-70% confluence, and then transfected with the pCD44-v5 splicing reporter, pTip110-HA, pYB-1, or pYB-1 mutant plasmids. At 72 hr post-transfection, total RNA was isolated using Trizol (Invitrogen, Carlsbad, CA), followed by acid:phenol extraction to prevent residual DNA from being used as a PCR template. Total RNA (0.6 μg) was used for reverse transcriptase (RT)-PCR using the Titan one tube RT-PCR system (Roche, Indianapolis, IN) and primers 5′-GAG GGA TCC GCT CCT GCC CC-3′ and 5′-CTC CCG GGC CAC CTC CAG TGC C-3′, with a program of 35 cycles of 94°C for 60 s, 61°C for 60 s, and 72°C for 90 s. The RT-PCR products were separated on a 1% agarose gel.
Data analysis

Where appropriate, values were expressed as mean ± SD of triplicate experiments. All comparisons were made against the control using a two-tailed Student's t-test. A p value of < 0.05 was considered statistically significant (*), p < 0.01 highly significant (**), and p < 0.001 strongly significant (***). All data were representative of multiple repeated experiments.
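For reference, the stated analysis corresponds to a few lines of code; the snippet below is a hedged sketch with hypothetical triplicate values, using SciPy's two-tailed Student's t-test:

```python
# Sketch of the stated analysis; the numeric values are hypothetical.
from scipy import stats

def significance(control, treated):
    t, p = stats.ttest_ind(control, treated)   # two-tailed, equal variances
    stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    return round(p, 4), stars

print(significance([1.00, 1.05, 0.95], [1.80, 1.95, 1.70]))
```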
Identification of Tip110-interacting proteins
To identify Tip110-interacting proteins, 293T cells were transfected with the pTip110-HA plasmid. Cell lysates were prepared and passed through an HA-affinity column. Following extensive washes, the bound proteins were eluted and fractionated by SDS-PAGE. In parallel, cell lysates from pcDNA3-transfected cells were also included. Coomassie blue staining revealed several protein bands whose intensity differed between the Tip110-HA and pcDNA3 samples (Figure 1A). Those proteins were recovered for mass spectrometric identification. Besides the bait protein Tip110 itself, 13 major proteins were identified (Table 1). They included cytoskeletal proteins, heat shock proteins, ribonucleoproteins, skin proteins, and two ungrouped proteins, importin-2α and Y-box binding protein 1 (YB-1). The interactions of Tip110 with all of those proteins were further examined and confirmed by immunoprecipitation followed by Western blot analysis (data not shown and see below). In this study, we chose to focus on YB-1, as YB-1 and Tip110 appear to possess several similar functions. First, both Tip110 and YB-1 bind to RNA and are involved in post-transcriptional regulation such as pre-mRNA splicing [19,20]. Second, both Tip110 and YB-1 are highly expressed in some cancers [21]. Third, both Tip110 and YB-1 interact with the HIV-1 Tat protein and regulate HIV-1 gene expression. Lastly, both Tip110 and YB-1 are regulated by the transcription factor c-Myc [16,22]. Together, these observations suggested that Tip110 might form a complex with YB-1, that the two proteins might regulate each other's function, and that this interaction might have physiological significance.
Tip110 interacted with YB-1 and its molecular determinants
To confirm the Tip110 interaction with YB-1, 293T cells were transfected with pTip110-HA, pYB-1-Myc, or both. Western blot analysis using anti-HA or anti-Myc antibody showed that both Tip110 and YB-1 were expressed in 293T cells (Figure 1B, top panels). Immunoprecipitation of cell lysates using anti-HA antibody, followed by Western blotting with anti-Myc or anti-HA antibody, showed that YB-1 was detected in the Tip110 immunoprecipitates only when Tip110-HA and Myc-YB-1 were co-expressed (Figure 1B, bottom panels). Immunoprecipitation and Western blotting were performed with the very same antibody to verify the immunoprecipitation efficiency of the antibody. Similar experiments were also performed with 293T cells transfected with pTip110-HA only. Endogenous YB-1 was also detected in the Tip110 immunoprecipitates of Tip110-HA-expressing cells, but not in the isotype IgG immunoprecipitates (Figure 1C). These results suggest that Tip110 complexes with YB-1 in vivo.
To further determine the specificity of the interaction, we took advantage of a series of Tip110 mutants containing deletions of the N-terminal HAT domain (ΔNT), RRM domain (ΔRRM), NLS domain (ΔNLS), or the C-terminal domain (ΔCT) (Figure 2A) [14]. 293T cells were transfected with pTip110-HA or each of the Tip110 mutants. Tip110 and its mutants were expressed at the expected molecular weights (Figure 2B, top two panels). Immunoprecipitation with anti-YB-1 antibody, followed by Western blot analysis using anti-Tip110 antibody, showed detection of Tip110, ΔCT, ΔRRM, and ΔNLS, but not ΔNT, in the YB-1 immunoprecipitates (Figure 2B, bottom two panels). Similarly, we performed binding experiments of Tip110 with deletion mutants of YB-1 lacking the N-terminal domain (aa1-129 deleted, YB-1ΔN) or the C-terminal domain (aa130-324 deleted, YB-1ΔC) (Figure 3A) and found that deletion of the N-terminal domain of YB-1 (YB-1ΔN) prevented its complex formation with Tip110 (Figure 3B). Taken together, these results further confirmed the complex formation between Tip110 and YB-1 (Table 1) and suggest that the N-terminal domains of both Tip110 and YB-1 are directly involved in the complex formation.
YB-1 modulated Tip110/Tat-mediated transactivation of the HIV-1 LTR promoter
Tip110 interacts with the HIV-1 Tat viral protein and transactivates the HIV-1 LTR promoter [14,23]. Thus, we chose an LTR promoter-driven chloramphenicol acetyltransferase (CAT) reporter gene to determine the effects of YB-1 expression on this unique Tip110 function. Initial experiments were performed to optimize the input amounts of the LTR-CAT, Tat, Tip110, and YB-1 expression plasmids to achieve a low basal LTR promoter activity and a modest level of Tat transactivation activity on the LTR promoter (data not shown). We then transfected 293T cells with LTR-CAT, Tat, Tip110, and/or increasing amounts of YB-1 expression plasmids. pcDNA3 was used to equalize the total amount of DNA transfected, while pC3-GFP was included in the transfections to normalize for transfection variation. As expected [14,23], Tip110 or YB-1 expression increased Tat-mediated transactivation of the LTR promoter (Figure 4). In the presence of Tip110, increased YB-1 expression led to a further increase in the Tat-mediated LTR promoter activity. To further determine the relationship between Tip110/YB-1 complex formation and the effects of YB-1 on Tip110/Tat-mediated transactivation of the LTR promoter, we performed similar experiments with the YB-1ΔN and YB-1ΔC mutants. YB-1ΔC expression led to little CAT activity, while the YB-1ΔN mutant showed CAT activity similar to full-length YB-1.

Figure 1. Proteomic analysis of Tip110-binding cellular proteins including YB-1. A. 293T cells were transfected with pTip110-HA; pcDNA3 was used in the mock transfection. Seventy-two hours post-transfection, the cells were harvested for cell lysates. The lysates were applied to an anti-HA affinity matrix column; Tip110-binding proteins were then eluted from the column and analyzed on 10% SDS-PAGE followed by Coomassie blue staining. B. 293T cells were transfected with pTip110-HA, pYB-1-Myc, or both and harvested 72 hr post-transfection for cell lysates. pcDNA3 was added to equalize the total amount of transfected DNA. Cell lysates were directly used for Western blotting using anti-HA or anti-Myc antibody (top panels), or immunoprecipitated using anti-HA, followed by Western blotting using anti-HA or anti-Myc antibody (bottom panels). C. 293T cells were transfected with Tip110-HA and immunoprecipitation was performed using an anti-HA antibody or isotype-matched IgG, followed by Western blotting using anti-YB-1 or anti-HA antibody. The proteins were organized into groups based on their functions.
These results indicate that YB-1 expression enhanced Tip110/Tat-mediated transactivation of the LTR promoter and suggest that the complex formation between Tip110 and YB-1 is important for this function. The drastic reduction of CAT activity to background levels in cells expressing YB-1ΔC suggests that YB-1ΔC may function in a dominant negative fashion.
Tip110 promoted YB-1-mediated alternative splicing of the CD44 minigene

One of the well-characterized functions of YB-1 is regulation of the alternative splicing of the CD44 gene through interaction with the A/C-rich exon enhancer element [24]. Therefore, we next determined the effects of Tip110 on YB-1-mediated CD44 alternative splicing. We took advantage of a CD44 minigene (Figure 5A) and performed an in vivo RT-PCR-based splicing assay. Initial experiments were performed to optimize the input amounts of the CD44 minigene, YB-1, and Tip110 expression plasmids to ensure RT-PCR-based detection of the alternative splicing (data not shown). As expected [24], YB-1 expression led to increased inclusion of the variable exon 5 (V5) of the CD44 minigene (Figure 5B, top panels). Tip110 expression alone appeared to have a similar enhancement effect. In the presence of YB-1, Tip110 increased inclusion of the V5 exon of the CD44 minigene in a dose-dependent manner. Tip110 and YB-1 expression were determined by Western blotting to ensure that Tip110 expression did not alter YB-1 stability (Figure 5B, bottom panels). We next determined whether Tip110 binding to YB-1 is required for this function. A similar in vivo splicing assay was performed with the Tip110ΔNT and Tip110ΔCT mutants. Compared to full-length Tip110, Tip110ΔNT and Tip110ΔCT expression alone produced little change in alternative splicing of the CD44 minigene (Figure 5C). Interestingly, in the presence of YB-1, Tip110ΔNT also showed little effect, while Tip110ΔCT produced a considerable increase in V5 inclusion of the CD44 minigene. These results suggest that Tip110 binding to YB-1 plays a role in YB-1 function.

Figure 5 (legend; the beginning of panel A was lost in extraction). A. [...] V5 (bottom, -V5). B. 293T cells were transfected with the CD44 minigene plasmid, pYB-1-Myc, pTip110-HA, or both. Input amounts of plasmids are shown at the top, all in micrograms (μg). Total RNA was isolated and used for RT-PCR. The ratio of V5 inclusion (+V5/-V5) was determined by densitometric scanning with the ImageJ software and was normalized to that of cells transfected with the CD44 minigene only. Expression of YB-1 and Tip110 was determined by Western blotting; β-actin was the loading control. C. 293T cells were transfected with the CD44 minigene, pYB-1-Myc, pTip110-HA, or each of the Tip110 mutants. V5 inclusion and quantitation were performed as stated above.
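The quantitation described in the Figure 5 legend above amounts to a simple ratio-and-normalize computation; the following sketch uses hypothetical band intensities:

```python
# Sketch of +V5/-V5 densitometry ratios (e.g., ImageJ band intensities)
# normalized to the minigene-only control lane; all values are hypothetical.
def normalized_v5_inclusion(lanes, reference="minigene_only"):
    ratios = {name: plus / minus for name, (plus, minus) in lanes.items()}
    ref = ratios[reference]
    return {name: round(r / ref, 2) for name, r in ratios.items()}

lanes = {                                  # (+V5 band, -V5 band)
    "minigene_only": (1200.0, 4800.0),
    "plus_YB1": (2600.0, 3400.0),
    "plus_YB1_plus_Tip110": (4100.0, 2100.0),
}
print(normalized_v5_inclusion(lanes))
```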
Discussion
In this study, we took an immunoaffinity approach to enrich Tip110-binding proteins and identified those proteins by mass spectrometry. This was followed by further characterization of the Tip110 interaction with YB-1, the effects of YB-1 on Tip110-mediated transactivation of the HIV-1 LTR promoter, and the effects of Tip110 on YB-1-mediated alternative splicing of the CD44 gene. The results showed that Tip110 bound to YB-1 in a specific manner and that the interaction mutually regulated each protein's function. There were a total of 13 major cellular Tip110-binding proteins, which were grouped according to their function (Table 1). The group included cytoskeletal proteins, heat shock proteins, ribonucleoproteins, skin proteins, and two ungrouped proteins, importin-2α and YB-1. Each of those proteins likely contributes to either a known or an unknown biological function of Tip110. For example, Tip110 binding to cytoskeletal proteins and skin proteins may be involved in a Tip110-related skin keratinization disorder called porokeratosis; a mutation in the Tip110 gene has been linked to this disease [25-27]. It is conceivable that Tip110 function in RNA metabolism, including pre-mRNA splicing, could require its interaction with ribonucleoproteins and heat shock proteins. The interaction of Tip110 with importin-2α could also regulate Tip110 nuclear translocation in different cells and under various physiological conditions. The biological significance of all those interactions clearly warrants further investigation.
We focused on the Tip110/YB-1 interaction in this study. YB-1 is a multi-functional protein that plays important roles in transcriptional and translational regulation, DNA repair, drug resistance and stress responses to extracellular signals (for review, see [28]). YB-1 has recently been shown to regulate pre-mRNA splicing [29]. Co-immunoprecipitation studies with tagged proteins, and immunoprecipitation of endogenous and exogenous YB-1 with overexpressed Tip110 in 293T cells, confirmed this protein-protein interaction (Figure 1B & C). Although Tip110 and YB-1 both contain RNA-binding domains and have been shown to bind RNA, their binding to each other was independent of RNA: RNase A1 treatment of the cell lysates and during the immunoprecipitation did not alter their complex formation (data not shown). The specificity of the binding was further supported by mutagenesis analyses, which showed that the N-terminal domains of both Tip110 and YB-1 were involved in the complex formation (Figures 2 and 3).
To investigate the functional relevance of the Tip110/YB-1 interaction, we tested whether this complex formation affects transactivation of the HIV-1 LTR promoter and alternative splicing activity. Interaction of the HIV-1 Tat protein with its cognate RNA, TAR, is a prerequisite for Tat transactivation. Tip110 and YB-1 both interact with the HIV-1 Tat protein and potentiate Tat transactivation in LTR-driven reporter gene assays [14,23]. Although YB-1 has only been shown to bind to the TAR region of the HIV-1 LTR, the synergistic effects of Tip110 are TAR-dependent, as these effects are attenuated by deletion of the TAR sequence from the HIV-1 LTR promoter. Based on these observations, we speculated that the Tip110/YB-1 complex affects transactivation of the HIV-1 LTR promoter. The LTR-CAT reporter gene assay showed that a fixed concentration of Tip110 combined with increasing concentrations of YB-1 resulted in a further increase in CAT activity (Figure 4), indicating that the Tip110/YB-1 complex modulates HIV-1 gene expression. The reporter gene assay with YB-1 mutants showed that expression of the YB-1ΔC mutant abolished CAT activity, while expression of the YB-1ΔN mutant enhanced the activation to a greater extent than full-length YB-1. One interesting observation made during the course of this study was that expression of the YB-1ΔC mutant protein had a negative effect on Tip110 protein expression by Western blot (data not shown), which may explain the decrease in CAT activity. Furthermore, binding of Tat to YB-1 has been mapped to amino acids 75-203 of YB-1, while the YB-1ΔC mutant comprises amino acids 1-128. Therefore, a stretch of amino acids (residues 129-203) important for Tat binding is deleted in the YB-1ΔC mutant, which may account for the observed inhibition of CAT activity. Additional studies would be required to determine whether the YB-1ΔC mutant could be utilized to impair Tat function and affect HIV-1 gene expression.
Alternative splicing represents an important nuclear mechanism in the post-transcriptional regulation of gene expression. The role of the YB-1 protein in the alternative splicing of CD44 is well documented in the literature. YB-1 binds to A/C-rich exon enhancers and stimulates splicing of the CD44 alternative variable exon 4 [24]. CD44 is essential to the physiological activities of normal cells, but it is also associated with the pathologic activities of cancer cells (for review, see [30]). Pre-mRNA from the human CD44 gene undergoes extensive alternative splicing within a cassette of at least 10 exons [31]. Increased inclusion of these exons has been correlated with cancer and metastasis [32,33]. Here, we utilized a CD44 minigene (Figure 5A) to determine the physiological function of the Tip110/YB-1 interaction. Our results showed that overexpression of YB-1 and Tip110 together in 293T cells enhanced inclusion of the variable exon 5 of the CD44 minigene (Figure 5B). Furthermore, the N-terminal domain of Tip110 (pTip110ΔCT), which is involved in the interaction with YB-1, had higher alternative splicing activity than the C-terminal domain of Tip110 (pTip110ΔNT). These results demonstrate the physiological significance of Tip110/YB-1 complex formation in the regulation of CD44 alternative splicing.
Conclusions
A total of 13 cellular proteins were identified by immunoaffinity purification followed by mass spectrometry. Among them is YB-1. Complex formation between Tip110 and YB-1 was confirmed by immunoprecipitation and Western blotting. The N termini of both Tip110 and YB-1 were found to be required for this complex formation. YB-1 expression enhanced Tip110-mediated transactivation of the HIV-1 LTR promoter, and Tip110 expression increased YB-1-mediated CD44 pre-mRNA alternative splicing. | 2017-08-01T05:02:40.006Z | 2013-07-04T00:00:00.000 | {
"year": 2013,
"sha1": "07a5ba591bd7d0a5042deab885b9ac3c9710690e",
"oa_license": "CCBY",
"oa_url": "https://bmcmolbiol.biomedcentral.com/track/pdf/10.1186/1471-2199-14-14",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "82d36af6da417357b93f4aa5341b14d7cbc5318f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
257102516 | pes2o/s2orc | v3-fos-license | Frequency bin-wise single channel speech presence probability estimation using multiple DNNs
In this work, we propose a frequency bin-wise method to estimate the single-channel speech presence probability (SPP) with multiple deep neural networks (DNNs) in the short-time Fourier transform domain. Since all frequency bins are typically considered simultaneously as input features for conventional DNN-based SPP estimators, high model complexity is inevitable. To reduce the model complexity and the requirements on the training data, we take a single frequency bin and some of its neighboring frequency bins into account to train separate gated recurrent units. In addition, the noisy speech and the a posteriori probability SPP representation are used to train our model. The experiments were performed on the Deep Noise Suppression challenge dataset. The experimental results show that the speech detection accuracy can be improved when we employ the frequency bin-wise model. Finally, we also demonstrate that our proposed method outperforms most state-of-the-art SPP estimation methods in terms of speech detection accuracy and model complexity.
INTRODUCTION
Noise estimation is one of the key components of single-channel and multi-channel speech enhancement, most variants of which rely on the speech presence probability (SPP) to update the noise statistics [1][2][3]. Available noise power spectral density (PSD) estimators also make use of the SPP to decide when to update the noise PSD [4][5][6]. Compared to voice activity detectors (VADs), SPP is a soft-decision approach that exploits inter-band and inter-frame correlations [7]. Accurate SPP estimation can greatly improve the effectiveness of speech enhancement [8,9].
In the short-time Fourier transform (STFT) domain, conventional statistical signal processing methods commonly assume that the spectral coefficients of speech and noise are independent and follow complex Gaussian distributions [10,11]. The SPP can therefore be derived from the a posteriori probability of the time-frequency (T-F) bins of the noisy speech. Under this assumption, [4] applied the minima of a smoothed periodogram to estimate the SPP, making SPP estimation more robust to non-stationary noise. In [5], to achieve a highly accurate SPP estimate with low latency and computational complexity, an optimal fixed a priori SNR was used to ensure that the a posteriori SPP is close to zero when speech is absent. In addition, [7] takes inter-band and inter-frame correlations into account when designing a general SPP estimator.
Recently, deep neural networks (DNNs) have proven effective at processing non-stationary noise, and many novel DNN-based approaches have been proposed to estimate the SPP accurately; these have been applied successfully to speech enhancement and speech recognition [12][13][14]. In these methods, recurrent neural networks (RNNs) [15] are commonly used to acquire information from neighboring frames, since the frames contain temporal information that can improve the accuracy of SPP estimation. In [14], a bidirectional long short-term memory (BLSTM) network was trained on input features spanning multiple time frames and all frequency bins to estimate the SPP. In [12], considering that the ideal ratio mask (IRM) [16] ranges from 0 to 1 at each T-F bin, different DNN models, such as LSTM, BLSTM, gated recurrent units (GRUs), and bidirectional GRUs (BGRUs), were selected to estimate the IRM and approximate the SPP. The problem that arises here is that as model complexity grows and more training data are applied, more powerful hardware is required to train the models.
Inspired by conventional SPP estimation methods, our model estimates the SPP based on the correlation of several neighboring T-F bins, in contrast to the typical DNN-based SPP estimation approach where all frequency bins are regarded as input features. This allows us to use DNNs on a one-to-one basis with frequency bins, thereby vastly reducing the number of parameters in the model and the amount of computation taking place. In this work, we thus propose a frequency bin-wise SPP estimation model in the STFT domain that relies on multiple DNNs to estimate the SPP. In our proposed model architecture, a GRU module is used to extract time and frequency information from each frequency bin and several of its neighbors. Additionally, since IRM-based SPP estimation methods may misclassify T-F bins dominated by non-speech and noise [12,17,18], we choose the a posteriori probability to represent the SPP in the STFT domain.
The work is organized as follows. In Section 2, the problem of frequency bin-wise single channel SPP estimation is formulated. In Section 3, the SPP estimation model with multiple DNNs is designed. In Section 4 and Section 5, the experimental procedures and results are provided, respectively. Finally, Section 6 presents the conclusion. The work can be found on GitHub.
Signal Modeling
For the single-channel speech signal x(n), we assume that it is corrupted by additive noise d(n). In the STFT domain, the noisy speech representation is

Y(k, l) = X(k, l) + D(k, l),  (1)

where k ∈ {0, ..., K − 1} denotes the frequency bin index, K is the number of frequency bins, l ∈ {0, ..., L − 1} denotes the time frame index, and L is the number of time frames. Assuming zero-mean complex Gaussian distributions and independence for X and D, we have

φY(k, l) = E[|Y(k, l)|²] = φX(k, l) + φD(k, l),  (2)

where E[·] is the statistical expectation operator, φX(k, l) = E[|X(k, l)|²] and φD(k, l) = E[|D(k, l)|²]. The PSDs of the clean speech and the noise are represented by φX(k, l) and φD(k, l), respectively. In the STFT domain, there exists a correlation between neighboring T-F bins [7], so the SPP estimate can be improved by exploiting it. The first step in creating our input signal is to obtain a vector corresponding to each individual frequency bin,

φY(k) = [φY(k, 0), ..., φY(k, L − 1)]^T.  (3)

Each frequency bin vector contains L consecutive time frames, which carry contextual information relevant to the estimation of the SPP. Since RNNs are effective at processing temporal information [19,20], we employ them in this work to extract time correlations from the neighboring time frames.
To improve the SPP estimation accuracy, we take a few neighboring frequency bin vectors into consideration to extract frequency correlations from the input signal matrix. The input signal matrix ΦY(k) is then obtained as

ΦY(k) = [φY(k − I), ..., φY(k), ..., φY(k + I)],  (4)

where I is the number of neighboring frequency bin vectors taken on each side. The time and frequency correlations of neighboring time-frequency bins can now be extracted from the input signal matrix ΦY(k). In this work, the SPP is represented by the a posteriori probability [5], and the DNN is used to estimate the SPP from the noisy observation.
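A minimal sketch of how such an input signal matrix could be assembled from a log-power spectrogram is shown below. The function name is ours, and the edge-padding behaviour is an assumption, since the treatment of boundary bins is not specified here.

```python
import numpy as np

def bin_wise_inputs(log_power, I):
    # log_power: (K, L) log-power spectrogram; returns (K, 2*I + 1, L),
    # stacking each bin k with its I lower and I upper neighbours.
    # Edge bins are handled by repeating the boundary bin -- an
    # assumption, since the paper does not state its padding scheme.
    K, _ = log_power.shape
    padded = np.pad(log_power, ((I, I), (0, 0)), mode="edge")
    return np.stack([padded[k:k + 2 * I + 1] for k in range(K)])

# Example: K = 129 bins (256-sample frames at 16 kHz), L = 100 frames.
spec = np.log(np.random.rand(129, 100) + 1e-12)
inputs = bin_wise_inputs(spec, I=1)
print(inputs.shape)  # (129, 3, 100)
```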
Since the typical DNN-based approach takes all frequency bins into account to estimate the SPP, the model complexity can be high. In this section, we therefore design multiple specific DNNs to estimate the frequency bin-wise SPP. Additionally, since the a posteriori probability is derived from the correlation of neighboring T-F bins, the a posteriori probability SPP representation of the clean speech and the noisy speech PSD are used as the training data pairs to train our model.
SPP Estimation Model and Loss Function
To extract the time and frequency correlations of the consecutive T-F bins in the input signal matrix ΦY(k), derived from the observed noisy PSD φY(k, l), we set up K specific DNNs as the regression module. As mentioned in (4), the coefficients of the kth input signal matrix are used to train the kth DNN for the SPP estimate in the kth frequency bin.
First, to train the DNN model, we choose the log-power periodogram as the input feature [21,22]. The input features of each individual DNN are therefore obtained from the log input signal matrix, expressed element-wise as

Φ̃Y(k) = log ΦY(k).  (5)

To update the DNN parameters, the loss between the target and the estimated SPP is calculated by the mean-squared error (MSE), i.e.,

J(k) = (1/L) Σ_{l=0}^{L−1} (SPPest(k, l) − SPPY(k, l))²,  (6)

where SPPest(k, l) denotes the DNN estimate and SPPY(k) = [SPPY(k, 0), ..., SPPY(k, l), ..., SPPY(k, L − 1)]^T is the target function. In this work, the a posteriori probability is regarded as the SPP representation, so SPPY(k, l) is given by

SPPY(k, l) = [1 + (p(H0)/p(H1)) (1 + ξH1) exp(−(|Y(k, l)|²/φD(k, l)) · ξH1/(1 + ξH1))]^(−1),  (7)

where p(H0) and p(H1) denote the a priori speech absence and presence probabilities, and ξH1 is the a priori SNR during speech presence [5].
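A small NumPy sketch of the training-target computation and the MSE loss is given below. The closed form of the a posteriori SPP follows the fixed a priori SNR estimator of [5]; the 15 dB value of ξH1 and the equal priors are assumed defaults taken from that reference, not values stated in this work.

```python
import numpy as np

def a_posteriori_spp(noisy_power, noise_psd,
                     xi_h1=10.0 ** (15.0 / 10.0), p_h0=0.5, p_h1=0.5):
    # A posteriori SPP under the complex-Gaussian model of [5].
    # The 15 dB fixed a priori SNR and the equal priors are assumed
    # defaults from that reference, not values stated in this paper.
    gamma = noisy_power / noise_psd          # |Y(k, l)|^2 / phi_D(k, l)
    ratio = (p_h0 / p_h1) * (1.0 + xi_h1) * np.exp(
        -gamma * xi_h1 / (1.0 + xi_h1))
    return 1.0 / (1.0 + ratio)

def mse_loss(spp_est, spp_target):
    # MSE over the L frames of one frequency bin, as in (6).
    return np.mean((spp_est - spp_target) ** 2)
```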
Model Architecture
In this work, since a GRU can outperform an LSTM both in terms of convergence in CPU time and in terms of parameter updates and generalization [23], we choose GRUs to build the SPP estimation model. The model training strategy is shown in Fig. 1; the DNN model is trained on log-power spectral T-F bins as input features. The training strategy of the typical DNN-based SPP estimation model in Fig. 1(a) shows a GRU module trained on K frequency bins (all frequency bins) and L consecutive time frames. The typical DNN-based model input size is K and, in this work, the size of the hidden layer is the same as the size of the input layer. The proposed training strategy of the frequency bin-wise SPP estimation model is shown in Fig. 1(b). When I neighboring frequency bins are introduced to estimate the SPP of a single frequency bin, the input size is 2I + 1, and a single hidden layer is used. The hidden-state output at each time step is regarded as the SPP estimate for the current frame. Finally, to restrict the output range of the DNN to [0, 1], the output layer is a Softplus activation function with a fixed parameter β.
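A possible PyTorch realization of one bin-wise estimator is sketched below. Reusing the input size as the hidden size, projecting the hidden state to a scalar with a linear layer, and the value of β are our assumptions where the description above leaves the details open.

```python
import torch
import torch.nn as nn

class BinWiseSPP(nn.Module):
    # One GRU per frequency bin: input size 2*I + 1, a single hidden
    # layer, Softplus output. Hidden size = input size follows the
    # paper's rule for the full-band model; reusing it here, and the
    # final linear projection to a scalar, are assumptions.
    def __init__(self, I=1, beta=1.0):   # the beta value is an assumption
        super().__init__()
        n = 2 * I + 1
        self.gru = nn.GRU(input_size=n, hidden_size=n, batch_first=True)
        self.proj = nn.Linear(n, 1)
        self.act = nn.Softplus(beta=beta)

    def forward(self, x):                # x: (batch, L, 2*I + 1)
        h, _ = self.gru(x)               # hidden state at every frame
        return self.act(self.proj(h)).squeeze(-1)   # (batch, L)

models = nn.ModuleList(BinWiseSPP(I=1) for _ in range(129))  # one per bin
```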
EXPERIMENTAL SETTINGS
In this work, the sub-band DNS dataset is used to train our designed model. For testing, 200 noisy utterances (1.1 hours) and 1800 noisy utterances (1 hour) were collected from the DNS dataset [24] and the TIMIT dataset [25], respectively. Each clean utterance is corrupted by a random noise utterance selected from the noise dataset, with SNRs ranging from -5 dB to 25 dB. The noise data include 150 different types of noise taken from the Audioset [26], Freesound [27] and Demand [28] datasets.
The receiver operating characteristic (ROC) curve [29] is used to evaluate the performance of the SPP estimation methods, and the false-alarm probability Pfa = 0.05 given in [7] is used to calculate the speech detection probability, Pd. Additionally, we apply the area under the curve (AUC) metric, which is derived from the ROC and ranges between 0 and 1, to represent overall performance. We also adopt the adaptive threshold, set 60 dB below the maximum instantaneous power across all T-F bins as in [7], to distinguish the speech and non-speech bins across all T-F bins of the clean speech.
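The evaluation protocol can be sketched as follows with scikit-learn; the labeling threshold and the Pd read-off at Pfa = 0.05 follow the description above, while the stand-in data are purely synthetic.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def speech_labels(clean_power):
    # Label a T-F bin as speech if its instantaneous power is within
    # 60 dB of the global maximum, following [7].
    threshold = clean_power.max() * 10.0 ** (-60.0 / 10.0)
    return (clean_power > threshold).astype(int)

def pd_at_pfa(labels, scores, pfa=0.05):
    # Speech detection probability read off the ROC curve at Pfa.
    fpr, tpr, _ = roc_curve(labels, scores)
    return np.interp(pfa, fpr, tpr)

# Stand-in data: silence everywhere except frames 40-59.
clean = np.zeros((129, 100))
clean[:, 40:60] = np.random.rand(129, 20) + 1e-3
spp = np.random.rand(129, 100)
y = speech_labels(clean).ravel()
print(pd_at_pfa(y, spp.ravel()), roc_auc_score(y, spp.ravel()))
```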
The sampling rate of all utterances is 16 kHz. A Hann window is applied in the STFT analysis; the window length is 16 ms and the hop length is 8 ms. We use the mean and standard deviation to normalize the dataset. During training, the Adam optimizer [30] is utilized to optimize the neural network parameters. The learning rate is set to 0.001. Weight decay is set to 0.00001 to prevent overfitting. The learning rate is updated at the 50th and 100th epochs for the implemented DNN models. PyTorch is used to implement the frequency bin-wise SPP estimation model and the reference DNN-based model.
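Read as a standard PyTorch setup, the training configuration might look like the sketch below; interpreting the epoch-50/100 update as a step decay of the learning rate, and the decay factor of 0.1, are assumptions not stated above.

```python
import torch
import torch.nn as nn

model = nn.GRU(input_size=3, hidden_size=3, batch_first=True)  # stand-in
optimizer = torch.optim.Adam(model.parameters(), lr=0.001,
                             weight_decay=0.00001)
# "Updated at the 50th and 100th epochs" is read here as a step decay
# of the learning rate; the decay factor 0.1 is an assumption.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50, 100], gamma=0.1)

for epoch in range(120):
    # ... forward pass, MSE loss, loss.backward(), optimizer.step() ...
    scheduler.step()
```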
RESULTS AND DISCUSSION
In this section, to demonstrate the effectiveness of our method, a comparison between a typical DNN-based model and our proposed method is shown using ROC curves. Moreover, numerical results are provided to evaluate the accuracy and the model complexity of the SPP estimators.
Examination of ROC Curves
To investigate the performance of the proposed method, 200 training utterances (1.1 hours) are used to train our proposed frequency bin-wise model. In addition, 200 utterances (1.1 hours), 1000 utterances (5.5 hours), and 3000 utterances (16.6 hours) are used to train the typical DNN-based model, respectively. To investigate the effect of using neighboring frequency bins in the proposed method, we set I = 0 (no neighboring frequency bins), I = 1 (one neighboring frequency bin), and I = 2 (two neighboring frequency bins) when training the frequency bin-wise model. Fig. 2 shows an example of the SPP estimation results. A noisy utterance of length 20 seconds and input SNR of 11 dB, taken from the DNS dataset, is used to test the typical DNN-based SPP estimation model and the frequency bin-wise model. From Fig. 2, we can observe that the typical DNN-based method and the proposed frequency bin-wise method estimate the SPP with similar accuracy. In addition, we also investigate the impact of the training data volume on the SPP estimation accuracy of the typical DNN-based model. From Fig. 3, we find that when the training data for the typical DNN-based model are increased from 1.1 hours to 5.5 hours and then to 16.6 hours, there is a gradual increase in AUC, but the model still falls short of our proposed method in terms of Pd.
Numerical Results
To evaluate the performance of the proposed method, the speech detection probability and the AUC are calculated from the ROC curves to represent the speech detection accuracy and the effectiveness of the SPP estimation method, respectively. In addition, we also investigate the effect of model complexity on SPP estimation accuracy. Inspired by [31] and [32], we compare our method with a state-of-the-art self-attention model; in this work, 3 self-attention heads and 2 encoder layers are used to estimate the SPP. The self-attention model is trained in the typical way, where all the frequency bins are treated as input features. During training, the frequency bin-wise SPP estimation model and the self-attention-based SPP estimation model are trained with 1.1 hours of training data pairs. The typical DNN-based model is trained with 1.1 and 16.6 hours of training data pairs, respectively. All training data pairs come from the DNS dataset.
Table 1. Speech detection probability and AUC of the evaluated methods:

Methods               Pd (Pfa = 0.05)   AUC
IMCRA [4]             0.1183            0.6504
Unbiased [5]          0.3460            0.7348
General [7]           0.1132            0.6229
Self-Attention [31]

In Table 1, we show how the proposed model compares to the conventional methods and a few DNN-based methods, using Pd and AUC as metrics. The results in Table 1 were obtained from testing on the TIMIT dataset (1 hour).
With 1.1 hours of training data, although the frequency bin-wise model's AUC (0.7986) is lower than those of the typical DNN-based model and the self-attention-based model, it is still higher than IMCRA [4] (0.6504), the unbiased MMSE estimator [5] (0.7348) and the general SPP estimator [7] (0.6229). In particular, when we set I = 1 and I = 2, the frequency bin-wise model achieved higher AUCs of 0.8011 and 0.7988, respectively. As for speech detection accuracy, all the frequency bin-wise models achieved higher speech detection accuracy than the other methods, and when one neighboring frequency bin is taken into account (I = 1) the speech detection probability reaches 0.5038.
According to these results, we can confirm that an increase in model complexity can improve the performance of DNN-based applications; in this work, the SPP estimation accuracy is also improved, which is consistent with the experimental results shown in [33]. The reason is that a more complex model can extract more global information than a simpler model when estimating the SPP from all frequency bins. Additionally, a remarkable improvement in speech detection accuracy appears when we employ our proposed method to estimate the SPP; especially when we set I = 1, both the model performance and Pd are improved. The likely reason for the improved performance is that the DNNs can extract specific contextual information for each frequency bin, which is not possible when I = 0 because no neighboring bins are included.
Finally, by comparing the AUC of the different SPP estimation methods, we can observe that all DNN-based models achieve better SPP estimation performance than the conventional methods. Among the DNN-based SPP estimation models, although all the presented models demonstrate similar overall performance, their speech detection accuracies differ. It can therefore be observed that more detail is captured by the bin-wise model, leading to better detection accuracy.
Computational Complexity
To evaluate the complexity of the proposed model relative to its counterparts, we use the number of parameters and floating point operations (FLOPs) as metrics. For our proposed frequency bin-wise model, the total parameters and FLOPs of all the per-bin models are used to represent the computational complexity. We use the ptflops Python library (https://pypi.org/project/ptflops/) to calculate the total parameters and FLOPs for our method and the reference DNN-based methods. Table 2 shows that our proposed method has fewer parameters and FLOPs than the other methods. The reason is that although we use multiple DNNs to estimate the SPP, each DNN has a much smaller input size than the typical DNN-based model. Furthermore, although we introduced the neighboring frequency bins to estimate the SPP in Section 4.2, from Table 2 we can also observe that the increase in computational complexity is minimal even with the inclusion of additional neighboring frequency bins. From the above experimental results, we can confirm that although increasing the training data and using complex models can improve the performance of the typical DNN-based SPP model, high computational complexity is inevitable. In contrast, the proposed frequency bin-wise model not only shows an improvement in Pd while maintaining similar performance in terms of AUC, but also reduces the computational complexity while using the same amount of training data.
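For reference, a single bin-wise GRU can be profiled with ptflops roughly as follows; this assumes the library's built-in hooks for recurrent layers and is only a sketch of the measurement, not the exact script used here.

```python
import torch.nn as nn
from ptflops import get_model_complexity_info

# Complexity of a single bin-wise GRU (I = 1, input size 3, L = 100
# frames); the paper's totals sum this over all K per-bin models.
gru = nn.GRU(input_size=3, hidden_size=3, batch_first=True)
macs, params = get_model_complexity_info(
    gru, (100, 3), as_strings=True, print_per_layer_stat=False)
print(macs, params)
```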
CONCLUSION
In this work, we proposed an effective frequency bin-wise SPP estimation method that shows good performance with a limited amount of training data while maintaining low model complexity. Experimental results show that in addition to reducing the model complexity, the frequency bin-wise model also performs better than the typical DNN-based model, even when the latter is trained with increasing amounts of training data. The experiments involving the inclusion of neighboring frequency bins show an increase in speech detection accuracy as well as in AUC (compared to the counterpart that does not include any neighboring frequency bins), owing to the exposure to local contextual information. Since multiple DNNs are employed to estimate the SPP in the STFT domain, the frequency bin-wise model's computational complexity is much lower than that of its DNN-based counterparts. | 2023-02-24T06:42:41.305Z | 2023-02-23T00:00:00.000 | {
"year": 2023,
"sha1": "2a9e62488214b8faa671caeb22c94ebe2894bbc4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2a9e62488214b8faa671caeb22c94ebe2894bbc4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Engineering",
"Computer Science"
]
} |
10036395 | pes2o/s2orc | v3-fos-license | Digit Sucking Habit and Association with Dental Caries and Oral Hygiene Status of Children Aged 6 Months to 12 Years Resident in Semi-Urban Nigeria
Objectives Non-nutritive sucking (NNS) is a common behavior in childhood. The association between digit sucking, dental caries and oral health has been studied with inconclusive results. The objectives of this study were to determine the prevalence of, and the association between, digit sucking, caries and oral hygiene status among children aged six months to 12 years resident in Ile-Ife, Osun State, Nigeria. Methods A cross-sectional study was conducted in Ife Central Local Government Area of Osun State. Data were collected through a household survey using a multi-stage sampling procedure from children aged between six months and 12 years. Details of each child's socio-demographic characteristics, digit sucking habits, caries status and oral health status were collected. The associations between digit sucking, caries status and oral hygiene status were determined using chi-square tests and logistic regression. Results The mean age of the 992 study participants was 5.8 ± (3.2) years. The prevalence of digit sucking, caries and poor oral hygiene were 7.2%, 10.5% and 2.4% respectively. The mean dmft score was 0.22 ± (0.80), the mean DMFT score was 0.04 ± (0.30), and the mean Oral Hygiene Index score was 1.27 ± (0.73). Digit sucking non-significantly increased the odds of having caries (OR: 1.28; CI: 0.58–2.81) and non-significantly decreased the odds of having poor oral hygiene (OR: 0.58; CI: 0.34–1.01). Conclusions Digit sucking was not a significant predictor of caries or oral hygiene status, although the odds of having caries increased while the odds of having poor oral hygiene decreased with digit sucking.
Introduction
A habit is an inclination or aptitude for some action, acquired by frequent repetition and showing itself in increased facility of performance and reduced power of resistance [1]. One of the commonest oral habits is sucking, a reflex present at birth, although oral contractions and other sucking reflexes have been observed before birth [2]. Sucking habits may be nutritive (breast and bottle feeding) or non-nutritive.
The commonest form of non-nutritive sucking (NNS) is digit sucking [3,4]. Several studies have evaluated its etiological factors and suggest that fatigue, boredom, excitement, hunger, fear, physical and emotional stress, and insufficient satisfaction of sucking need in infancy are situations that could stimulate digit sucking habits. Sucking may provide happiness and a sense of security when a child faces difficult times [5,6]. It may also give a feeling of warmth and contentment [7].
Detrimental effects of digit sucking include disturbances in arch form, recurrent otitis media, the possibility of accidents, development of latex allergy, tooth decay, oral ulcers and sleep disorders [8]. Others include wrinkled, chapped or blistered fingers, ulceration, corn formation, dishpan thumb as well as reduced peer acceptance [2,9]. Digit sucking may also accompany behaviors like trichotillomania [10].
The association between digit sucking and dental caries has been studied but results have been inconclusive. Yonezu and Yakushiji [11] found that children with finger-sucking habits were more likely to be free of caries by age three years. Their finding was associated with increased inter-dental spacing which resulted from flaring of teeth due to digit sucking. On the contrary, a study conducted in Baghdad reported an increase in caries severity with NNS [4]. NNS was associated with malocclusion, making tooth cleaning difficult and allowing accumulation of dental plaque.
The aims of this study were to determine the prevalence of digit sucking, caries and oral hygiene status in the study population, and the association between digit sucking, caries and oral hygiene status of children from six months to 12 years, resident in Ife Central Local Government Area (LGA) of Osun State, Nigeria.
Study Design
This was a cross-sectional study that recruited participants from National Population Enumeration sites in Ife Central LGA. These geographical sites were selected because the participants there were familiar with the conduct of such surveys.
Study Population
Participants were included in the study if they were between the ages of six months and 12 years, living with their biological parents or legal guardians who consented to participate in the study.
Sample Size Determination
The sample size was calculated using Leslie Fischer's formula [12] for a study population >10,000. Based on a prevalence of oral habits of 34.1% among children aged four to 15 years, determined by Quashie-Williams et al [3], a sample size of 1,011 children was necessary to identify 345 children with oral habits, allowing for a non-response rate of about 10%.
Sampling Technique
The sampling procedure was a three-level multi-stage cluster sampling aimed at selecting eligible persons with known probability. Stage 1 involved the random selection of enumeration areas within the LGA. At the sites, every third household on each street was selected. Stage 2 involved listing eligible individuals within households. Stage 3 involved selection of the actual respondents for interview. Only children present in the house at the time of the study were eligible, and one child per household was selected. Details of this sampling technique have been reported by Folayan et al. [13] in an earlier publication from this same database.
Data Collection
Data were collected through face to face interviews using a structured questionnaire. Experienced field workers who had been engaged in past national surveys were recruited and trained on the study protocol. The interviewers collected all information from respondents and submitted to survey supervisors who reviewed the questionnaires. Mothers were requested to respond on behalf of children below eight years, based on evidence that responses of mothers on questionnaires have a higher correlation with children's responses [14]. However, where the mother was unavailable, fathers completed the questionnaires. Each child's socio-demographic characteristics were obtained.
Socio-economic Status
Socio-economic status was determined using an adapted version of a socio-economic index described by Olusanya et al. [15]. The tool has been tested and found valid and reliable in Nigeria [16,17]. Data were collected on the educational level and profession of each respondent's parents. The mother's level of education was classified as 'no formal education, Quranic or primary school education', scored as 2; secondary school education, scored as 1; and tertiary education, scored as 0. The father's occupation was categorized into three groups: civil servants or skilled professionals with tertiary education were scored as 1; civil servants or skilled professionals with secondary education were scored as 2; and unskilled or unemployed individuals, students, and civil servants or skilled professionals with primary or Quranic education were scored as 3. The scores for the mother and father were summed to give social classes I-V, where classes I and II represented the upper class, class III the middle class, and classes IV and V the lower socio-economic class. When a child had lost a parent, the socio-economic status was determined from the living parent.
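A small sketch of this scoring scheme is given below. The category keys are our shorthand, and mapping the summed score directly onto classes I-V is an assumption where the text leaves the mapping implicit.

```python
def socioeconomic_class(mother_education, father_occupation):
    # Scores taken directly from the description above; mapping the
    # summed score (1-5) straight onto classes I-V is an assumption
    # where the text leaves the mapping implicit.
    mother = {"none_quranic_or_primary": 2, "secondary": 1, "tertiary": 0}
    father = {"tertiary_professional": 1, "secondary_professional": 2,
              "unskilled_or_primary": 3}
    total = mother[mother_education] + father[father_occupation]
    return ["I", "II", "III", "IV", "V"][total - 1]

print(socioeconomic_class("secondary", "secondary_professional"))  # III
```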
Digit Sucking Habit
Questions were asked about the type and number of digits sucked and the frequency of engaging in the habit. Options were 'irregularly', 'once a week', '2-3 times a week', 'once a day' and 'several times a day'. Duration of sucking was explored, with options ranging from 'less than a minute' to '1-5 minutes', '5-10 minutes', '10-20 minutes', '20-30 minutes', and 'almost continuously'. The intensity of the sucking habit was explored by asking whether or not an audible sucking or popping sound was heard while sucking.
Dietary History
Dietary history was obtained with a dietary chart which recorded all study participants' meals and snacks, taken both at meal times and in between meals, for three consecutive days, including two week days and a weekend day or public holiday.
Intra-oral Examination
Intra-oral examination was conducted in the homes of all participants to determine the presence of caries and oral hygiene status. The participants were examined sitting, under natural light, using sterile dental mirrors and probes by trained dentists. The teeth were examined wet and debris removed with gauze where present. Radiographs were not used in this study.
Caries Profile
Caries diagnosis was based on the recommendations of the WHO Oral Health Survey methods [18]. The decayed, missing and filled teeth index (dmft/DMFT) was used. The numbers of decayed, missing and filled teeth were summed to give the dmft/DMFT score for the primary/permanent dentition. For the purpose of analysis, caries status was further dichotomized into caries present or absent.
Oral Hygiene Status
The Oral Hygiene Index-Simplified (OHI-S) described by Greene and Vermillion [19] was used to determine the oral hygiene status. Its components, the Debris Index and the Calculus Index, were obtained from six numerical determinations representing the amount of debris or calculus found on the surfaces of index teeth 11, 16, 26, 31, 36, 46 in the permanent dentition and 51, 55, 65, 71, 75, 85 in the deciduous dentition. Debris and calculus scores were totaled and divided by the number of surfaces scored. Scores were graded as 0.0-1.2 = good oral hygiene, 1.3-3.0 = fair oral hygiene and 3.1-6.0 = poor oral hygiene.
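The index computation and grading can be sketched as follows; the per-surface scores in the example are illustrative only.

```python
def ohi_s(debris_scores, calculus_scores):
    # OHI-S = Debris Index + Calculus Index, each the total score
    # divided by the number of surfaces actually scored.
    di = sum(debris_scores) / len(debris_scores)
    ci = sum(calculus_scores) / len(calculus_scores)
    return di + ci

def grade(score):
    if score <= 1.2:
        return "Good"
    if score <= 3.0:
        return "Fair"
    return "Poor"

# Six index-tooth surfaces per component (illustrative values).
print(grade(ohi_s([2, 1, 1, 2, 1, 1], [1, 0, 1, 1, 0, 0])))  # Fair
```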
Standardization of Examiners
Clinical investigators were qualified dentists undergoing postgraduate residency training as paedodontists or orthodontists who were calibrated on the study protocol and the WHO criteria for caries diagnosis, including the use of the dmft/DMFT index and the OHI-S. Training was followed by practice on patients. Each investigator examined and scored children for oral lesions as prescribed in the study protocol. Results were subjected to Cohen's weighted kappa analysis to determine intra- and inter-examiner variability. The intra-examiner variability ranged from 0.89 to 0.94, while the inter-examiner variability ranged from 0.82 to 0.90 for caries detection and the OHI-S.
Theoretical Model for Statistical Analysis
A hierarchical theoretical model with the following four blocks was employed for the analysis of predictors of the presence of dental caries: 1) age of the child, 2) socio-economic and demographic factors, 3) Oral Hygiene Index and 4) oral habit (Fig 1). Age was considered a potentially confounding factor [20,21], which informed the adjustment of the developed model for this variable. The second block included socio-economic status and sex as distal factors in the theoretical model, since they could influence oral hygiene status [22] and digit sucking habits [23,24]. Oral hygiene practice may be a moderator of the association between digit sucking and caries [4], so it was included in the third block. In the fourth block, digit sucking was recorded as the oral habit, with the assumption that this variable may influence caries risk.
A hierarchical theoretical model with the following three blocks was also employed for the analysis of predictors of poor oral hygiene: 1) socio-economic and demographic factors, 2) caries status and 3) oral habit (Fig 2). The rationale that informed the model for predicting poor oral hygiene was the same as that for predicting the presence of caries. Socio-economic status, age and sex are potentially confounding factors for caries [21,25,26] and oral hygiene status [22]. In the second block, caries status was included as a moderator variable for oral hygiene [27]. In the third block, digit sucking was recorded as the investigated oral habit that may influence the risk of poor oral hygiene.
Data Analysis
The ages of the 10 children aged six to 11 months were rounded up to 1 year for ease of data analysis. Descriptive analyses were conducted to determine the prevalence of digit sucking, caries and oral hygiene status. Bivariate analyses were conducted to test the associations between the dependent variables (presence of caries and oral hygiene status) and the independent variables (child's age, gender and socio-economic status). Where appropriate, chi-square tests were conducted.
Logistic regression was used for inferential analysis. The hierarchical modeling started with the first block, whose variables were adjusted simultaneously for each other. Only variables with a p value <0.4 entered the subsequent models. Variables of the second block were adjusted simultaneously for each other and for the variables with a p value <0.4 in the previous step. Digit sucking was included in both models, irrespective of its p value in the previous steps, because of the need to evaluate the effect of digit sucking on the presence of caries and poor oral hygiene. The significance of each variable was considered at the time of entry into the model (p value ≤ 0.05). All other blocks were then added in succession, following the same procedure. The estimated coefficients were expressed as odds ratios (ORs), and their 95% confidence intervals were also calculated. The Hosmer-Lemeshow goodness-of-fit test was done to confirm the consistency of the models. Where data were skewed, the dichotomized version was used. Statistical analysis was conducted with SPSS (version 17.0) for Windows, while STATA software (version 10) was used for the logistic regression. Statistical significance was inferred at p ≤ 0.05.
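A simplified sketch of this blockwise entry procedure (in Python with statsmodels, rather than the SPSS/STATA workflow actually used) is given below; the variable names and synthetic data are placeholders, and the carry-forward details are reduced to the p < 0.4 rule described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def hierarchical_logit(df, outcome, blocks, keep_p=0.4,
                       force=("digit_sucking",)):
    # Blockwise entry: fit each block together with the variables
    # carried forward, keep those with p < 0.4, and always retain
    # the forced variable(s), as described above.
    carried = []
    for block in blocks:
        fit = sm.Logit(df[outcome],
                       sm.add_constant(df[carried + block])).fit(disp=False)
        p = fit.pvalues.drop("const")
        carried = [v for v in p.index if p[v] < keep_p or v in force]
    return sm.Logit(df[outcome],
                    sm.add_constant(df[carried])).fit(disp=False)

# Illustrative synthetic data; variable names are placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame({c: rng.integers(0, 2, 500) for c in
                   ["caries", "age_6_12", "female", "low_ses",
                    "poor_ohi", "digit_sucking"]})
result = hierarchical_logit(df, "caries",
                            blocks=[["age_6_12"], ["female", "low_ses"],
                                    ["poor_ohi"], ["digit_sucking"]])
print(np.exp(result.params))  # odds ratios
```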
Ethical Consideration
Ethical approval was obtained from the Ethics and Research Committee of the Obafemi Awolowo University Teaching Hospitals Complex, Ile-Ife (ERC/2013/07/14). Approval to conduct the study was also obtained from the Local Government Authority. The study was conducted in full compliance with the study protocol. Written informed consent was obtained from the parents of study participants after duly explaining the study objectives, risks and benefits, the voluntary nature of participation and the freedom to withdraw at any time. All children aged eight to 12 years also provided written assent. Efforts were made to minimize risks to participants, such as loss of confidentiality and discomfort. All data were collected without participant identifiers (names and addresses). Participants received no direct benefit and no compensation was paid. However, they were given token gifts of stationery or a small tube of toothpaste containing 1450 ppm of fluoride. The fluoride level of drinking water sources in most LGAs in the South West geopolitical zone of Nigeria, where Ile-Ife is situated, is between 0.00 ppm and 0.30 ppm [28], highlighting the need to promote further use of topical fluoride. None of the gifts exceeded a value of $0.50.
Results
Only the data of 992 children recruited for the study were complete enough for analysis. This represents 90.1% of the proposed 1,011 study participants. None of the children recruited refused to participate. Participants included 508 (51.2%) boys and 484 (48.8%) girls, with a mean age of 5.83 ± (3.15) years. There were 497 (50.1%) study participants in the 1 to 5 year age group and 495 (49.9%) in the 6 to 12 year age group, with mean ages of 3.15 ± (1.35) and 8.53 ± (1.90) years respectively. Table 1 shows the socio-demographic and digit sucking profile of the study participants. Seventy-one (7.2%) of them had digit sucking habits. Fifty-five (77.5%) engaged in thumb sucking while 16 (22.5%) sucked other digits. The majority of children with digit sucking habits (56.3%) fell into the 1 to 5 year age group. The prevalence of digit sucking was highest among the 2-year-olds and lowest among the 7- and 12-year-olds. There were no significant differences in the proportions of children aged 1 to 5 years and 6 to 12 years (p = 0.27), male and female participants (p = 0.37) or children in the different socio-economic classes (p = 0.40) who sucked their digits. Table 2 highlights the caries profile of participants. One hundred and four children (10.5%) had dental caries. Significantly more females than males (61.5% vs 38.5%; p = 0.01), and more children in the 6 to 12 years than the 1 to 5 years age group (71.2% vs 28.8%; p < 0.001), had caries. There was no significant difference in the proportion of children from each of the socio-economic strata who had caries (p = 0.13).
Digit Sucking Habit and Caries Profile Of Study Participants
The dmft score ranged from 0 to 8 with a mean of 0.22 ± (0.80). There were 192 unrestored carious teeth, nine missing and three filled primary teeth. The DMFT score ranged from 0 to 4 with a mean of 0.04 ± (0.30). There were 29 unrestored carious teeth, three missing and two filled permanent teeth. Only eight (11.3%) of the 71 children with digit sucking habits had dental caries. There was no significant difference in the proportion of children with or without digit sucking habits who had caries (11.3% vs 10.4%; p = 0.82) (Table 1). None of the children with digit sucking habits had missing or filled teeth.
Digit Sucking Habit and Oral Hygiene Status of Study Participants
The mean OHI-S score was 1.27 ± (0.73). Mean OHI-S score was significantly better in the 1 to 5 years age group compared with the 6 to 12 years age group (0.98 vs 1.56; p < 0.001). No significant gender difference was observed in the mean OHI-S scores (1.28 vs 1.27; p = 0.73).
There was a significant difference in mean OHI-S scores between digit suckers and non-digit suckers (1.07 vs 1.29; p = 0.02). Table 3 highlights the association between the oral hygiene status of study participants and the dependent variables. The distribution of OHI-S scores was not significantly different across gender (p = 0.89) or socio-economic strata (p = 0.29).
Digit Sucking Habit and Caries Status of Study Participants
The association between the type of digit sucked, habit severity, caries and oral hygiene status was further explored. There was no significant association between the type of digit sucked (p = 0.45) or the severity of digit sucking (p = 0.53) and the presence of caries. Neither was there a significant association between the type of digit sucked (p = 0.32) or digit sucking severity (p = 0.79) and oral hygiene status. The odds of having caries also increased for children with middle (OR: 1.68; 95% CI: 0.95-2.94) and low socio-economic status (OR: 1.58; 95% CI: 0.87-2.85) when compared with those with high socio-economic status. Children with poor oral hygiene (OR: 2.51; 95% CI: 0.85-7.44), compared with those with good oral hygiene, and children with digit sucking habits (OR: 1.28; 95% CI: 0.58-2.81), compared with those without, also had increased odds of having caries. However, these findings did not reach statistical significance. Table 5 highlights the results of the logistic regression to determine the predictors of poor oral hygiene. The Hosmer-Lemeshow goodness-of-fit test confirmed the consistency of fit of the model (p = 0.24). Age and the presence of caries were the only significant predictors of poor oral hygiene. Children aged 1 to 5 years had reduced odds of having poor oral hygiene when compared with children aged 6 to 12 years (OR: 0.27; 95% CI: 0.21-0.36; p < 0.001). Children with caries had increased odds of having poor oral hygiene when compared with children without caries (OR: 1.66; 95% CI: 1.07-2.59; p = 0.03). Female children (OR: 0.85; 95% CI: 0.65-1.11) and children who were digit suckers (OR: 0.58; 95% CI: 0.34-1.01) had decreased odds of having poor oral hygiene when compared with male children and non-digit suckers respectively. These findings were, however, not significant.
Discussion
This study is the first to determine the prevalence of digit sucking habits, caries and oral hygiene status of children at a population level in Ile-Ife. The prevalence of digit sucking, caries and poor oral hygiene in the study population was low. Digit sucking was not a predictor of caries or poor oral hygiene status; it did, however, increase the odds of having caries and of having good oral hygiene, though these findings were not significant. Being female and being aged six to 12 years were significant predictors of the presence of caries in the study population. Having caries and being between six and 12 years old were also significant risk factors for poor oral hygiene. The use of a household survey to recruit study participants made the findings generalizable to the study environment, because the recruitment method increased the probability of including children from all socio-economic strata in the study population, irrespective of whether or not they were enrolled in school. The robust analytical approach also reduced the chances of spurious inferences. However, the recruitment of study participants from enumeration sites familiar with research studies inherently introduced a bias into the study sample. Despite this limitation, the study was able to provide very useful information on oral health status and its relationship with digit sucking habits.
One of the highlights of the study is the low prevalence of digit sucking in the study population when compared with prior reports. This supports previous suggestions of variability in the prevalence of NNS habits across cultures [29][30][31]. Proffit [32] noted that the prevalence of oral habits is lower in less cosmopolitan communities, such as our study community, where children have ready access to their mothers' breasts for a long period and rarely suck other objects. A second highlight is the insignificant difference observed in the proportion of children who were digit suckers across the different age groups, genders and socio-economic strata. This is unlike a prior report of a decline in digit sucking with increasing age [24]. More girls than boys were reported to continue digit sucking after beginning school [32]. Adair [23] also reported a higher prevalence of digit sucking habits in children with working-class mothers; such children have to compete for their mother's limited time and may resort to digit sucking for comfort. The low prevalence of digit sucking in the study population may have made it difficult to pick up age, gender and socio-economic differences in the sub-analysis conducted.
Third, the non-significant association between digit sucking and caries observed in this study differed from the prior findings of Yonezu and Yakushiji [11]. Similar to the report of Misbah [4], we observed that digit sucking increased the odds of having caries, although our results did not reach statistical significance. We did not, however, observe a significant association between digit sucking and the risk of poor oral hygiene, a possibility that Misbah [4] alluded to. We suggest that digit sucking may have been protective due to the increased salivary flow resulting from the habit. Future longitudinal studies may help increase understanding of the association between digit sucking, caries and oral hygiene status. Fourth, the low caries prevalence in this study is similar to previous reports from many sub-Saharan African countries [33]. We found age and gender to be predictive factors for caries. Age is a known risk factor for caries [34,35]; as age increases, caries risk increases. The relationship between age and caries risk had been reported in prior studies in our study environment [36,37]. However, the finding that females have an increased risk for caries, as shown by this study and other previous studies [26,38,39], is still debatable. A few studies have highlighted an increased risk for caries in males [37], but increasing evidence seems to show that females are truly at increased risk for caries. Differences in salivary composition, salivary flow rate, hormonal fluctuation, dietary habits and genetic variation increase the risk of females for dental caries [40].
Despite the low caries prevalence, the ranges of the dmft and DMFT scores reported in this study imply a need to actively identify and manage children at risk for caries. Oziegbe and Esan [41] reported higher dmft/DMFT ranges for children in the same environment as ours. Their study, however, included children aged 4 to 16 years, which meant that they had more children whose teeth had been exposed to the oral environment for a longer duration than the children in our study population. Increased duration of tooth exposure to the oral environment implies that teeth face a greater risk of acidic assault and caries formation [34,35]. The mean dmft score of our study population was also higher than the mean DMFT, a finding that has been observed by others [42].
Fifth, caries presence and being 6 to 12 years old were predictive factors for poor oral hygiene. This has been reported earlier by Gopinath et al [27]. Worse oral hygiene status in older children may be due to poor oral hygiene practices as children become independent of parental oral hygiene supervision [43,44]. This underscores the importance of using various motivational methods to promote oral hygiene practices in adolescents. The risk of caries is heightened when oral hygiene is poor [36,45]. With the outcome of this study showing that the presence of caries increases the odds of having poor oral hygiene, there is a high prospect, if specific interventions are not instituted in adolescence to improve oral hygiene, of a cycle of caries and poor oral hygiene being set up throughout the life course of children in the study environment.
In conclusion, though the findings of this study have increased public information about the associations between digit sucking, caries and oral hygiene status, the study was not able to provide conclusive evidence on the association between these variables. The non-significant associations between digit sucking, caries and oral hygiene status, together with prior reports on the detrimental effects of digit sucking on oral health [8], make it important to promote discontinuation of the habit as soon as feasible. | 2018-04-03T00:15:40.975Z | 2016-02-18T00:00:00.000 | {
"year": 2016,
"sha1": "e202bc56a83a21fd4ce010e5ee30e2d6302b741e",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0148322&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e202bc56a83a21fd4ce010e5ee30e2d6302b741e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212709156 | pes2o/s2orc | v3-fos-license | What hinders congenital ectopia lentis patients’ follow-up visits? A qualitative study
Objectives The aim of our study was to give insight into congenital ectopia lentis (CEL) patients' care-seeking behaviour and to explore the factors affecting their follow-up visits. Design Cross-sectional study; in-depth, face-to-face semi-structured interviews. Setting A large-scale ophthalmology hospital in China. Participants 35 patients with CEL and their parents, from May 2017 to August 2017. Main outcome measures Themes and categories. The interviews were audio-recorded, transcribed verbatim, coded and analysed using grounded theory. Data collection was closed when no new themes emerged in subsequent dialogues. Results The factors affecting timely visits included insufficient awareness of CEL, shame about hereditary disease, lack of effective doctor–patient communication, lack of reliable information online and daily stressors. Conclusion Continuing medical education on severe and rare diseases, reforming the pattern of medical education, constructing an interactive platform for the disease on the internet and improving healthcare policy are effective ways to improve the diagnosis and treatment status of CEL in China.
The description of the limitations of this study is, however, lacking. Other than this, which should be addressed, this is a worthwhile, contributive paper. As a very minor point, there is a typo at line 16.
VERSION 1 -AUTHOR RESPONSE
Reviewer: 1
Please state any competing interests or state 'None declared': None declared (√)
Please leave your comments for the authors below.
Comments
1. The key words are not MeSH words. (√)
2. Page 5, lines 20-21: 'Previous research has shown that the various clinical manifestations of CEL has increased the difficulty of diagnosis' - please reference this. (√)
3. Table 1 (education of parents): is it the education of the father or the mother? Whether the father or the mother, we registered the higher academic qualification of the two.
5. In your discussion, you must state the percentage of patients with ectopia lentis who have cardiovascular problems and reference it. CEL is a rare disease, and the interval between early lens dislocation and cardiovascular disease is long, so no data have been reported to date. Our research to obtain these data is under way.
Please leave your comments for the authors below.
This is a very instructive paper on compliance issues. The description of the limitations of this study is, however, lacking. Other than this, which should be addressed, this is a worthwhile, contributive paper.
GENERAL COMMENTS
The study addresses an important issue and the design is appropriate. I have several comments regarding the write-up (below).
Title: The first part of the title is a question. It should end in a question mark, not a double colon.
Abstract: The objective isn't clear. The objective apparent in the title and the results is more precise than that currently outlined in the abstract. This should be changed.
Intro -p5; para beginning line 20 (final para): it sounds here as though it's a mixed methods study, but this isn't initially clear. Instead, the abstract indicates that it's a qualitative study and then this paragraph initially suggests that it's a quantitative study, only mentioning the qualitative portion at the end. The study design needs to be immediately clearer both in the abstract and here.
Intro -By the end of the introduction it's unclear who the participants arethe abstract indicates that these are patients and their families but the strengths/limitations and introduction indicates that it's professionals. This needs to be clear and consistent throughout.
Methods: the process of thematic analysis should be explained more clearly.
Results: the introductory paragraph to the results section feels misplaced. This is not the results of a qualitative analysis so it's unclear where it's come from. It's inappropriate to report the percentage of patients with a certain view in a qualitative study. It's also inappropriate to interpret the findings in the way that has been done, when it is not a theme. The opening of the results should be focused on giving an overview of the themes that were identified.
Results, page 8, line 19: why does this quote refer to 'my eyes' when the participant seems to be talking about his daughter's health problem?
Theme 3.1 seems to comprise two things: lack of awareness and also avoiding seeking treatment due to stigma/embarrassment/shame or anxiety. It should be two separate themes or the theme should be reconceptualised/renamed to reflect its contents.
Themes 3.2 and 3.4 open with an overview of previous literature rather than a discussion of the findings of this study. This kind of overview belongs in the intro or discussion, not the results.
Results, page 11, line 4: It's not appropriate to report percentages in qualitative research.
VERSION 2 -AUTHOR RESPONSE
1. Title: The first part of the title is a question. It should end in a question mark, not a double colon. Done.
2. Abstract: The objective isn't clear. The objective apparent in the title and the results is more precise than that currently outlined in the abstract. This should be changed. Done.
3. Intro -p5; para beginning line 20 (final para): it sounds here as though it's a mixed methods study, but this isn't initially clear. Instead, the abstract indicates that it's a qualitative study and then this paragraph initially suggests that it's a quantitative study, only mentioning the qualitative portion at the end. The study design needs to be immediately clearer both in the abstract and here. Done. I'm sorry that we didn't make our purpose clear. We've reorganized the expression. This study is only a qualitative study, not a mixed methods study.
4. Intro - By the end of the introduction it's unclear who the participants are - the abstract indicates that these are patients and their families but the strengths/limitations and introduction indicates that it's professionals. This needs to be clear and consistent throughout. Done. I'm sorry that we didn't make our purpose clear. The participants are patients and their families.
5. Methods - the process of thematic analysis should be explained more clearly. Done.
6. Results - the introductory paragraph to the results section feels misplaced. This is not the results of a qualitative analysis so it's unclear where it's come from. It's inappropriate to report the percentage of patients with a certain view in a qualitative study. It's also inappropriate to interpret the findings in the way that has been done, when it is not a theme. The opening of the results should be focused on giving an overview of the themes that were identified. Done.
7. Results - page 8, line 19: why does this quote refer to 'my eyes' when the participant seems to be talking about his daughter's health problem? I'm sorry that we didn't make our purpose clear. "My eyes" in this quote means "his daughter's poor vision".
8. Theme 3.1 seems to comprise two things: lack of awareness and also avoiding seeking treatment due to stigma/embarrassment/shame or anxiety. It should be two separate themes or the theme should be reconceptualised/renamed to reflect its contents. Done. We have divided it into two separate themes.
9. Themes 3.2 and 3.4 open with an overview of previous literature rather than a discussion of the findings of this study. This kind of overview belongs in the intro or discussion, not the results. Done. | 2020-03-15T13:03:26.304Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "37c7ca60622bbce39460d3af248b8a7717491a43",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/10/3/e030434.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "29d17709a2717077387fab88890c2f595e522329",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221082098 | pes2o/s2orc | v3-fos-license | Networked Technopolitics: immigrant integration as city-branding
The article explores the role of network-led policymaking with a focus on immigrant integration. Drawing on the EUROCITIES Integrating Cities Charter, it sheds light on how immigration-related diversity governance plays a part in city-branding strategies. The relevance of policy advocacy through the lens of cosmopolitan urbanism is instrumental for studying the governance of migration and diversity in the age of the integration paradigm. Contemporary local policymaking in immigrant integration, shaped by city-to-city cooperation, tells us about policy models associated with cities' image. Therefore, city-branding strategies framed on behalf of networked technopolitics represent a challenging way to study the immigrant integration approach. This exploratory study is based on desk research with an emphasis on literature review and documentary analysis.
Introduction
Cities, towns and municipalities, rather than countries, are increasingly becoming the major actors of immigrant integration policies. Concurrently, growing concerns over social inequality and a lack of inclusive communities in western societies have stimulated policies that are aimed at promoting active forms of immigrant integration. The interplay between migration and inequality has highlighted the impact of uneven urban development on people on the move. Concerns over inequality in income, earnings, and wealth in a globalised world point to the connection between the economy and political regimes (Milanovic 2016; Piketty 2014). Transnational migration is clearly intertwined with a wider picture in the making of social inequality (Faist 2016; Black et al. 2005). The link between inequality and migration 'suffers from profound methodological nationalism' (Bastia 2013). For Bastia (2013), analyses of economic and social inequality within migration scholarship have under-explored the place-based context of inequalities, privileging data on social inequality and stratification collected mostly at a national level. It is important, then, to look at local integration policies.
Although the point is not to make a claim in favor of a 'methodological localism' (Zapata-Barrero et al. 2017), it is necessary to highlight the policy responses that have attempted to address the link between inequality and migration on a local socio-spatial scale (Cassiers & Kesteloot 2012). In European cities, attention has been paid to inequality and immigration (Trenz & Triandafyllidou 2017; Zapata-Barrero et al. 2017; Alba & Foner 2015; Hackett 2013). Recent literature on inequality in the face of greater ethno-racial diversity, for example, highlights the relations between immigration strategies, policies and forms of integration governance (Duszczyk, Pachocka & Pszczółkowska 2020). A significant part of global urban injustice in its contemporary form builds on international migration (Schiller & Çaǧlar 2009). Diversity, equality and social inclusion have become an integral part of local integration policies against social inequality. Cities are not only developing but also sharing practices and initiatives towards immigrant integration. This approach may be viewed as part of the wider 'local turn' in migration-related diversity governance (Zapata-Barrero et al. 2017), which establishes a breeding ground for city-to-city cooperation.
Many cities are now taking a long-term approach to address the socio-economic differences that target immigrant populations, focusing on issues of diversity and nondiscrimination. Such a long-term approach may be associated with local integration policies that offer 'concrete strategies and measures of urban development' (Hillmann 2018, p. 87). This view suggests that integration policies are linked to urban development approaches under a variety of migration-led regeneration initiatives (Hillmann 2018). Integration policies are embedded in a larger dynamic urban hierarchy in today's globalisation, which clarifies how the global integration of cities involves the international migration of high-skilled and low-skilled workers (Sassen 2005). It is a strategic component of cities in the world economy that creates novel forms of power and politics. In the European context, Carmel, Cerami and Papadopoulos (2011) have pointed out the co-existing modes of integration and segregation, in which labor market conditions and welfare provisions play an instrumental role in how immigrants' inclusion and integration are shaped, so that 'the interaction of welfare regime, informal and formal labour markets (and their relationship), and immigration and citizenship regimes combine to form distinct national migrant integration regimes' (Papadopoulos 2011, p. 42).
Immigrant integration relates to city-branding strategies. City-branding is a process by which the 'city is given ontological status as a "personality" with identity and values' (Stiegel & Frimann, 2006). It is instrumental to look at processes and dynamics of how immigrant integration is impacting city-branding strategies. The extent to which integration policies play a part in city-branding strategies matters because this tells us about how international migration became a local government effort. In this wider context of migration-related diversity governance, many transnational city networks are emerging (Oomen 2018). The EUROCITIES Integrating Cities Charter is an illustrative case of a capacity building initiative program with a network-oriented approach that enables city-to-city cooperation. This case informs us about the role of policymaking communities and how networks of policy advocacy incorporate a rhetoric of neutrality establishing standardised policy objectives. This exploratory study is based on desk research with an emphasis on review of the literature and documentary analysis. It covers a broad range of research reports, articles, policy briefs, case study findings, social media, webinar and other documents, and draws attention to city-branding strategies in local immigrant integration policies under networked technopolitics.
In this article, I argue that the EUROCITIES Integrating Cities Charter (hereafter the Charter) is not merely a capacity-building and cooperation-inducing network; it is also a political regulation strategy, as it explicitly calls for adopting and implementing a specific integration model in response to immigrant-related inequality. Integration policies at the local level are turned into what EUROCITIES has been calling a 'good governance' approach, shaping signatory cities' integration models. At stake are 'fast-traveling policy models' (Peck & Theodore 2015) and their implications for city-led solutions to immigration and diversity management. The notion of networked technopolitics helps us to have a better understanding of contemporary migration governance. I structure the article as follows. The first section introduces the Charter, outlining how this policy network works to foster an immigrant integration model shaping policies and practices at the local level. The second section explores the intersectional approach between inequality and migration that frames integration as part of cities' response to global dynamics of social inequality involving immigration. The third section examines the branding literature in urban studies (Dinnie 2011; Vanolo 2017), looking at cosmopolitan urbanism (Sandercock 2006). Cities are creating an urban brand of immigrant integration, which is based on (imported or borrowed) policy models and designs. Finally, future directions for research are suggested before concluding remarks.
Immigration Governance in a Moving World
Immigrant integration involves several complexities, including challenges to governance and policymaking through a set of institutional interactions that stretch across scales and can be traced analytically in documents, discourses, practices, and regulations. The question of how to improve the 'management' of immigrants as a local-level effort has been mobilising considerable effort among European cities. City-to-city cooperation can be seen through the 'transnational city networks' fostering immigrant integration. Oomen (2019) talks about immigration-related 'transnational city networks', exploring the impact of these arrangements on local governments. These networks provide support and guidance and facilitate the mutual exchange of information based on their experience in integrating refugees and migrants. Importantly, 'networks allows cities to decouple their local policies from those developed nationally, and their pragmatic, symbolic, and jurisgenerative activities enable them to better manage migration, to contest the current migration regime, and to even modify it' (Oomen 2019, p. 22).
Among other 'transnational city networks', the Charter, launched in 2010, includes 39 signatory cities addressing immigrant integration policies and practices. City-to-city cooperation lies at the heart of this network and serves as the basis for influencing the decision-making process at the subnational level regarding immigrant integration. It aims to frame city-level policies and practices for diversity management based on research evidence and policy model adoption that targets non-discrimination and equal opportunities for city residents, particularly immigrant needs (EUROCITIES 2017, 2013, 2012). It builds on policy learning, standards monitoring, evaluation and implementation procedures to integrate newcomers, characterised by city-to-city cooperation between localities experiencing common issues on migration-related diversity (EUROCITIES 2017, 2013, 2012). This policy network establishes key features for policy design and initiatives to assess in standardised terms of quality and performance with the purpose of enhancing migration diversity management outcomes. By supporting city-level responses to migration-related diversity governance issues, it puts city-to-city cooperation, with a focus on monitoring and peer learning, at the centre of the local immigrant integration policy model. By analysing the Charter, Moloney and Kirchberger (2010) argued that peer reviewing and benchmarking are the key tools for mutual learning that reveal a mobilising strategy, coordinating efforts. Their analysis found that the Charter pushes an agenda on immigrant integration policies and allows cities to play a meaningful role in the network, and consequently, the network itself also has significant influence over a city's agency.
The Charter has been disseminating good practices of comparable policies based on a benchmarking approach between signatory cities. It monitors and evaluates local policy performances and their achievements. This process of peer reviewing and benchmarking assumes a discourse of political neutrality which applies adequate instruments and standardised procedures informed by an evidence-based policy-making agenda. This policy network defines a model for inclusive city building. Such a model identifies priority areas to improve policy integration and draws from practical solutions and challenges faced by the signatory cities, including stakeholders such as non-profits and local community organisations. As a policy network, it recognises the crucial role of actors at the local level and how cities are on the front lines of immigrant integration. It establishes shared policy objectives to create a model of inclusive urban policies and local services to newcomers. Within the 'integrating cities' rhetoric, it sets out a vision over long-term steps in the realm of migration-related diversity policies towards coordinated approaches. The signatory cities receive recognition for their efforts to create inclusive cities. Hambleton (2014) argued that the Charter might be seen under the local integration strategies by which guidelines offered a common background of policy commitment and practical lessons learned. This enabled mutual learning as a promising model for an inclusive response to the context of inequality through urban placemaking of diverse communities (Hambleton 2014).
The Charter has influenced the local approach to integration in different ways. For example, 'Nuremberg, who signed the charter in October 2015, uses the charter as a benchmark to evaluate its policies' (EUROCITIES 2015, p. 16). The inclusion of aspects of the Charter in political representatives' discourse shows that Helsinki 'promoted the charter within the city and the broader metropolitan area through speeches and official presentations at seminars and other events' (EUROCITIES 2013). Further:
Ghent used the charter as a basis for the Action Plan on Diversity 2020 which sets the objective of 30% of employees with a migrant background. It has also been used to raise awareness on implementing equal opportunities and poverty reduction in local policy with training for current employees. Leipzig used the charter to foster the integration process in general and to promote the implementation of measures, actions and recommendations. In Oslo, the charter underpins city policies, especially for the OXLO guide for equal access to services and the OXLO Business Charter (EUROCITIES 2018, p. 18).
The Charter plays an important role in shaping local integration policies. The implementation of these policies involves a branding strategy with an emphasis on diversity and welcoming as part of the city's identity and reputation. There is a branding aspect of the process that has become one significant component of the policy shift in integration discourses and practices. The policy network adds 'value' to local public policymaking. The policy network also calls for greater regulation of immigrant integration policies at the local level to ensure program effectiveness, which remains a task deeply structured in technocratic approaches to policy design in the areas of cooperation and sharing. Examining the Charter, we understand how immigrant integration policies respond to immigration-related inequality.
Intersecting Migration and Inequality
The rising social and economic inequalities across the world remain a central issue at the level of the nation-state (Milanovic 2016). As Kerbo (2003) indicated, inequality encompasses class, race and gender, and it is also about a condition of unequal access to resources related to broader processes of social division, hierarchy, and social differences. Inequality does not reduce to an apolitical distribution of wealth and income. Rather, it is socially constructed through social relations of exploitation, contradictions and political power. Within this debate on global inequality, some studies focused on Western European countries have indicated the fundamental link between inequality, labour market changes, and financial globalisation (Heath & Cheung 2007; Long 2014). Long (2014) considers the impact of migration on inequality and challenges the assumption of migration as a driving force of inequality. For her, migration has potentially important effects in addressing certain features of the rising inequality in Western societies. She outlined the potential benefits of labour migration and labour mobility to address social inequality, in which the exclusion of immigrants from the formal labour market is implicated as one key feature.
International migration movements are part of a wider frame of uneven wealth distribution that contributes to greater social inequality between the richest and poorest countries. The relation between migration and inequality may be considered not only from an economic perspective but often includes a traumatic context of social change, such as authoritarian political regimes, civil war, persecution, and environmental hazards. As migration occurs amid deep poverty and violence, migrant labour exploitation in Global North countries is framed through precarious lifeworlds, in which the immigrant experience of vulnerability becomes subject to neoliberal work and welfare regimes, engendering global inequality dynamics (Lewis 2015). Forced migration has played an important role in inequality studies. Crucial to an understanding of international migration movements is the awareness that the precarious life worlds of migrants are linked to geopolitical dynamics of uneven urban development under neoliberal globalisation.
Inequality demonstrates an instrumental economic side, through which it reveals the social logic of exploitation. Income inequality holds social implications. Inequality is not a subject outside the historical processes of wealth accumulation and political power established by dispossession, displacement, and abandonment. Social inequality concerns the ways in which income inequality implies power relations through which people are organised in terms of domination and oppression. This means that the logic of social inequality is inherently a political matter related to social vulnerabilities. More recently, the debate about social inequality has been concerned with migration-development interplay. The intersectional perspective on migration-development considers how internal and cross-border migration can be analyzed through the lens of urbanised globalisation.
For Bastia (2013), the perspective of intersectionality informs us about how oppression and privilege outline different categories of disadvantage related to social inequality. Social inequality as a dynamic process is integrated with gender, class, ethnicity, and race issues. Such a viewpoint has considered inequality not only as 'historically grounded' but also assumes a 'context-specific analysis of social relations of difference to avoid depoliticising and simplifying complex realities' (Bastia 2013, p. 245). The inequality-migration interplay leads to another central point about the way in which migration can have a negative or positive correlation with inequality. According to Bastia (2013), the relation between migration and inequality is always contextual, so that it is necessary to explain which type of inequality is at stake. She highlighted the importance of inequality and transnational migration beyond income-based measures. Having said this, inequality is closely connected to organising positions of privilege where people gain and maintain material and non-material unequal distributions of wealth, rights, and recognition. The migration-development nexus lies in an analysis in which social inequality refers to sites where multiple forms of oppression or privilege intersect, including material and social measures (Bastia 2013).
Policymaking in immigration, integration and social cohesion continues to frame key debates on social inequalities and has significant leverage in integrating immigrants into the economic and social spheres. These policies are formulated, operationalised and implemented in line with development theories and broader ideological perspectives, focusing on the role of immigrants in their host communities. In thinking about the Charter, the migration-development and social inequality perspectives help us understand the way inequality shapes immigration and integration policies. Under the Charter, cities are seen as part of a wider diversity-management governance that uses a bottom-up approach for managing social inequalities, emphasising a welcoming city image to bolster local economies through a combination of local-based practices and policy coordination between signatory cities. An intersectional view of social inequality is also significant for migrant integration policies. The Charter has made it a priority to address immigrant integration as a social and economic task ranging from labour market issues to anti-discrimination policies to enhance the public perception of migration and diversity. In this way, immigrant integration is part of the response to social inequality.
Networking, integrating and branding
Integrating immigrant newcomers as part of nation-building processes in Europe has been debated for a long time (Favell 2003). The nationalised idea of integration is about the inclusion of a newcomer immigrant or a member of a marginalised group into a 'society' that is ultimately bounded by a nation-state in terms of a national identity project (Favell 2003). The goal of integration policies involves the management of 'ethnic' immigrant populations, who are non-Europeans. It is a matter of government practices and discourse, an ideal linked to the nation-building process under the growing condition of cultural diversity (Favell 2003). There has been a great debate on the role of local authorities and different local actors in the immigrant integration approach. In view of this, cities have been playing an important role in integration policies (Zapata-Barrero et al. 2017). According to this view, the governance of migration-related diversity has its clearest expression at the city level, where conflicts arise from social inequalities, power relations and racism.
Schinkel's critique of immigrant integration clarifies the way integration discourses reveal forms of imagining society: what counts as the unit, social cohesion, and, importantly, what is seen as jeopardising it (2017, 2018). Schinkel argued that society vis-à-vis the 'immigrant community' 'only ever exists through an active work of the difference, separating a supposed "inside" from an "outside", of circumscribing - by means of a work of power-knowledge - who and what is and is not "part of society"' (2018, p. 9). Schinkel (2017) argues that integration is both the political way of separating populations and identities as 'others' and afterwards of determining how to include them into a prescriptive and standard account of society. On the one hand, immigrant integration becomes a tool for 'othering' people through its forms of governing that are constructed on group-based differences and forms of identity in social life (Schinkel 2018). On the other hand, how local governments deal with issues of immigration, diversity, and integration is essential for economic growth and urban development: 'Diversity, if managed well through immigrant integration, is seen as an opportunity to create and foster an image of the city, which makes it attractive for foreign investment, tourism, and increased consumption' (Hadj Abdou 2019, p. 5). Diversity is central to understanding the extent to which the governance technique of immigrant integration, and the knowledge produced around it, 'tend to reproduce and reorganise "race" in the city through processes of racialisation and a re-making of the racial subject' (Hadj Abdou 2019, p. 6).
The Charter acts as an illustrative example of this governance model of ethno-cultural diversity for cities. This immigrant integration model considers the migration-inequality nexus at the local level through a policy network embedded in the cosmopolitan approach. For Sandercock (2006, p. 39), cosmopolitan theorising helps us to clarify the discourse 'around managing our peaceful coexistence in shared spaces'. Rather than insisting on fostering some 'sense of belonging and the imperative of peaceful coexistence', Sandercock (2006) argues that cosmopolitan urbanism might be based on agonistic democratic politics. Cosmopolitan urbanism has a political dimension regarding the right to difference in the city, dealing with conflicts rather than accommodating them for the sake of the economic strength of the city (Sandercock 2006). It is concerned with the (utopian) politics of an urban imaginary structuring political life and managing cultural diversity (Sandercock 2006). The Charter may be associated with a wider branding strategy of cities, which has been translating a cosmopolitan imaginary into discourses and practices of migration-related diversity governance. This underlines that cosmopolitan urbanism works as the background of the city-branding strategies to promote a model to shape policy practices for managing immigrant integration under the umbrella of migration-related diversity policies.
The strategies and tactics of city branding influence governmental choice in migration-related diversity governance. Integrating becomes a brand for cities, which mostly take a rights-based perspective regarding institutional discourse, practices, and policies that shape the urban imaginary of cities. Place branding is related to international politics in the global policy process (Van Ham, 2008). Branding strategies lead cities to adopt new strategies and tactics in delivering a message that involves values, goals and desires through propaganda and public diplomacy (Van Ham, 2008). Place branding is, arguably, at its most basic level a political value-set that is concerned with the practice of image-making and reputation management (Van Ham, 2008). This proves useful in understanding the relevance of the governance of migrant-related diversity. At the broadest conceptual level, city-branding strategies are also an aspect of soft power related to policy processes of immigrant-related diversity.
Soft power concerns the nature of power relations and results from a voluntary process of regulation, co-option, and cooperation between actors. Nye (1990) coined the term 'soft power' to analyze American international politics after the Cold War, characterised by both attraction and persuasion. Soft power influences decision-making and shapes policies and policy regimes, and its 'softness' describes the processes of governing without explicit coercive tools (Nye, 1990). It implies, somehow, a kind of 'free and voluntary' joint action among social actors (Nye 1990). In a more recent view on soft power, Nye (2019) extended the concept by including non-coercive methods of negotiation, including public diplomacy, global image and influence. For Hayden (2012), soft power is articulated in public diplomacy in the international system. Both as discourse and practice, soft power puts communication and information at the heart of public diplomacy (Hayden 2012). However, soft power is no longer only a matter of the national level shaping the agency of political actors via foreign policy tools. Indeed, through subnational soft power and local public diplomacy, cities are at the forefront of a wide range of policies in democracies, and we need to understand and advance the critical role of local governments in networked fields of transnational political action, such as city-to-city cooperation.
City-to-city cooperation is often intimately related to the various processes that exemplify soft power in shaping cities' images. This cooperation involves branding strategies as a way of rethinking policymaking through policy models. Policy models embedded both in networks and within multiple local contexts have been articulated in western European city-regions. From the perspective of the soft politics of city branding, the significance of horizontal networking, transnational linkages and city-to-city cooperation initiatives among local and regional states and other non-central governments and not-for-profit organisations in navigating and negotiating difference becomes clear. It is no surprise, therefore, that city branding influences policymaking processes. Bookman (2018) notes that cities as branded entities gain distinction in terms of urban culture, social life, and consumption, and this is the case for many postindustrial urban regeneration strategies. Research has shown how the discourses of cities and local governments are working for the promotion of immigrant entrepreneurs and fostering a welcoming environment in terms of the use of public space and city services (Aytar & Rath 2012). For cities experiencing 'super-diversity', migration, ethnic minority groups and cultural diversity become key themes for a local agenda with policies of integration: 'an awareness of the new super-diversity suggests that policy-makers and practitioners should consider new immigrants' plurality of affiliations' (Vertovec 2018, p. 117), not just their ethnicity.
Jordan and Schout (2006) argued that EU policy coordination was based on a networked governance structure that was less dependent on the power of hierarchical rule-making authorities. Importantly, networked governance relies on cooperation through joint action. In doing so, networked governance frames a political, procedural and legal regulatory arrangement that shapes policy formulation and implementation (Jordan & Schout 2006). This regulatory arrangement can be considered not only as networked but also as an experimentalist form of governance that focuses on soft power and voluntary commitment (Maggetti 2015). This leads to the last point. Place branding strategies play an important role in the 'soft power' dynamics over policymaking via 'policy models' as a result of advocacy and persuasion. For Peck and Theodore (2015), the policymaking field has become influenced by 'best practice' and 'paradigmatic models'. This means that policy change is often driven by models of best practice, that is, 'those social practices and infrastructures that enable and sustain policy "mobility," which enable the complex folding of policy lessons derived from one place into reformed and transformed arrangements elsewhere' (Peck and Theodore 2015, p. xvii). According to this view, a policy transfer process carried out through 'discursive frames and institutional frameworks perform[s] a "preceptorial" function, licensing some cognitive and political behaviors, shaping policymaking imaginaries, and enabling certain patterns of "learning," while disciplining or even excluding others' (Peck & Theodore 2015, p. 27).
Many cities seeking to change their urban brand are supported by policy networks. In immigration-related diversity governance, the role of place branding can be seen as a transnational regulation strategy. The action of branding cities as integrative cities sets out political values and an aspirational framework for immigrant integration as it may and should become. Although not from a top-down perspective, this policy network provides practical tools and conceptual insights for migration and diversity policies. These practical insights contribute to the city's image, and immigrant integration is viewed as a relevant element of branding. An immigrant integration policy aligned to the policy network's standards has a significant identity-shaping function over signatory cities, in such a way that it cannot be understood as merely a technical process or considered an allegedly neutral tool or approach. Transnational city networks characterised by 'fast policy' (Peck & Theodore 2015) follow the perspective of post-political modes of governing. Mouffe (2016) notes that under the post-political order, the conflictual dimension of democracy is addressed by 'good governance'. Seen from this perspective, public policy models using tools of soft power are described at a surface level as taking an evidence-based approach. Yet, in reality, this approach is what is driving changes in local policy and at the same time defining what renders this policy a (politically) successful response to local conditions.
In recent years, the rise of the interplay between migration-related issues and local policymaking processes has become more apparent in the European literature, exploring 'whether and to what extent a trend towards convergence can actually be identified' (Caponio & Borkert 2010, p. 21). City branding can help us with this issue. For Gebhardt (2014, p. 12), 'many cities are leading the way in setting up local institutions and services to reflect the diversity of those they serve, and in promoting an inclusive local identity'. As noted by Gebhardt (2014), this is the case for Copenhagen, in which the city's strategy of becoming Europe's most inclusive city included projects, training and campaigns promoting the institutionalisation of diversity in various fields of the city's services. Drawing on Amsterdam and Rotterdam, Belabas, Eshuis & Scholten (2020) argue that 'the institutional embedding of place branding influences policy goals and content' (p. 3). The point is not only bringing the city's residents together but also how to benefit from place branding to push the city-making process towards greater innovation, greater competitiveness, and greater prosperity:
[P]lace branding does not only fulfil an 'internal' function of representing the entire urban population and helping all residents to identify with their city, but also an 'external' function, oriented towards businesses and tourism. Branding policies often function as a tool to increase economic development and international competitiveness instead of enhancing social cohesion or providing a shared sense of belonging amongst residents (Belabas, Eshuis & Scholten 2020, p. 4).
These cases illustrate the promotion of diversity and equality in local policies combined with efforts at (re)branding a city's identity. These cities demonstrate commitment to managing immigrant-related diversity through services, policies, awards, campaigns, posters, and festivals. Branding strategies are therefore to be encouraged as a guide for city governments to approach the analysis, formulation, evaluation and measurement of performance that informs decision-making in order to foster economic and social benefit from the immigration and integration policies adopted. Ultimately, the Charter reinforces the 'fast-traveling practices' (Peck and Theodore 2015) of how integration in society should be done, in its simplest formulation, with immigrant integration as a matter of policy model adoption. By influencing the policymaking process and merging a set of political values at the local level, transnational city networks play an important role in a city's brand and how it should be implemented and formulated to achieve the desired goals of the network. This also opens a channel for the exchange of ideas, information, and techniques, offering a roadmap for policymaking within immigration-related diversity governance. On top of these considerations, benchmarking, best practices, and peer-to-peer learning are a set of 'soft tools' shaping (local) policymakers and stakeholders, which carries within its regulatory practices somewhat depoliticised forms of othering groups of people.
Concluding remarks
The EUROCITIES Integrating Cities Charter helps us understand how a 'transnational city network' shapes local policy responses, particularly through the lens of 'fast policy'. It highlights the need for analyzing the link between city-branding strategies and policy change. Further, through the field of immigrant integration policies, it demonstrates that it is vital to interrogate the significance of transnational city networks and the apparent policymaking consensus that shapes the city's image. In a global economy that has driven cities to compete for global investment and tourism, framing inequalities in the urban setting, scholarly work to further explore the immigration-integration-city image matrix remains to be done.
The challenge is to find ways of studying how transnational city networks are handled by local government agencies and their street-level bureaucrats and shaped by immigrant organisations and grassroots organising groups standing up for rights. It is time to acknowledge that more research and informed debate on immigrant integration and its normalisation of racial otherness is needed to generate new, interdisciplinary insights about policy models. Therefore, policy networks are set to play a key role in (re)producing models, making it vital for critical migration scholars to build knowledge about issues of international migration and societal diversity as well as related public policies. In these discussions, it is relevant to understand to what extent transnational city networks can foster transformative, community-driven alternatives to immigrant integration. As is clear, there is room for analyzing how themes such as participatory arrangements, accountability and social control can shed light on immigrants' political participation in the local policymaking arena, and how we read this process of institutional and policy change through the lens of deliberative democracy. We might challenge the binary divide of integration versus segregation.
It is therefore pertinent to interrogate migration policy and foster an interdisciplinary research agenda on new municipalism, institutional learning, and spaces of social innovation and experimentation, which underpin the complexities of and the implications for urban agency, city-making and (im)migration governance in globally linked city-regions. Work within these perspectives deals with theories from a range of disciplines and involves methodologies used in this field (e.g. action research, participant observation, longitudinal ethnographic research and so on). At a time when debates on immigrants' rights, refugees and policy responses continue to spread hostility toward or about people on the move, such an interdisciplinary research agenda can provide much-needed clarity and important analysis which will be of interest to city-makers, policy makers, activists and other stakeholders, including but not limited to the fields of urban studies, city planning and social protection studies.
The commitment of a city to immigration and diversity governance cannot be detached from its branding strategies. Transnational city networks play an instrumental part in influencing a city's image and its surrounding discourse. Although forged and configured as a post-political condition, immigrant integration models relocate political power and decision-making at the local level; they constitute a political program intricately enmeshed with cities. Peck and Theodore (2015) argue that 'networked technopolitics' shapes contemporary policymaking and, by so doing, it calls for democratic deliberation and popular control. Following this line of thought, I have argued that the 'transnational city network' based on 'fast policy' is also a governance technique. While others might argue that traveling (that is, transferable) models of policy development and technocratic designs of immigrant integration are examples of how transnational politics can be involved in the production of cosmopolitan urban spaces, I suggest that we perhaps need to recast our understanding of city branding within immigration governance under the heading of 'networked technopolitics' (Peck and Theodore 2015). Taken from this perspective, the governance technique of immigrant integration reveals how the ethnic difference of the cosmopolitan 'Other' is managed through its connectedness to transnational networks, and this is far from a matter of technical knowledge for policymaking.
"year": 2020,
"sha1": "61cb0f0733b95e9909dddde397aeaa2b95407abb",
"oa_license": "CCBY",
"oa_url": "https://epress.lib.uts.edu.au/journals/index.php/mcs/article/download/6966/7513",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a57179c13fd6c20b17fa307cf3409d315f67d3ae",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Sociology"
]
} |
14345802 | pes2o/s2orc | v3-fos-license | A Trans-Theoretical Approach to Physical Activity Profile in General Population of Mashhad
Regular physical activity is a necessity for a healthy lifestyle. Despite public health efforts, only a minority of the population is involved in healthy levels of physical activity. This study provides evidence about exercise patterns and their predictors in the population of Mashhad, Iran, according to the TTM stages of change. In this cross-sectional study, we surveyed a total of 564 participants from Mashhad in 2014 using the stages of change questionnaire. Analysis showed that 23.4% of participants were in the pre-contemplation stage, 18% in contemplation, 24.6% in preparation, 8.10% in action, 14.4% in maintenance and 11.5% in the termination phase. Age, gender, BMI, alcohol consumption, sleep duration, having a compeer and encouragement were identified as predictors of the pre-contemplation stage. Gender, having company and using a bicycle for transportation were predictors of the termination phase. Tailoring interventions based on these predictors to enhance physical activity among specific subgroups would be of interest.
Introduction
Regular physical activity enhances public health status and contributes to the prevention of chronic diseases. In particular, obesity, type II diabetes mellitus, coronary heart disease, atherosclerosis, hypertension, osteoporosis, and various kinds of cancers can be prevented by appropriate physical activity (WHO, 2010; Rehn, Winett, Wisløff, & Rognmo, 2013; Kruk & Aboul-Enein, 2007; Li & Siegrist, 2013). Physical activity can significantly decrease the overall mortality rate in the general population, since a large number of deaths each year are due to inactivity (Lee & Skerrett, 2001; Wen, Wai, & Tsai, 2011).
In 2008, the United States Department of Health and Human Services developed the physical activity guideline, recommending that adults aged 18 years or more have at least 150 minutes of moderate-intensity (5 periods a week, each lasting at least 30 minutes) or 75 minutes of vigorous-intensity physical activity in a week (Committee PAGA, 2008). However, many studies have indicated that the majority of society in developed countries does not adhere to the recommended guidelines regarding effective and beneficial physical activity (Craig, Mindell, & Hirani, 2008; Pleis & Lucas, 2009). This lack of physical activity seems to be more prevalent in the Iranian population, since approximately 80% of Iranians are not sufficiently active (Sheikholeslam, Mohamad, Mohammad, & Vaseghi, 2004).
An evidence-based framework for understanding this sedentary lifestyle is the behavioral stages of change. The transtheoretical model (TTM) is a framework which categorizes people concerning their readiness and willingness to change. TTM introduces six stages of change, including precontemplation, contemplation, preparation, action, maintenance, and termination. TTM has successfully addressed concerns over exercise behavior change in different populations and brought interventional strategies for each stage (Prochaska & Velicer, 1997). The change process in physical activity behavior is linked to individual and environmental interactions. Researchers suggest that behavioral processes are utilized to intervene in later stages and that, as the stage advances, the self-efficacy of individuals increases (Rhodes, Plotnikoff, & Courneya, 2008; Mori et al., 2009).
Methods
In this cross-sectional study, we surveyed a total of 564 participants from Mashhad, Iran in 2014. Mashhad is the second most populous city in Iran and is the capital of Razavi Khorasan Province. It is located in the north east of the country, close to the borders of Afghanistan and Turkmenistan. Its population was 2,772,287 at the 2011 population census. Housing the holy shrine of the eighth Shia Imam, Mashhad receives millions of pilgrims each year. For data collection we referred to public transport stations, public parking lots, car parks of shopping centers, banks, hospitals and universities all around the city. The parking lot of the holy shrine was also a place for the sampling procedure.
The survey was done using a checklist and the stages of change questionnaire (Marcus, Selby, Niaura, & Rossi, 1992). The checklist included socio-demographic characteristics and possible factors related to physical activity behavior. The questionnaire consisted of six questions with yes and no answers, according to the six stages of change. Stages of change refer to a person's readiness to engage in regular exercise. Someone in pre-contemplation (pc) does not exercise and is not planning to start exercising within 6 months. A contemplator does not exercise but is planning to start within 6 months. A person in preparation is planning to start exercising within 1 month and has taken some initial steps toward it. Someone in action has been exercising for less than 6 months. A person in maintenance has been exercising for 6 months or more, and finally, a person in the termination stage will never stop exercising (Reynolds, Spruijt-Mtz, & Unger, 2008).
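For illustration, the staging rules just described can be expressed as a simple decision function. This is a minimal sketch, not the authors' actual scoring procedure; the argument names are invented for this example:

def classify_stage(exercises_regularly, months_exercising,
                   plans_within_1_month, plans_within_6_months,
                   will_never_quit):
    # Termination: exercises and reports they will never stop exercising
    if exercises_regularly and will_never_quit:
        return "termination"
    # Maintenance: has been exercising for 6 months or more
    if exercises_regularly and months_exercising >= 6:
        return "maintenance"
    # Action: has been exercising for less than 6 months
    if exercises_regularly:
        return "action"
    # Preparation: plans to start within 1 month (with initial steps taken)
    if plans_within_1_month:
        return "preparation"
    # Contemplation: plans to start within 6 months
    if plans_within_6_months:
        return "contemplation"
    # Pre-contemplation: no exercise and no plan to start within 6 months
    return "pre-contemplation"

For example, classify_stage(True, 8, False, False, False) returns "maintenance", while classify_stage(False, 0, False, False, False) returns "pre-contemplation".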
We used the Persian translated version of the questionnaire, which had previously been shown to be valid and reliable (Moattari, Shafakhah, & Sabet Sarvestani, 2013). Demographic information, including age, sex, education level, job status, and history of smoking and drug or alcohol abuse, was asked for in the checklist. A total of 564 questionnaires were completed.
The Ethics Committee of Mashhad University of Medical Sciences approved the study. The interviewers explained the objectives of the research to participants and assured them about the privacy of their personal data; after obtaining consent, the participants filled in the questionnaires. SPSS 11.5 software (SPSS Inc., Chicago, Illinois, USA) was used for all statistical analyses. Standard descriptive statistics were applied to describe the pattern of the data. The chi-square test was used to examine the significance of associations between categorical data. Normality of the data was checked with the Kolmogorov-Smirnov test. ANOVA and Kruskal-Wallis tests were applied for normal and non-normal distributions, respectively. Logistic regressions were used to predict the factors' influence on physical activity behavior. All tests were 2-tailed, and probability values < 0.05 were considered significant.
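As an illustration, the analysis pipeline described above can be sketched with standard Python scientific libraries instead of SPSS. This is a minimal sketch under stated assumptions: the data file and column names (gender, stage, age, bmi, sleep_hours) are hypothetical, and the backward stepwise selection used in the study is omitted for brevity:

import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("survey.csv")  # hypothetical data file

# Chi-square test for association between categorical variables
table = pd.crosstab(df["gender"], df["stage"])
chi2, p, dof, expected = stats.chi2_contingency(table)

# Kolmogorov-Smirnov normality check against a fitted normal
ks_stat, ks_p = stats.kstest(df["age"], "norm",
                             args=(df["age"].mean(), df["age"].std()))

# Compare a continuous variable across the six stages
groups = [g["age"].values for _, g in df.groupby("stage")]
if ks_p > 0.05:
    stat, p_between = stats.f_oneway(*groups)   # normal: ANOVA
else:
    stat, p_between = stats.kruskal(*groups)    # non-normal: Kruskal-Wallis

# Logistic regression: predictors of being in the pre-contemplation stage
X = sm.add_constant(df[["age", "bmi", "sleep_hours"]])
y = (df["stage"] == "pre-contemplation").astype(int)
result = sm.Logit(y, X).fit()
print(result.summary())  # result.params holds the log-odds coefficients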
Results
We had 564 participants in our research. The data showed that 316 (56.3%) of participants were male and 245 (43.7%) were female. The average age of the participants was 28 years, with a maximum of 84 and a minimum of 11 years. 256 (45.5%) of participants were single; among them, 19 (3.4%) were divorced and seven (1.2%) were widowed. 306 (54.5%) of participants were married. Among the respondents, 136 (26.8%) were jobless/housekeepers, 233 (45.9%) were employed and 139 (27.4%) were students. 3 participants (0.5%) were illiterate, 211 (37.9%) had non-academic education and 343 (61.6%) had academic education. The frequency distribution of participants' demographic characteristics and factors related to respondents' physical activity is shown in Table 1. Analysis showed that 124 (23.4%) of participants were in the pre-contemplation stage, 95 (18%) in contemplation, 130 (24.6%) in preparation, 43 (8.10%) in action, 76 (14.4%) in maintenance and 61 (11.5%) in the termination phase. The frequency of individuals in each stage, separated by gender, is shown in Table 2, which indicates a statistically significant difference between men and women in this frequency (p-value < 0.05). Demographic characteristics and several factors related to physical activity are listed in Table 3 by the different stages of the trans-theoretical model in exercise patterns. Our analysis showed that average age, sleep duration, hours in a sitting position, distance to the nearest sport club (meters) and alcohol consumption were significantly different between the stages of change for exercising (p-value < 0.05). To predict factors related to physical activity behavior, logistic regression with the backward stepwise method was applied. The negative state of each variable was considered as the reference. Age, gender, BMI, alcohol consumption, sleep duration, transporting by bicycle, having company and encouragement were identified as predictors of the pre-contemplation stage (Table 4). Gender, having a compeer, and transport by bicycle were predictors of the termination phase (Table 5).
Discussion
Regular physical activity contributes to a healthier lifestyle in all age groups and its positive effects on health are well documented (Emdadi, Nilsaze, Hosseini, & Sohrabi, 2007; Bouchard, Shephard, & Stephens, 1994). This study showed that a large number of people (more than half) were in the inactive stages (pre-contemplation, contemplation and preparation). These findings are consistent with other studies in which most of the adolescents, students and adults were in the early stages (Kim, 2007; Sharifirad, Charkazi, Tashi, Shahnazi, & Bahador, 2011; Dumith, Gigante, & Domingue, 2007). In the Jordan and Courneya studies, however, most of the participants were in the action and maintenance stages (Jordan, Nigg, Norman, Rossi, & Benisovich, 2002; Courneya & Bobick, 2000), which may be due to the fact that in developed countries regular physical activity is well organized and exercise equipment is more accessible. It is necessary to provide stage-matched interventions for improving physical activity, and information about the benefits of exercise and the risks of a sedentary life, for individuals in the earlier stages, especially pre-contemplation.
Our analysis showed that gender is a predictor of physical activity behavior. However, the relationship between the physical activity stages of change and gender was not similar to that found in other studies (Moattari et al., 2013; Emdadi et al., 2007; Jordan et al., 2002; Irwin, 2004). In this study, women were more frequently in the earlier stages and were among the least active participants. This can be due to cultural limitations and a lack of adequate space and facilities for women in these communities. In our study, the number of men was higher than the number of women in all of the stages except preparation. This may be because of the smaller proportion of women, and so in the regression analysis male gender was a predictor for both the pre-contemplation and termination phases. In one survey, there was no difference between men and women regarding stages of change, but that study only included vigorous activity (Boutelle, Jeffery, & French, 2004).
One of the predictors of the pre-contemplation stage in this study was age: with increasing age, people tended to be inactive. Most studies have found that the proportion of older people is higher in the pre-contemplation stage and lower in the maintenance stage (Dumith, Gigante, & Domingue, 2007; Kearney, Graaf, Damkjaer, & Engstrom, 1999). However, El-Gilany and Garber showed in their studies that age was a non-significant independent variable for physical activity patterns. The effect of aging on physical activity patterns may be explained in part by the increased level of chronic disease and disability in older adults (El-Gilany, Badawi, El-Khawaga, & Awadalla, 2011; Garber et al., 2008).
As BMI increased, participants were more likely to be in the pre-contemplation stage. Overweight individuals have been shown to be less physically active than normal-weight subjects (Garber et al., 2008; Kearney et al., 1999). In Kearney's research, the results for BMI differed from previous studies and showed that individuals with higher BMI were more likely to report an intention to begin, suggesting that being overweight may provide motivation for a program of weight loss involving increased physical activity (Kearney et al., 1999).
We found that alcohol consumption had a significant relation with the pre-contemplation stage, and those drinking alcohol were more frequent in that stage, but there was no relation between smoking and this phase. Other studies found that the percentage of current smokers is higher in the pre-contemplation stage and lower in the maintenance stage than for non-smokers or former smokers (Varo Cenarruzabeitia, 2003), but in Kearney's study there was no significant difference in the numbers of smokers and alcohol drinkers between the pre-contemplation and maintenance stages (Kearney et al., 1999).
Having family support, encouragement and a compeer for exercising was a key element in determining physical activity patterns. People with such support had a higher proportion in the termination stage than in pre-contemplation, as in another study (Boutelle et al., 2004); however, this finding was not consistent with Kearney's survey (Kearney et al., 1999). Social environmental factors can help or hinder physical activity. The effect of these factors on physical activity habits includes the attitudes of family, peers, and health professionals. The support and attitude of the spouse can be even more important than the participant's own (Dishman, Sallis, & Orenstein, 1985).
The relationship between sleep duration and physical activity was another finding of this study. People with longer sleep duration had a higher percentage in the earlier stages, and sleep duration was found to be one of the predictors of the pre-contemplation stage. Long sleep is associated with low physical activity, which is a strong predictor of death, but the association between long sleep duration and death is not fully understood. Bellavia et al. showed in their study that long sleep duration was associated with higher mortality risk and shorter survival only among participants with a low level of physical activity (Bellavia, Åkerstedt, Bottai, Wolk, & Orsini, 2014).
In this study, educational level, job, marital status, family size, transportation mode and health status did not predict the pre-contemplation and termination stages of exercise. This suggests that each group was as likely to begin an exercise initiation program, and that this is not limited to those with specific jobs who can afford health clubs, or to members of a specific group of people defined by educational level, health status, or family size.
prepared for action) (Prochaska & Marcus, 1994). However, current research suggests that, since a high percentage of people are in the pre-contemplation stage, as in the preparation stage, specific action directed towards the earlier stages is needed. In an attempt to be more successful with interventions and the promotion of physical activity, it is important to perform studies to recognize people's differences in their motivation to become active and to tailor the counseling message according to the individual's readiness for change.
The strengths of this study are as follows:
- To our knowledge, this is the first study to investigate the stages of change for physical activity and related factors in Iranian adults. Most studies on this issue were performed on specific populations.
- The interviewers were trained in administering the questionnaires and were not aware of the purpose of the study.
- We did not use random sampling, but we did our best to obtain representative samples by collecting them throughout the city.
It is important to keep the following limitations in mind. First, interpretation of the relationship between the physical activity stages of change and the determinants should match the study design: because of the cross-sectional design, in which all variables were measured simultaneously, their association does not necessarily establish causation. Second, the data are subjective and self-reported; actual physical activity patterns were not measured in our study. It is possible that the self-reported nature of physical activity in this research allows over-reporting of exercising. Third, the influence of seasonality on exercise patterns is another issue to be considered. The level of, and motivation for, exercise tend to be higher in spring and summer, when our survey was done. Comparison of the results is sometimes difficult, because the definitions of physical activity and stages of change differ across studies. Longitudinal designs are recommended in order to examine the stability of different physical activity predictors across time and to remove the seasonality effect.
Conclusion
The results of this study indicated that the majority of people are in the sedentary stages and do not meet the recommended levels of exercise behavior. Given the established benefits of regular physical activity, it is necessary to tailor interventions to enhance physical activity among individuals. Factors identified as predictors of physical inactivity, such as age, sex, BMI, and family and friend support, should be taken into account in the design of interventions.
Based on TTM research on sedentary subjects, to move people from the inactive to the active stages, emphasizing the personal, long-term benefits of physical activity and reducing the barriers to and cons of exercise will be useful in facilitating the adoption of exercise.
"year": 2015,
"sha1": "4c38b03315b7e863393e0a2451ede496401991b4",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/gjhs/article/download/46880/25543",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2068da5a7930887189d0a29ae225ee650062ba6b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The effect of dynamic relationship between domestic market and world market on stock returns volatility
The purpose of this study is to analyze the effect of the dynamic relationship between a domestic market and the world market on stock returns volatility. To test this, we use the generalized autoregressive conditional heteroscedasticity (GARCH) model. The dynamic relationship between the domestic market and the world market was measured with the dynamic conditional correlation (DCC) model, while the conditional variance used as the measure of stock returns volatility was generated from the variance equation of the GARCH(p,q) procedure. Daily market indices and stock prices over the period from 01:01:2003 to 30:12:2016 were taken from the Chinese, Philippine, and world stock markets. Furthermore, the causality effect was also analyzed during the global financial crisis, over the period 01:03:2008 to 31:03:2009. This study empirically suggests that the dynamic relationship between each domestic market and the world market has a positive effect on stock returns volatility. This evidence holds both in the overall sample period and during the global financial crisis period.
Introduction
The issue of capital market integration has received a lot of attention from academics. By contrast, research on the effect of capital market integration on stock returns volatility is still relatively rare, including for emerging markets such as the Philippine and Chinese capital markets. Integration can be reflected in the relationship among capital market returns movements. The theoretical framework of international portfolio diversification states that the higher the level of similarity of returns movements among capital markets, the lower the benefits of international investment activities, because efforts to minimize portfolio variance or stock returns volatility would no longer prevail.
Linkage between capital market integration and returns volatility has varied explanations. One is that capital market liberalization often increases the relationship between local market returns and the world market, but does not encourage local market volatility [1]. Another explanation of how integration in the form of capital market liberalization influences returns volatility is that integration brings in a group of new investors, mostly foreign institutional investors from developed markets. Their decisions are based more on rational investment analysis and their strategies focus on fundamental valuation factors, so the possibility of volatility is reduced [2]. On the other hand, an open stock market can also be entered by uncertainty, reflected in increased stock returns volatility transmitted from other integrated stock markets. Financial liberalization by developed countries, a stage of strengthening financial integration, tends to reduce volatility and improve the level of informational efficiency. Another benefit is the decrease in companies' cost of capital, which ultimately strengthens economic growth. However, in the short term, financial liberalization is often accompanied by waves of crisis; many stock markets receive systemic impacts and volatility spillovers due to the delivery of information from other markets [3]. A consequence of financial liberalization and of financial markets being integrated into the world financial system is an increase in capital flows that have a real impact on the financial market economies of developing countries. On the one hand, this has a positive impact: these capital flows provide funding for many domestic investment projects when domestic capital deposits are insufficient, and they are also an instrument for developing the domestic capital market. However, large-scale capital flows carry certain risks for recipient countries, especially when their financial systems are not sufficiently advanced and domestic macroeconomic and financial policies are weak or inconsistent [4].
Domestic investors who trade in markets that are integrated with the global financial system should pay more attention to developments in other capital markets, especially the most dominant ones. They will attempt to keep abreast of developments in developed markets. However, investors with only limited ability will pursue a minimum-cost strategy to keep up with those developments. When financial crisis and volatility occur in developed markets, such investors panic and quickly sell their shares. Their actions in the domestic market, taken in response to events in the international market, cause higher volatility in the domestic market. Such a contagion effect has been evidenced in the study of [5]. The basic theory underlying studies of international capital market integration is modern portfolio theory cast in the international context, which concerns diversification into securities held not only in the domestic capital market. The integration of capital markets makes it easier for international investors to allocate funds and hold securities, and to achieve the expected returns at minimum risk. Therefore, capital market integration is important for investors and policy makers.
For investors, the main interesting consideration lies in its implications for international portfolio diversification [6]. Integration influences the opportunities for international portfolio diversification, which allow investors to allocate their capital efficiently [7]. For policy makers, integration can help expand the base and reach of financial products so as to strengthen the domestic capital market to compete globally. In an effort to reduce the possibility of asymmetric shocks, integrated financial markets can protect financial stability [16], develop the economy's capacity to withstand shocks, and moderate the risk of financial transmission [18].
One way to determine whether a capital market is integrated is to test the level of arbitrage activity in the long run. If the capital market is integrated, arbitrage makes capital markets move together in the long run and there are limited opportunities to earn more than normal returns through international portfolio diversification. In contrast, if the capital market is not integrated, arbitrage activity does not lead capital markets to move together in the long term and there are potential long-term returns from international portfolio diversification [7]. A number of studies show that Asian markets usually have low exposure to global factors and low integration with the western economies [8,9]. In the last decade, China and the ASEAN countries have become an important part of the international portfolios of fund managers, because they help diversify portfolios and thereby reduce portfolio risk [10]. The empirical study of [11] investigates the dynamic convergence process among the capital markets of China and the ASEAN-5 countries using recursive cointegration analysis. The results show that the six capital markets had more than one cointegration vector from 1994 to 2002. In general, regional financial integration between China and ASEAN-5 has gradually increased. The error correction coefficient between China and Indonesia is negative, while those between China and the other countries are insignificant.
The financial literature examining the effect of capital market integration on returns volatility shows mixed findings. Some findings show that market integration can increase volatility [12,13]. Opposite findings exist: [14] state that financial liberalization does not have a significant impact on volatility. In addition, [15] conclude that capital market integration does not cause excessive volatility in emerging markets and that volatility decreases gradually under the influence of financial liberalization. Furthermore, [16] state that the expansion of the investor base due to rising levels of liberalization leads to a reduction in the total volatility of stock returns. [19] state that if national capital markets are not perfectly positively correlated, investors can reduce their portfolio variance without sacrificing expected returns by diversifying internationally. Strengthening capital market integration with developed countries has a number of benefits, including risk diversification and a reduced cost of capital. In addition, some previous research reports that financial liberalization tends to reduce returns volatility and improve the level of informational efficiency in emerging markets. These benefits ultimately help strengthen a country's economic growth.
Materials and Methods
This empirical research observes companies listed on the Philippine and Chinese capital markets. Both capital markets are well known as being segmented from the global market and belong to the emerging markets, which have characteristics different from the other markets in the Asian region. The units of analysis are daily stock prices and market indices, with a sample period from 1 January 2003 to 30 December 2016. The samples are issuers with a higher level of liquidity and larger market capitalization on the Philippine Stock Exchange (PSE) and the Shanghai Stock Exchange (SSE). As a proxy for the world market index, we use the MSCI All Countries World Index.
The first step of the analysis is to determine the measure of the relationship level between a domestic market and the world market. This measure is produced by the dynamic conditional correlation (DCC) model between each observed capital market and the world market. The DCC model was first proposed by [20] and has been applied in prior studies, among others [21] and [22]. The relationship level then acts as an independent variable for the returns volatility in the GARCH(p,q) model.
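The DCC recursion itself is compact. The sketch below is our illustration, not the authors' code: the standardized residuals `z` are assumed to come from prior univariate GARCH fits, and the DCC parameters `a` and `b` are fixed for simplicity, whereas in practice they are estimated by quasi-maximum likelihood.

```python
import numpy as np

def dcc_correlation(z, a=0.05, b=0.93):
    """Dynamic conditional correlation between two return series.

    z : (T, 2) array of standardized residuals (returns divided by their
        univariate GARCH conditional volatilities).
    a, b : DCC parameters with a + b < 1 (fixed here for illustration).
    """
    T = z.shape[0]
    Q_bar = np.cov(z, rowvar=False)       # unconditional covariance of z
    Q = Q_bar.copy()
    rho = np.empty(T)
    for t in range(T):
        # Engle's DCC recursion: Q_t = (1-a-b) Q_bar + a z_{t-1} z_{t-1}' + b Q_{t-1}
        if t > 0:
            Q = (1 - a - b) * Q_bar + a * np.outer(z[t - 1], z[t - 1]) + b * Q
        rho[t] = Q[0, 1] / np.sqrt(Q[0, 0] * Q[1, 1])
    return rho

# Example with simulated residuals (3375 daily observations, as in the study):
rng = np.random.default_rng(0)
z = rng.standard_normal((3375, 2))
dcc = dcc_correlation(z)
```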
Returns volatility is obtained from the second equation of the GARCH(p,q) model, in the form of the returns variance. Employing the GARCH(p,q) model, the significance of the influence of the relationship level between a domestic capital market and the world market on returns volatility can be assessed. Volatility is the spread of all possible outcomes of an uncertain variable [23]. The measure of returns volatility in this study is the conditional variance of returns. Besides this measure, there are a number of alternative measures of volatility; in finance, volatility often refers to the standard deviation (σ) or the variance (σ²) of returns. Among the available measures of returns volatility, the time series models considered more sophisticated belong to the ARCH family, reviewed extensively by [24] and [25]. Unlike standard deviation based measures, ARCH models formulate the conditional variance of returns through a maximum likelihood procedure. The first example of an ARCH model is the ARCH(q) model proposed by [26]; GARCH(p,q) models were subsequently proposed by [27]. Empirical findings indicate that GARCH is a more parsimonious model than ARCH, and GARCH(1,1) is the most popular structure for most financial time series data. In addition, EGARCH, TGARCH (which is similar to GJR-GARCH), QGARCH, and various other nonlinear GARCH models have been developed. Mathematically, the role of the relationship level as an independent variable for the returns volatility, proxied by the variance, is expressed by the GARCH equation.
The variance equation of the GARCH(1,1) model used in this research, augmented with the dynamic conditional correlation (DCC) variable, is expressed as follows:

σ²_t = ω + α₁ ε²_{t-1} + β₁ σ²_{t-1} + δ DCCR_{Wi,t}

where σ²_t is the variance of the GARCH(1,1) model as a proxy for returns volatility, and DCCR_{Wi} is the dynamic conditional correlation of returns between domestic capital market i and the world market W as a proxy for the relationship level.
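Since many packaged GARCH routines do not accept an exogenous regressor in the variance equation, a direct maximum likelihood sketch of this specification is shown below; the Gaussian likelihood, starting values, and variable names are our illustrative assumptions, not the authors' estimation code.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, r, dcc):
    """Gaussian negative log-likelihood of GARCH(1,1) with a DCC regressor.

    r   : demeaned return series.
    dcc : DCC series aligned with r (the relationship-level variable).
    """
    omega, alpha, beta, delta = params
    T = r.size
    sigma2 = np.empty(T)
    sigma2[0] = r.var()                  # initialize at the sample variance
    for t in range(1, T):
        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1] + delta * dcc[t]
    sigma2 = np.maximum(sigma2, 1e-12)   # guard against non-positive variance
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r**2 / sigma2)

def fit_garch_x(r, dcc):
    x0 = np.array([0.01, 0.05, 0.90, 0.0])   # starting values
    bounds = [(1e-8, None), (0.0, 1.0), (0.0, 1.0), (None, None)]
    res = minimize(neg_loglik, x0, args=(r, dcc), bounds=bounds, method='L-BFGS-B')
    return res.x                              # omega, alpha, beta, delta
```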
Results and Discussion
To examine the effect of the dynamic relationship level between a domestic capital market and the world market on stock returns volatility, the first step is to determine the values of the dynamic correlation over the observation period. These values then serve as an independent variable for stock returns volatility in the GARCH model. For the overall sample period in the Chinese capital market, the statistical estimation produces the following variance equation of the GARCH(1,1) model:

σ²_t = 0.002*** + 0.053*** ε²_{t-1} + 0.942*** σ²_{t-1} + 0.864*** DCCR_{CN,t}

The dynamic conditional correlation (DCC) coefficient is positive at 0.864 and significant at the 1% level; the daily sample comprises 3375 observations. The estimation for the Chinese capital market indicates that the DCC coefficient, as a proxy of the relationship level, is positive and statistically significant in the conditional returns variance equation. This means that the level of the relationship between the Chinese capital market and the world market has a positive effect on stock returns volatility in this market. In other words, the higher the integration level of the Chinese capital market with the world market, the greater the returns volatility.
The variance equation of the GARCH model for the Philippine capital market over the full sample period was estimated in the same way. For the Chinese capital market during the global financial crisis period, the estimated variance equation takes the form

σ²_t = … + 2.808** DCCR_{CN,t}

The estimation result shows that the DCC coefficient is positive at 2.808 and significant at the 5% level. The estimate obtained for the Chinese capital market during the global crisis period thus indicates that the DCC coefficient is positive and statistically significant in the conditional returns variance equation. This means that the level of the relationship between the Chinese capital market and the world market has a positive effect on the volatility of stock returns; in other words, the higher the level of integration of the Chinese capital market with the world market, the greater the returns volatility.
The subsequent analysis concerns the Philippine capital market during the global crisis period. The variance equation of the GARCH model for this market is:

σ²_t = 0.349*** + 0.259** ε²_{t-1} + 3.775*** DCCR_{PL,t}

The estimated equation from the GARCH(1,0) model for the Philippine capital market shows that the DCC coefficient is positive at 3.775 and highly significant, at the 1% level, in the conditional returns variance equation. This means that during the global crisis period, the higher the level of integration of the Philippine capital market with the world market, the greater the returns volatility.
The effect of the market relationship level on stock returns volatility was analyzed using the GARCH(p,q) technique. The returns variance, as a measure of returns volatility, is generated by this technique and becomes the dependent variable in the second equation of the GARCH(p,q) model. Thus, the returns variance in the regression analysis of the GARCH(p,q) model consists of predicted values.
The estimated equations statistically suggest that the dynamic conditional correlation, as a measure of the relationship level, has a positive effect on the returns variance. This statistical evidence results from regression analysis using the GARCH(p,q) model, conducted through four tests: both the Chinese and Philippine capital markets, each for the entire sample period and for the GFC sample period. The results indicate that the relationship level generally has a positive effect on returns volatility.
The positive sign of the coefficient on the relationship level accords with the logical framework that the higher the level of the relationship among capital markets, the higher the returns volatility. However, this contradicts the theoretical framework of market integration in the process of financial liberalization, according to which integration should fundamentally have a negative effect on returns volatility. Several prior studies report that strengthening financial integration tends to reduce returns volatility and to improve the level of informational efficiency.
The findings of this study are in line with the arguments of [12] and [13], who report that market integration, with its liberalization process, can increase volatility. Different findings appear in the study of [14], which states that financial liberalization, as part of financial integration in terms of eliminating institutional restrictions, has no significant impact on volatility. Opposite findings appear in the conclusion of [16], suggesting that the expansion of the investor base due to a rising level of liberalization causes a reduction in the total volatility of stock returns; similarly, [15] conclude that capital market integration does not cause excessive volatility in emerging markets. In addition, [14] provide arguments for why the level of integration should have a negative impact on returns volatility: the implementation of financial liberalization by developed countries, as one of the stages of realizing financial integration, has a number of benefits, including risk diversification, a reduction in the cost of capital, and informational efficiency. These benefits have great potential to help strengthen economic growth, and implementing such policies in emerging markets has a number of consequences.
Conclusion
Statistical testing using the GARCH(p,q) technique of the influence of the dynamic conditional correlation on the conditional returns variance generates similar results for the overall sample period and for the global financial crisis period. It suggests that the relationship level of a capital market with the global market has a positive influence on the volatility of stock returns. Employing stock prices and market indices, this evidence is found in both the Chinese and Philippine capital markets.
This conclusion contradicts the theoretical framework of market integration in the process of financial liberalization, which should fundamentally affect returns volatility negatively. In other words, strengthening financial integration should tend to reduce returns volatility. Another explanation supporting this framework suggests that integration brings in a group of new investors, mostly institutional foreign investors from developed markets, whose decisions are based more on rational investment analysis and whose strategies focus on fundamental valuation factors, so the possibility of volatility decreases.
In contrast, the evidence of this study affirms another theoretical framework, that of volatility spillover and contagion risk. It states that volatility spillover and contagion risk can occur among capital markets and can increase returns volatility. Capital markets are often hit by waves of crises, and they receive systemic impacts and volatility spillovers due to the delivery of information from other markets. Therefore, the level of capital market integration during a crisis period has a positive effect on returns volatility: at times of crisis, a higher level of integration is followed by higher stock returns volatility. A higher relationship of returns between a domestic capital market and the world market can increase stock returns volatility, and this is interpreted as evidence supporting the contagion hypothesis.
"year": 2019,
"sha1": "00bd1c9ebdef75af875d618e634b20b349c9cf6c",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/255/1/012052",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3150367323d0ca435e1e8d86eef44f72a797f219",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics",
"Physics"
]
} |
The effect of bars and transient spirals on the vertical heating in disk galaxies
The nature of vertical heating of disk stars in the inner as well as the outer region of disk galaxies is studied. The galactic bar (which is the strongest non-axisymmetric pattern in the disk) is shown to be a potential source of vertical heating of the disk stars in the inner region. Using a nearly self-consistent high-resolution N-body simulation of disk galaxies, the growth rate of the bar potential is found to be positively correlated with the vertical heating exponent in the inner region of galaxies. We also characterize the vertical heating in the outer region where the disk dynamics is often dominated by the presence of transient spiral waves and mild bending waves. Our simulation results suggest that the non-axisymmetric structures are capable of producing the anisotropic heating of the disk stars.
INTRODUCTION
The random velocities of disk stars in the solar neighborhood have been known since the 1940s to increase with time, a fact verified by recent Hipparcos observations. The analysis of the Hipparcos data (Binney et al. 2000) reveals that the stellar velocity dispersions increase continuously (rather than episodically, as claimed by Edvardsson et al. (1993)) with time; this is the well-known disk heating problem in our Galaxy. The phenomenon of disk heating is not specific to our Galaxy, since the heating of stellar disks is also known to occur in external galaxies such as NGC 2985, NGC 2460, and NGC 2775 (Gerssen et al. 2000; Shapiro et al. 2003). Although mechanisms have been identified for disk heating, a full understanding has yet to be achieved. It is known that disk heating is anisotropic; that is, disk stars are preferentially heated in the plane compared to the perpendicular direction. Thus, the main problem of disk heating is twofold. The first is the requirement to explain the continual heating of disk stars, especially in the case of isolated galaxies for which tidal heating or external perturbations can be ruled out. Second, such a mechanism must be anisotropic in nature. Taken together, disk heating remains a fundamental problem in galactic dynamics.
Historically, Spitzer & Schwarzschild (1951, 1953) first showed that the secular increase in the stellar velocity dispersions could arise as a result of the scattering of disk stars by giant molecular clouds (GMCs), based on two-dimensional calculations. Subsequently, there has been considerable development and progress in understanding the disk heating problem in galaxies. For example, Lacey (1984) extended the previous theoretical work to a three-dimensional disk and found that the velocity dispersions increase with time, t, as t^γ where γ = 0.25. In addition, it was demonstrated that transient stochastic spiral arms (Barbanis & Woltjer 1967; Carlberg & Sellwood 1985) could also produce secular heating of the disk stars; however, such effects were only effective in the plane of the galaxy. The effect of giant molecular clouds in combination with the effect of stochastic spirals could scatter the stars off the plane (Jenkins & Binney 1990) to account for both the radial and vertical heating. Alternatively, using an extensive orbit analysis of stars subject to a 3D galactic bar potential, Pfenniger (1984, 1985) has shown that a large fraction of the phase space is chaotic and suggested that all the disk stars trapped in the chaotic phase space would be heated up in all directions. Other possible candidate heating mechanisms include massive dark halo objects, e.g. black holes or dark clusters of mass ∼ 10⁶ M_⊙ (Lacey & Ostriker 1985; Carr & Lacey 1987), or a combination of giant molecular clouds and halo black holes (Hänninen & Flynn 2002). Within the framework of standard cosmological models, a spectrum of subhalos and their substructures could heat up the disk (Sanchez-Salcedo 1999; Ardi et al. 2003). However, Font et al. (2001) found that the subhalos are not efficient perturbers for disk heating because the orbits of the subhalos rarely take them near the disk. Finally, satellite infall onto the galactic disk could produce an abrupt heating of the disk (Quinn et al. 1993), or moderate heating could arise due to sinking satellites on prograde orbits (Velazquez & White 1999). The many possible heating mechanisms of the disk stars have been reviewed by Pfenniger (1993). Note that while most mechanisms produce radial heating of the disk stars, they have not been shown to provide a satisfactory explanation of the detailed nature of vertical heating. In fact, studies focusing on the vertical heating are comparatively lacking.
To be more precise, a generic and robust internal source remains to be identified which could be responsible for the continual vertical heating of the disk.
Here, we primarily concentrate on the issue of vertical disk heating. Galactic disks often consist of various non-axisymmetric patterns such as bars, spirals, warps, corrugations, and rings. These patterns may be growing or transient and, in general, may be time-dependent.
It is natural to ask whether such patterns contribute to the vertical heating of the disk. That is, what is the nature of the vertical heating due, for example, to a growing bar? How does a growing bar heat the disk stars in the vertical direction? Heating due to non-axisymmetric patterns is important for isolated galaxies, since even for the Milky Way there are prominent and characteristic signatures of non-axisymmetric patterns (Dehnen 1998) in the phase space of the solar neighbourhood.
In this paper, we explore possible mechanisms giving rise to continuous heating of the disk stars in the vertical direction. In particular, we show that stellar bars, which form nearly spontaneously in disk galaxies, are capable of efficiently scattering stars in the vertical direction. Since transient spiral arms are often associated with bars, they also contribute to the disk heating. In order to understand the nature of the vertical heating of the disk stars, we have performed a large (statistically meaningful) number of N-body simulations of model disk galaxies constructed over a wide range of parameter space.
The paper is organized as follows. Section 2 describes the galaxy models used in the simulations. A description of the N-body simulations and the physical basis for the choice of our parameter space is presented in section 3. The heating model used to interpret the numerical simulations is described in section 4. Section 5 describes the heating due to non-axisymmetric structures. The results of the correlation studies are presented in section 6. The discussion and conclusions are presented in sections 7 and 8, respectively.
GALAXY MODELS
The construction of a very thin equilibrium disk model is a non-trivial procedure in N-body simulation. However, using the self-consistent bulge-disk-halo models of Kuijken & Dubinski (1995, hereafter KD95), we are able to construct extremely thin equilibrium disks (with a ratio of scale height to scale length of ∼ 0.01). Their prescription provides a nearly exact solution of the collisionless Boltzmann and Poisson equations, which is suitable for studying disk stability related problems and allows one to construct a wide range of initial models from a large parameter space. All the components in our models are active (i.e., the gravitational potential of each component can respond to an external or internal perturbation) and hence provide a realistic representation of the evolution and structure of the galaxies. Below, we briefly describe each component of the model for the sake of completeness; for more details, the reader is referred to KD95.
A spherical live bulge is constructed from the King model (Binney & Tremaine 1987), for which the distribution function (DF) takes the form

f(E) = ρ_b (2πσ_b²)^{-3/2} e^{(Ψ_0-Ψ_c)/σ_b²} [e^{(Ψ_c-E)/σ_b²} - 1] for E < Ψ_c, and f(E) = 0 otherwise. (1)

Here, the bulge is specified by three parameters, namely the cut-off potential (Ψ_c), the central bulge density (ρ_b), and σ_b, which governs the central bulge velocity dispersion. The depth of the potential well is measured by Ψ_0.
(2) The velocity and density scales are given by σ h and ρ 1 respectively. The halo core radius R c and the flattening parameter q together with ρ 1 are contained in the parameters A, B, and C. All the simulated halos are oblate in shape and kept at a constant value q = 0.8 for simplicity.
The disk distribution function is constructed using the approximate third integral E_z = (1/2)v_z² + Ψ(R, z) - Ψ(R, 0), the energy of the vertical oscillations, which is approximately conserved for orbits near the disk mid-plane. The radial density of the disk is approximately exponential with a truncation, and the vertical density is chosen to depend exponentially on the vertical potential Ψ_z(R, z) = Ψ(R, z) - Ψ(R, 0). The volume density of the axisymmetric disk is

ρ_d(R, z) ∝ e^{-R/R_d} erfc[(R - R_out)/(√2 δR_out)] e^{-Ψ_z(R,z)/σ̃_z²(R)}, (3)

where σ̃_z²(R) governs the vertical structure of the disk and erfc is the complementary error function. In the above equation, M_d (the disk mass), R_d (the scale length), and h_z (the scale height) set the normalization and scales, as illustrated in the sketch below.
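As a rough illustration of eq. (3), the sketch below evaluates the truncated exponential disk density. Since the true vertical factor requires the self-consistent potential, we substitute here the self-gravitating isothermal-sheet result, exp(-Ψ_z/σ̃_z²) → sech²[z/(2h_z)], an assumption of this sketch; the default parameter values follow the text (R_out = 6.5 R_d, truncation width 0.3 R_d).

```python
import numpy as np
from scipy.special import erfc

def disk_density(R, z, M_d=1.0, R_d=4.0, h_z=0.2, R_out=26.0, dR_out=1.2):
    """Sketch of the truncated exponential disk density of eq. (3).

    The vertical factor exp(-Psi_z/sigma_z^2) is replaced by the
    isothermal-sheet result sech^2(z / (2 h_z)); the true model
    evaluates the actual vertical potential instead.
    """
    radial = np.exp(-R / R_d) * erfc((R - R_out) / (np.sqrt(2.0) * dR_out))
    vertical = 1.0 / np.cosh(z / (2.0 * h_z))**2
    # With this vertical profile, integrating over z gives a surface density
    # Sigma(R) = (M_d / (2 pi R_d^2)) exp(-R/R_d) times the truncation factor.
    norm = M_d / (8.0 * np.pi * h_z * R_d**2)
    return norm * radial * vertical
```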
There are 13 parameters required to construct a particular galaxy model. The galactic disk produced in this manner remains in equilibrium as long as the simulation runs. The dark matter halo generated using the lowered Evans model (Evans 1993) has a constant density core which is probably appropriate for the low surface brightness galaxies which we simulate. Each model galaxy is constructed such that an initially almost flat rotation curve is produced. The disk outer radius (R out ) is fixed at about 6.5R d and a truncation width ∼ 0.3R d is adopted within which the disk density smoothly decreases to zero at the outer radius.
N-BODY SIMULATION
The primary aim of the present study is to achieve an understanding of the nature and origin of vertical heating of disk stars. In order to gain the necessary insight, we perform a large number of simulations of isolated galaxies. The models are evolved for a sufficiently long period of time so as to examine the nature of vertical heating in different spatial regions of the disk.
Using the KD95 method, we build the initial conditions for all the model galaxies. The method is not fully self-consistent because the disk distribution function is not known exactly. Each model galaxy consists of a bulge, disk, and dark matter halo, as described above. Since the initial conditions are constructed from distribution functions, they are suitable for studying the long-term evolution of non-axisymmetric patterns in the disk. Building initial galaxy models with prescribed properties such as the Toomre parameter Q(r) = σ_r(r)κ(r)/(3.36 G Σ(r)), the dark-to-disk mass ratio (M_h/M_d), or the ratio of velocity dispersions (σ_z/σ_r) is not straightforward, because the bulge and halo models are not derived from their mass profiles but from their distribution functions. Hence, the relation between the Toomre Q parameter and M_h/M_d, or between σ_z/σ_r and M_h/M_d, is not known from the outset. Here, σ_r, σ_z, κ, and Σ denote the radial and vertical velocity dispersions, the epicyclic frequency, and the surface density of stars at a given radius, respectively.
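For instance, the Q profile can be evaluated directly from binned simulation profiles; the function below is a small sketch assuming the profiles have already been measured in the indicated units (the units and array names are our assumptions).

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def toomre_q(sigma_r, kappa, Sigma):
    """Radial Toomre parameter Q(r) = sigma_r * kappa / (3.36 G Sigma).

    sigma_r : radial velocity dispersion profile [km/s]
    kappa   : epicyclic frequency profile [km/s/kpc]
    Sigma   : stellar surface density profile [M_sun/kpc^2]
    """
    return sigma_r * kappa / (3.36 * G * Sigma)
```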
The disk, bulge, and halo parameters are chosen such that an almost flat rotation curve is produced in the outer region. We assume a disk scale length of R_d = 4.0 kpc. The radial and vertical forces are normalized so as to produce a circular velocity V_c = 200 km s⁻¹ at about 2 disk scale lengths. The unit of time is 2.1 × 10⁷ yr. The initial disk scale height (h_z) varies from 40 pc to 400 pc across our sample of model galaxies. The softening length is 10 pc for the disk particles, 40 pc for the bulge particles, and 36 pc for the halo particles. A total of 2.2 × 10⁶ to 1.0 × 10⁷ particles has been used to simulate the model galaxies. The dark matter halo mass varies from 2 to 40 × M_d, which gives, on average, a mass resolution of ∼ 10⁵ M_⊙. We have observed that with such a mass resolution the shot noise in the simulation is reduced considerably and bending instabilities do not grow sufficiently to induce an outer disk warp during the evolution. Each model galaxy in our simulation has been evolved with the Gadget code (Springel et al. 2001), which uses a variant of the leapfrog method for the time integration. The forces between the particles are calculated using a (somewhat modified) Barnes-Hut tree algorithm with a tolerance parameter θ_tol = 0.7. The integration time step is ∼ 0.4 Myr, and each model galaxy is evolved for more than 5 Gyr in our simulation. The orbital time scale at the disk half-mass radius is ∼ 294 Myr for the model mk97 (see Table 1). For convenience, we describe the parameter space for the simulations below.
3.1. Selection of the parameter space

A wide range of parameters in the 3-D space spanned by [Q, σ_z/σ_r, M_h/M_d] is considered. The exact relationship between these parameters depends on how an active dark matter halo interacts with the other active disk and bulge components. However, we notice that if the central radial velocity dispersion is held constant, there appears to be a positive correlation between the Toomre Q parameter and the dark halo-to-disk mass ratio (M_h/M_d), as expected from a simple understanding of disk dynamics with a rigid dark halo. The initial thickness of the disks (h_z/R_d) in our sample of model galaxies ranges from thin (h_z/R_d ∼ 0.1 or 0.15) to superthin (h_z/R_d < 0.1); for example, h_z/R_d ∼ 0.07 in the superthin galaxy UGC 7321. We explore a wide region of the parameter space spanned by Q, M_h/M_d, and σ_z/σ_r (see Fig. 1) and here provide the physical basis for our choice. Each model galaxy has a distinct Q and σ_z/σ_r radial profile; we quote the values of Q and σ_z/σ_r at the disk half-mass radii (see Table 1 for representative models). Specifically, most of the galactic disks are vertically cold but radially hot, because we primarily aim to simulate low surface brightness (hereafter LSB) galaxies, in which the rotation curve is normally dominated by the dark matter halo (de Blok et al. 2001). In fact, many of the rotation curves in our sample (e.g., Fig. 2) are such that the dark matter dominates the disk rotation curve from the center of the galaxy. It is also known that the stellar disks of LSB galaxies are thinner than their high surface brightness (hereafter HSB) counterparts (Bizyaev & Kajsin 2004). LSB galaxies are known to be poor in star formation activity despite the presence of neutral hydrogen gas, as in otherwise normal galaxies. This may indicate that these galaxies are dynamically hot (with high Q, as most of the galaxies in our sample are), which protects them from disk instabilities. Even if the disks suffer from instabilities, the resulting non-axisymmetric features could be young. In fact, the low metallicities, high gas-to-star mass ratios, and blue colors of most LSB galaxies indicate that these systems are probably younger than their HSB counterparts (Vorobyov et al. 2009).
HEATING MODEL
We use the following simple expression (eq. 4) to fit all our simulation data, treating this functional form as a way to quantify the effects of heating. It will be applied to the central region and to the outer region of the galaxy.
σ_z(t) = σ_0 + σ_1 (t/τ_orb)^α (4)

In the above equation, σ_0 and σ_1 are independent of time, and τ_orb is the orbital time scale measured at the disk half-mass radius. α is the heating exponent, the logarithmic time derivative of the vertical velocity dispersion, α = d ln σ_z/d ln t. We determine the time evolution of σ_z in the inner region (< 1R_d) and in the outer region (∼ 4R_d) of the disk; these definitions of the two regions are maintained throughout the paper. The heating in each of these regions, characterized by apparently distinctive dynamical behaviors, is described by the resulting heating exponents, denoted α_in and α_out respectively. Higher values of α imply a steeper rate of heating as the galaxy evolves, whereas lower values indicate a more gradual heating process. A thorough knowledge of these values is crucial for understanding the type of heating process at work in the galaxy and possibly for identifying the mechanism responsible for the heating. The fitting is performed using the robust nonlinear least squares algorithm of Levenberg and Marquardt (Markwardt 2009; More 1978). The best-fit model parameters are chosen based on the minimum value of the chi-square (χ²_min). Using the resulting parameters, a useful estimate of the heating time scale can be obtained from the above formula, and this can also be compared with the relaxation time scale for the model under consideration.
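A minimal sketch of such a fit, using SciPy's Levenberg-Marquardt implementation, is shown below; the synthetic σ_z(t) series mimicking an mk97-like inner region and the array names are our illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def heating_law(t, sigma0, sigma1, alpha):
    """Eq. (4): sigma_z(t) = sigma_0 + sigma_1 (t / tau_orb)^alpha, t in units of tau_orb."""
    return sigma0 + sigma1 * t**alpha

# t_orb: time in units of the orbital time at the half-mass radius;
# sigma_z: vertical velocity dispersion measured at each snapshot.
t_orb = np.linspace(0.5, 25.0, 60)
sigma_z = 0.09 + 1.7e-4 * t_orb**2.31 + 1e-3 * np.random.default_rng(1).standard_normal(60)

popt, pcov = curve_fit(heating_law, t_orb, sigma_z, p0=[0.05, 1e-3, 1.0], method='lm')
sigma0, sigma1, alpha = popt

# Heating time scale of eq. (5): tau_h = tau_orb * (sigma0 / (alpha * sigma1))**(1/alpha)
tau_h = (sigma0 / (alpha * sigma1))**(1.0 / alpha)
print(alpha, tau_h)   # for mk97-like parameters, tau_h comes out near 10 tau_orb
```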
Suppose that in a time τ_h the change in the vertical velocity dispersion is Δσ_z ∼ σ_0(r) at a particular radius in the disk. Then, differentiating eq. [4], it can be shown that

τ_h = τ_orb [σ_0/(α σ_1)]^{1/α}. (5)

We evaluate this time scale for the different models presented here. In comparison, the energy relaxation time scale for an N-body model with a total number of particles N, size R_s, and softening ε_s can be written, following Huang et al. (1993), as

τ_relax = [N/(8 ln(R_s/ε_s))] τ_cross, (6)

where we have used the crossing time scale τ_cross = [R_s/(2π R_{1/2})] τ_orb, assuming that the circular velocity V_c remains almost flat beyond the half-mass radius R_{1/2}. Considering N = 5 × 10⁶, R_s = 10, R_{1/2} = 2.5, and softening parameter ε_s = 0.003, we have τ_relax ≈ 5.6 × 10⁴ τ_orb. Previously, disk heating has been modelled as a diffusion process in velocity space. If the diffusion coefficient D_z remains constant in time, it can be shown that the vertical velocity dispersion evolves (Lacey 1991) as

σ_z(t) = σ_{z0} + (D_z t)^γ. (7)

Note that the above formula is applicable to the radial dispersion as well; the exponent γ = 1/2 corresponds to heating due to halo black holes (Lacey & Ostriker 1985). In addition to eq. [4], we also apply the diffusion model to fit the simulation data of some models for comparison and evaluate the corresponding diffusion coefficient D_z. Let τ_d be the time scale over which the change in the vertical velocity dispersion is Δσ_z ∼ σ_{z0}. Then τ_d is obtained as

τ_d = σ_{z0}^{1/γ}/D_z. (8)

We denote τ_d as the diffusion time scale; in general, τ_d and τ_relax are comparable. The internal evolution of a disk galaxy is driven by various non-axisymmetric perturbations, such as bars or spirals, which redistribute energy and angular momentum within the disk and/or between the halo and the disk. The time scale of this process is longer than the dynamical time scale (τ_dyn) and is usually known as the secular evolution time scale (τ_sec) (Kormendy & Kennicutt 2004); τ_sec can be a few to a few tens of orbital time scales (τ_orb). Ordering these time scales, τ_dyn < τ_orb < τ_sec < τ_d ≲ τ_relax. We will use these definitions in the following and compare the relevant time scales in order to determine the relation between τ_h and the others.
NON-AXISYMMETRIC STRUCTURES AND DISK HEATING
It is well established that stars undergo continuous heating in both the radial and vertical directions as galaxies age. Although there has been substantial progress in understanding the radial heating, there is little progress on the vertical heating. Spiral structure has played a significant role in studies of disk heating in the radial direction. We note, however, that a steadily rotating quasi-stationary spiral pattern heats the disk stars only negligibly (Binney & Tremaine 1987); sufficient heating of disk stars requires the spiral structure to be stochastic in nature. Transient, swing-amplified spiral density waves (Fuchs 2001) or multiple spiral density waves (Minchev & Quillen 2006) have recently been shown to be effective in the radial heating of disk stars.
The primary mechanisms suggested in early works for the vertical heating of the disk stars range from a combination of stochastic spirals and giant molecular clouds (Jenkins & Binney 1990) to resonant heating of trapped stars in the chaotic phase space in the presence of a 3D bar potential (Pfenniger 1985). More recently, other mechanisms have been suggested, including a bending instability of the entire disk or of the bar (Sotnikova & Rodionov 2003) and the repeated impact of globular clusters (Vande Putte et al. 2009), although the latter produces only a small change in the vertical velocity dispersion. Disk galaxies are rich in non-axisymmetric structures, which may form by means of mergers, tidal interactions, or internal instabilities. Examples include bars, transient spirals, rings (e.g. NGC 1433, NGC 6300; Buta et al. 2001), lopsidedness (Saha et al. 2007), and bending waves manifested in the form of a warp (e.g. ESO121-G6, NGC 4565; Saha et al. 2009) or corrugation of the disk mid-plane (e.g. in IC 2233, Matthews & Uson 2008; in NGC 5907, Saha et al. 2009). These non-axisymmetric instabilities often drive the disk away from equilibrium and are generally time dependent. Each of these features is, in principle, a potential source of heating of the disk stars. Since the interactions of these features with the surrounding live dark matter halo are not well understood, it is difficult to disentangle the signature of each of these non-axisymmetric modes in the net heating of a group of stars. However, some progress has been made, especially for our Galaxy, where it has been shown from the analysis of the Hipparcos data that a gap exists in the Galactic UV plane due to the 2:1 resonance with the bar (Dehnen 2000).
In the following, we study the nature of the vertical heating of the disk stars due to bar growth and transient spirals. We also briefly discuss the effect of halo noise and the corrugation and/or bending waves on the net heating process. We carry out a series of N-body simulations to first systematically identify the sources of disk heating in our model galaxies which are free from tidal interaction. About 70 simulations have been performed covering a wide range of the above mentioned simulation space and a statistical attempt to characterize and understand the nature of vertical heating in various regions of the disk has been carried out.
Vertical heating due to bar growth
Although most of the galaxies in our sample are radially hot, they form a bar at the center during their time evolution. Some of the model galaxies form a bar within about 3 to 4 rotation time scales, while others take more than about 10 rotation time scales to reach nearly the same amplitude. A bar forms even in galaxies characterized by Q ∼ 3.7 (model mk55) in our simulation, implying that it is difficult for a rotating disk within a live dark matter halo to avoid forming a bar at the center. Hence, bar formation appears to be quite common and generic in disk galaxies, and a live dark matter halo plays a significant role. As shown by Athanassoula (2002), a live dark matter halo supports, rather than suppresses, the bar instability in the disk. In contrast, bar formation in the disk can be suppressed in the presence of a non-responsive, rigid dark matter halo (Hohl 1976). The growth of the bar depends on the initial conditions and proceeds via non-linear processes as the disk stars interact with the live dark matter halo particles. The nature of this interaction is yet to be completely understood, although recent N-body simulations by Dubinski et al. (2009), Sellwood & Debattista (2009), and Klypin et al. (2009) have revealed insights into the nature of bar dynamics in disk galaxies. For our choice of parameters and initial conditions, we do not find any unique trend for the bar growth in our simulations. The sample of galaxies in our N-body simulations differs from typical galaxy models studied previously (where Q is normally held constant throughout the disk, or Q ≤ 2.0 in general). Overall, we observe that the growth of bars in our sample falls into two broad, distinct categories. In the first category, the bar grows quickly to a peak amplitude within ∼ 5 rotation time scales, which is about an order of magnitude less than τ_sec, and nearly reaches saturation or starts growing again (for convenience, we call these type-I bars).
In the second category, the bar continues to grow but slowly as compared to type-I and does not show any tendency to saturate (we call these type-II bar). Type-II bars grow on a secular evolution time scale to reach the same amplitude as in type-I. These two main features of bar growth are illustrated in the appropriate figures below, although there are some intermediate cases present in our simulation. In many cases, the bar undergoes the well known buckling instability (Combes & Sanders 1981;Pfenniger & Friedli 1991;Raha et al. 1991;Debattista et al. 2006;Martinez-Valpuesta et al. 2006, and references therein) following which the bar takes the form of a peanut and/or X shaped bulge. The buckling instability driven by the anisotropy in the velocity dispersions is an important phase of the bar evolution as it leads to the formation of a pseudo-bulge (Kormendy & Kennicutt 2004) directly influencing the secular evolution of disk galaxies. However, the condition for the onset of this instability and the exact underlying cause of this phase remains a matter of further investigation. For example, recent work by Martinez-Valpuesta & Shlosman (2004) has shown that the buckling instability weakens the bar and the resulting peanut shaped phase lasts for several Gyrs in their simulation. We also find that the peanut shaped phase of the bar is long lasting, typically surviving till the end of our simulation (> 5 Gyr) producing boxy/peanut/X-shaped structure in the central region. The morphological evolution of the disk in model mk97 is depicted through Fig. 3 and Fig. 4. These snapshots are taken at time T = 0 and T = 5.4 Gyr which denote the beginning and the end of the simulation respectively. The initially axisymmetric disk evolves to form a bar which eventually transforms into a peanut morphology. Inspection of the evolution of the bar-amplitude reveals a diverse behavior while it is in the peanut phase; in some cases the amplitude continually grows and in others, it saturates. While the bar strength continues to increase, the evolution of its pattern speed is not always correlated in the sense that the pattern speed does not always decrease. In particular, the bar pattern speed in some cases remains nearly unchanged with time while its amplitude grows (as in mk17, mk33, mk104). A similar anti-correlation between the bar growth and pattern speed evolution has been discussed by Valenzuela & Klypin (2003) and more recently by Villa-Vargas et al. (2009). However, there are also the 'normal' cases where the pattern speed decreases with time, while the bar grows to a higher amplitude, by losing angular momentum to the halo via dynamical friction. We notice that the bar growth is slow (typically a factor of 5 to 10 less) in cases where the pattern speed is observed to remain nearly constant in time. Overall, the emerging diverse behaviour of the bar characteristics precludes detailed understanding for inferring gross physical properties (e.g. bar size, velocity dispersions) of the disk based on a few simulation results. As a consequence, the results of a large number of simulations have been used to approach the problem in a statistical sense.
For each of the model galaxies, we compute the m = 2 Fourier component (A_2) of the particle distribution from each N-body snapshot using the following formula:

A_m(r, t) = (1/N_r) Σ_{j=1}^{N_r} e^{i m φ_j(r, t)}

In the above equation, φ_j(r, t) is the phase of the j-th particle at position r and time t, and N_r represents the number of particles used. Since A_m is a complex number, the modulus of the m-th Fourier component is obtained as |A_m| = [ℜ(A_m)² + ℑ(A_m)²]^{1/2}, and the corresponding phase is given by φ_m = (1/m) tan⁻¹[ℑ(A_m)/ℜ(A_m)] (m ≠ 0). In general, the radial variation of A_2 shows a pronounced peak corresponding to a bar in the central region of the disk and often a second peak indicating the presence of a spiral perturbation in the outer parts of the disk. The variation of the peak amplitude of A_2 in the inner region with time reveals the growth of a bar in the galaxy. The amplitude A_2 is normalized by the amplitude of the axisymmetric mode (the m = 0 Fourier component); in all subsequent figures we present the smoothed, normalized A_2. For a wide range of model bars, the following linear regression model is fit to the time evolution:

log A_2(r, t) = log A_2^0 + β log(t/τ_orb)

where the slope β is the logarithmic time derivative of the bar amplitude. This linear model in the log-log plane translates to the bar amplitude evolving as A_2(r, t) = A_2^0 (t/τ_orb)^β. In the following, β_b represents the growth rate of a bar and β_s the growth rate of a spiral (in the outer region).
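A compact sketch of this measurement is given below; the toy particle distribution and function names are our illustrative assumptions.

```python
import numpy as np

def fourier_mode(x, y, m=2):
    """Amplitude and phase of the m-th Fourier mode of a particle distribution.

    x, y : in-plane particle coordinates of the selected radial bin.
    With the 1/N_r normalization used here, A_0 = 1, so |A_m| is already
    normalized by the axisymmetric mode.
    """
    phi = np.arctan2(y, x)                # azimuthal phase of each particle
    A_m = np.mean(np.exp(1j * m * phi))   # (1/N_r) sum_j exp(i m phi_j)
    amplitude = np.abs(A_m)
    phase = np.angle(A_m) / m
    return amplitude, phase

# Example: a toy bar-like distribution elongated along the x-axis
rng = np.random.default_rng(2)
theta = rng.uniform(0.0, 2.0 * np.pi, 100000)
r = rng.exponential(1.0, 100000)
x = 1.5 * r * np.cos(theta)               # stretch along x mimics an m = 2 distortion
y = r * np.sin(theta)
A2, phi2 = fourier_mode(x, y, m=2)
```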
The time evolution of the normalized bar amplitude in model mk97 is illustrated in Fig. 5. The bar amplitude nearly saturates through a series of high-frequency oscillations while undergoing the buckling phase. Such oscillations have also been reported in previous simulations by Valenzuela & Klypin (2003) and Villa-Vargas et al. (2009). Our estimate shows that the typical period of such oscillations is ∼ τ_orb/2, or less than τ_orb in general. This roughly implies that the amplitude oscillates with a frequency close to κ, the radial epicyclic frequency. The vertical velocity dispersion (σ_z) of the stars in the inner region of the disk is plotted as a function of time in Fig. 6 for model mk97. From Fig. 6, it can be seen that σ_z changes its slope at ∼ 12 rotation time scales, nearly following the overall amplitude evolution of the bar. The best-fit parameters of the heating model (eq. [4]) are σ_0 = 0.09, σ_1 = 1.7 × 10⁻⁴, and α_in = 2.31. Using these parameters in eq. (5), we obtain τ_h = 10.56 τ_orb ≪ τ_relax. The heating time scale due to the bar growth is found to lie in the range τ_dyn < τ_h ≪ τ_relax. We have also applied the diffusion model given by eq. (7) to the inner region of mk97 and find that the heating model given by eq. (4) provides a better fit than the diffusion model. In particular, the diffusion coefficient D_z = 5.9 × 10⁻⁴ and the exponent γ = 109 are found to be absurdly large. The resulting time scale is τ_d = 1.5 × 10³ τ_orb.
Interestingly, the temporal variation of σ_z does not reflect any such oscillation, while the radial velocity dispersion oscillates with nearly the same frequency as the bar amplitude beyond about 10 rotation time scales.
The growth of a bar in model mk112 is similar to that in model mk97, although the disk in model mk112 is hotter (Q = 2.47) than in mk97. Fig. 7 shows the normalized bar amplitude, and the time evolution of σ_z in the inner region of the disk is shown in Fig. 8. The best-fit parameters for model mk112 are found to be similar to those for mk97.
In the case of model mk107, the bar does not reach saturation and continues to grow, as shown in Fig. 9. The vertical velocity dispersion in the inner region of the disk grows more gradually (Fig. 10), following the evolution of the bar. Note that the overall increase in the vertical velocity dispersion is less than in models mk97 and mk112. We find that the heating model given by eq. (4) approximates the simulation data better than the diffusion model. The resulting diffusion coefficient, D_z = 2.2 × 10⁻⁴, is similar to that of model mk97, and τ_d = 4.4 × 10³ τ_orb. For models mk97, mk107, and mk112, we find that, in general, the diffusion model overestimates the dispersion values in the inner region of the disk.
In all our simulations, the vertical velocity dispersion increases following the growth of the bar. Note that all the three galaxy models (e.g. mk97, mk107 and mk112) undergo a phase of buckling instability during their evolution, but the disk stars are continually vertically heated prior to and during/after the buckling instability phase. This fact suggests that the vertical heating of the disk stars in our simulations is not strictly related to the bar buckling phase (e.g. the disk in model mk17 does not go through buckling instability within the simulation time period but still shows continuous heating in the vertical direction). However, we note that the rate of vertical heating during the bar buckling phase is generally higher than in phases without such buckling.
In the presence of a rotating bar (m = 2) potential, the planar motion of the disk stars can couple with the vertical oscillation (parametric resonance) at the locations of the vertical resonances ν/(Ω(r) − Ω_b) = m/n, where ν is the vertical frequency, Ω_b is the bar pattern speed, and n is the number of vertical oscillations. For an m = 2 perturbation, n = ±1 represents the vertical Lindblad resonances. For the n = ±2 resonances, the retrograde orbits in the inner region of the galaxy couple efficiently with the vertical motion through the 'Binney instability strips' (Binney 1981). Pfenniger (1985) has shown that these 2:1 resonances inside the bar indeed play a significant role in trapping stars, leading to rapid diffusion in the vertical direction. In the presence of a growing bar potential, the disk stars are subjected to a complex instability whose main effect is to promote fast diffusion of the low angular momentum, high-z-amplitude, energetic stars into the nearby halo. The instability is shown to grow as the bar perturbation grows. Basically, the presence of a growing bar potential (i.e. a time-dependent potential in the galaxy) breaks the time-invariance symmetry (because bar growth is an irreversible process) and hence the Jacobi integral, which in turn enhances the ergodicity considerably, allowing the stars to visit most of the chaotic phase space (Pfenniger 1984, 1985). This provides a basic understanding of the heating mechanism in the presence of a growing bar. Of course, in N-body simulations the nature of the bar growth is quite diverse, as mentioned at the beginning of this section. The resonant heating of the disk stars in the presence of a rotating (with changing pattern speed) and growing (in amplitude) bar potential is probably more complex, leading to shifting locations of the resonances in the inner part of the galaxy, and at the same time promising in the context of vertical heating in the disk. It is natural to examine how the vertical heating of the disk stars is correlated with the growth of the bar, the nature of the vertical heating due to a growing bar, and the typical exponent of the vertical heating. In section 6, we seek possible correlations between the bar growth rate (β_b) and the inner heating exponent (α_in).
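As an illustrative sketch, the resonance radii can be located from tabulated frequency profiles by a sign change of the resonance function; the profile arrays and the pattern speed are assumed inputs from the model, not quantities defined in the text.

```python
import numpy as np

def vertical_resonance_radii(r, Omega, nu, Omega_b, m=2, n=1):
    """Locate radii where nu / (Omega - Omega_b) = m / n.

    r, Omega, nu : sampled radius, angular frequency, and vertical
                   frequency profiles of the model.
    Omega_b      : bar pattern speed.
    """
    f = n * nu - m * (Omega - Omega_b)   # zero exactly at the resonance
    crossings = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
    radii = []
    for i in crossings:
        # linear interpolation between the bracketing samples
        w = f[i] / (f[i] - f[i + 1])
        radii.append(r[i] + w * (r[i + 1] - r[i]))
    return np.array(radii)
```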
Outer disk heating and transient spirals
Strong two-armed spirals are difficult to excite and form in radially hot disk galaxies. Using the Fourier decomposition of the disk surface density, however, many transient spirals (often with no steady pattern speed) are detected in our model galaxies. These transient spirals are often diffuse and generally weak in radially hot disks (Fig. 5 and Fig. 7), as compared to the relatively strong spiral formed in the radially cold disk model mk107 (see Fig. 9). These spirals are primarily confined to the outer region of the stellar disk, and we examine whether they play a role in the vertical heating of the disk stars in this region. In Figs. 11, 12, and 13, we present the time evolution of the vertical velocity dispersion in the outer region of the disk for models mk97, mk112, and mk107 respectively. The shape of the heating curve in model mk107 (concave) is quite different from that of the other two models (convex). In the later stages of the evolution, σ_z in model mk107 increases faster than in mk97 and mk112 (where σ_z nearly saturates). Using the best-fit parameters of our heating model fitted to the outer region of mk97, we find τ_h = 100 τ_orb, where σ_0 = 0.025, σ_1 = 8.6 × 10⁻³, and α = 0.42. Applying the diffusion model in the outer region of mk97, we obtain a diffusion coefficient D_z = 6.35 × 10⁻⁸ and γ = 0.214, and the corresponding time scale τ_d = 1.3 × 10⁴ τ_orb, which is similar to the relaxation time scale (τ_relax) quoted in section 4. In the case of model mk107, the best-fit parameters of our heating model yield τ_h = 33.7 τ_orb; the diffusion model in this case gives D_z = 8.48 × 10⁻⁴ and τ_d = 1.15 × 10³ τ_orb. These time scales indicate that the process of vertical heating in the outer region of the disk is slower than in the inner region. We find that in the presence of a relatively strong spiral in the outer region, disk stars are heated to a greater degree in the vertical direction and the heating rate is faster; the actual heating time scale τ_h (in Gyr) in mk107 is lower by a factor of 6 than in models mk97 or mk112. Although the exact mechanism by which stars are vertically heated in the outer region is not clear, growing spirals definitely play a significant role.
The outer region of the stellar disk is not simply described by the transient spirals, as corrugation waves or mild warps of the disk midplane are often present. It is known that the transient spirals can heat the disk efficiently in the radial direction, but poorly in the vertical direction. However, GMCs could redistribute the random energy of the stars in the vertical as well as the radial direction through scattering processes. In the absence of GMCs or massive objects (such as halo black holes or dark clusters) in the outer parts of the galaxy, one seeks a process whereby the heating due to spirals or some other feature in the disk is redistributed in the vertical direction. Possible candidates include corrugation waves, large scale warps of the stellar disk or, in general, vertical motion of the disk stars coupled with the transient spirals. The growth and development of these bending waves, or rather warps, is known to depend to a large extent on the noise in the N-body system. We have verified that as the number of particles in the system is increased (say by a factor of 10), the growth of these bending waves is significantly reduced because of the substantial decrease in the Poisson noise arising from particle discreteness. With an average mass resolution of 10^5 M ⊙ , only weak bending waves or mild warping of the stellar disk are present; comparatively strong warps can arise with a poorer mass resolution of 10^6 M ⊙ . Thus, it is unlikely that the bending waves alone can heat the disk stars in the outer region at higher mass resolution. However, the possibility exists that the non-linear coupling between the weak transient spirals and the weak bending waves (Masset & Tagger 1997) could redistribute energy in the vertical direction. As mentioned earlier, an interesting behaviour of the bars in radially hot disks inside a live dark matter halo was noticed: the type-I and type-II bars trigger transient spirals with distinctive characteristics in the outer disk. In the case of a type-II bar, the transient spiral continues to grow in amplitude, and this process continues as the bar grows (see Fig. 9). This is found to be the case even in one of our hottest disk models (mk55, for which Q = 3.73, see Table 1). Because of the growing transient spirals, the radial heating dominates over the vertical one throughout the disk. On the other hand, the amplitude of the spirals nearly saturates in models mk97 and mk112 (see Fig. 5 and Fig. 7), where the bar is of type-I. Note that in this case the radial heating nearly saturates throughout the disk (see Fig. 14 and Fig. 15). Overall, there is a parallel development in the inner and outer regions of the disk, whereby a bar and a spiral grow respectively.
Anisotropic heating
An important aspect of the heating of disk stars is its anisotropic nature, as can be inferred from the observed ratio of the vertical to radial velocity dispersion, σ z /σ r < 1 (as in NGC 488; Gerssen et al. 1997), which is also found for the solar neighbourhood in the Galaxy. This fact raises many questions. For example, why do disk galaxies have σ z /σ r < 1? Can the heating mechanism preserve an initial σ z /σ r value? What causes the stars to be heated anisotropically?
To gain some insight into these questions, we illustrate in Fig. 14 and Fig. 15 the evolution of the radial and vertical velocity dispersions in the inner and outer regions of the model galaxy mk112. It can be seen that the overall nature and the rate of heating are different in the radial and vertical directions. In the inner region of mk112, the vertical heating is more effective than the radial heating, and in the outer region the radial heating saturates while the vertical heating continues slowly. We find this trend in most of the galaxy models which are radially hot and in which the bar evolves to saturation. On the other hand, for model mk107, which is relatively cold (Q = 1.66), the radial heating dominates over the vertical heating throughout the disk due to the presence of a relatively strong spiral perturbation. It is clear that the non-axisymmetric structures (such as bars and spirals) drive this anisotropic heating in their respective regions of dominance.
Halo noise
The role of a live or responsive dark matter halo has been emphasized by Athanassoula (2002) in the context of bar formation in N-body simulations. Unlike a rigid dark matter halo, a live halo facilitates the growth of the bar in an N-body disk, likely through resonant interactions with the disk particles. On the other hand, the finite number of particles employed in the N-body simulation can itself be an important source of disk heating, since two-body relaxation processes (Velazquez 2005) can then heat the disk stars efficiently. It is highly desirable to eliminate or, at least, minimize the Poisson fluctuations in the halo particle distribution by increasing the number of halo particles as far as the available computational resources permit. To quantify this effect, we performed four additional simulations of the galaxy model dmA (see Table 1) with different numbers of halo particles (N h ). There is an order of magnitude variation in the number of halo particles between the model with the poorest resolution (0.5 million (M) particles) and that with the highest resolution; the two intermediate simulations use 2 and 3 million halo particles. In Fig. 16 and Fig. 17, we show the four vertical heating curves in the inner and outer regions respectively. The vertical heating curves nearly converge for models with N h > a few × 10^6 . This convergence with particle number in our N-body simulations is in accordance with the conclusions of the recent work by Dubinski et al. (2009) on bar formation and evolution. Comparison of Fig. 16 and Fig. 17 reveals that the effect of halo noise is more important in the central region than in the outer region. To quantify this trend, the time scales τ h in the inner and outer regions were determined. We find that an order of magnitude increase in the halo particle number increases τ h by at least a factor of 10 in the inner region and ∼ 2 in the outer region. In contrast, we find different behaviour when the temporal variation of the radial velocity dispersion in the disk is examined. These models are dominated by spiral arms in the outer region, and we find that increasing the number of halo particles (from 0.5 M to 5 M) makes the spiral arms even stronger. This results in a lower radial dispersion in models with fewer halo particles compared to models with more halo particles. In the case of model mk112, the spiral arms are rather weak, and the radial heating of the disk stars could, in principle, have resulted from the halo noise. However, we find that the radial dispersion remains nearly constant over more than 10 rotation time scales (see Fig. 14 and Fig. 15). Based on these facts, we conclude that the noise due to the halo (with N h > a few million particles) is neither an effective nor the dominant source of vertical heating in the outer region of the disk.
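For orientation, the standard two-body relaxation estimate t relax ∼ (0.1 N/ln N) t cross (e.g., Binney & Tremaine) gives a feel for how quickly Poisson-noise heating fades as N h grows; the short sketch below evaluates it for the particle numbers used here, with an arbitrary illustrative crossing time.

```python
import numpy as np

# Rough sketch of the two-body relaxation estimate
# t_relax ~ (0.1 * N / ln N) * t_cross, used to gauge how halo
# Poisson noise weakens as the particle number N_h increases.

t_cross = 1.0  # crossing time in units of tau_orb (illustrative value)
for N_h in (5e5, 2e6, 3e6, 5e6):
    t_relax = 0.1 * N_h / np.log(N_h) * t_cross
    print(f"N_h = {N_h:.0e}: t_relax ~ {t_relax:.2e} tau_orb")
```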
In summary, we find a positive correlation between the growth of the bar at the galactic center and the vertical heating taking place in the central few kpc region. The formation of a bar is almost unavoidable in a stellar disk embedded in a live dark matter halo over the wide range of physical parameter space that has been explored. Bar formation occurs even in a very hot galaxy, with its onset delayed at higher numerical resolution. In addition, analysis based on the four simulations of model dmA reveals that the strength of the spiral arms also increases with higher mass resolution. It has been noted that the vertical heating exponent is generally larger in the outer regions of a galaxy in the presence of strong spirals. Hence, the transient spirals in the 3D disk, which are common in our simulations, play a significant role not only in the radial heating but also in the vertical heating of disk stars.
CORRELATIONS
In the absence of a clear understanding of the physical processes responsible for the vertical heating of disk stars, we approach the problem by seeking possible correlations between the average physical properties (e.g., bar strength versus vertical heating rate) in our fairly large sample of N-body galaxies. To obtain a measure of the strength and slope of possible relations, we measure the statistical correlation between two variables x and y by Pearson's product-moment correlation coefficient, defined as

Corr(x, y) = cov(x, y)/(δ x δ y ), where cov(x, y) = (1/N s ) Σ i (x i − x̄)(y i − ȳ).

In the above equation, cov represents the covariance and δ denotes the standard deviation of the variable in the subscript. The summation is carried over the sample size N s . According to the Cauchy-Schwarz inequality, the magnitude of the correlation cannot exceed 1. A positive value of the correlation indicates an increasing linear relationship, and a negative value a decreasing linear relationship.
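A minimal sketch of the coefficient follows, applied to dummy arrays standing in for the bar growth rates and inner heating exponents (the numbers are not the measured values):

```python
import numpy as np

# Minimal sketch of Pearson's product-moment correlation as defined
# above. The arrays below are dummy data, not the measured exponents.

def pearson(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

beta_b = np.array([0.2, 0.5, 0.8, 1.1, 1.4])    # bar growth rates (dummy)
alpha_in = np.array([0.5, 0.7, 0.9, 1.2, 1.3])  # inner heating exponents
print(f"Corr(alpha_in, beta_b) = {pearson(alpha_in, beta_b):.3f}")
```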
In Fig. 6, Fig. 8, and Fig. 10 the time evolution of the vertical velocity dispersion in the inner region of the disk is illustrated. In comparison with the evolution of the bar, it appears that the vertical velocity dispersion of the stars roughly follows the time evolution of the bar. Thus, for individual galaxies, the vertical heating is interlinked with the growth of the bar. However, the exact relationship between the two, and the reason why the vertical heating of the disk stars closely follows the growth of the bar, is unknown. In the absence of a definitive answer, we seek to determine whether the trend found in individual galaxies is generic. For example, does it depend on the initial conditions, and is it robust when examined on a large sample of galaxies with different initial conditions? We note that these are difficult questions to answer because galaxies with varying initial conditions evolve quite differently. As pointed out recently by Sellwood & Debattista (2009), a live dark matter halo interacts with the disk in a more complicated way than previously envisioned.
In Fig. 18, the dependence of the vertical heating exponent (α in ) in the inner region of the galaxy on the growth rate of the bar (β b ) is illustrated for our sample of model N-body galaxies. The figure clearly shows that the heating exponent in the inner region of the disk is strongly positively correlated with the growth rate of the bar, with Corr(α in , β b ) = 0.647. As pointed out earlier, when examined case by case, greater vertical heating occurs in galaxies which host stronger bars (e.g., models mk56A, mk52, mk112). Our analysis suggests that as the strength of the bar increases, the disk stars continue to be heated vertically, irrespective of the evolution of the bar's pattern speed. The heating exponents tend to be higher in models with comparatively less massive dark matter halos, whereas for more massive halos the vertical heating exponent (α in ) tends towards unity. On the other hand, in the absence of a strong bar, the heating exponent is lower, α in ∼ 0.5.
A weak correlation is found between the growth of spirals and the heating exponent in the outer region, as can be seen from Fig. 19, with Corr(α out , β s ) = 0.177. Its low value indicates that more than one mechanism may be operating. Although the correlation between the heating exponents and the growth exponents of the spirals is not strong, higher exponents are associated with the presence of stronger spirals (e.g., models mk43, mk55, mk64), and the overall vertical heating is low when the spiral arms are more diffuse and weak (e.g., models mk5, mk51A, mk48). The vertical heating in the outer region is less extensive than in the central region. The heating exponents in the outer region of many models are very different from α out = 0.5, indicating that the disk stars are not heated purely by massive objects in the halo or by the transient spirals; however, the exact source(s) responsible for the vertical heating in the outer region have yet to be identified.
On the other hand, we find a relatively strong correlation (see Fig. 20) between the strength of the bars and the strength of the spirals in the disk, with Corr(β b , β s ) = 0.565. This correlation indicates that in the presence of a bar there is always at least a weak spiral present in the outer disk. It may also suggest that such weak transient spirals are triggered by the presence of a bar in the central region of the disk. However, the vertical heating in the inner region and that in the outer region are not well correlated. As the correlation between the two is very weak, Corr(α in , α out ) = 0.10, the sources of vertical heating in the inner and outer regions of the disk are likely to differ.
DISCUSSION
We have studied the nature of the vertical heating of disk stars in a large sample of radially hot galaxies, as a function of halo mass and σ z /σ r spanning a wide range. For an individual galaxy, we find that the stars in the inner region are heated vertically as long as a bar continues to grow. Such a correlation is quite generic and is found to be robust when examined on a larger sample of model galaxies constructed from different initial conditions. Bars form in galaxies with diverse initial conditions but do not evolve in a comparable fashion, likely reflecting their non-linear interaction with the surrounding dark matter distribution. However, broadly speaking, we find two kinds of bars: type-I (growing rapidly and approaching saturation) and type-II (growing slowly with no evidence of saturation), as mentioned in section 5.1. Heating due to the growth of a bar differs from that due to a diffusion process, since it occurs on a time scale of a few rotations in the central region of the disk. The diffusion time scale in a stellar system is similar to the relaxation time scale, because diffusion mainly changes the stars' velocities via two-body encounters. Thus, from our analysis, the diffusion approximation is likely to be inappropriate for the vertical heating of disk stars in the central region.
Growth of boxy/peanut bulge
Boxy/peanut bulges are commonly believed to originate from the galactic bar via vertical heating (Combes et al. 1990; Pfenniger & Norman 1990; Pfenniger 1993; Athanassoula 2005; Athanassoula & Martinez-Valpuesta 2009, and references therein). The growth of a bar in an N-body simulation with a live halo is a complex process because of the non-linearity of the growth and its interaction with the dark matter halo particles. The growing bars in our simulations are, at times, associated with a nearly constant pattern speed and, at other times, with a decreasing pattern speed. Such a growing bar is likely to drive the inner disk to a non-equilibrium state, and depending on its strength and pattern speed, the inner stellar disk may evolve on ∼ a few dynamical time scales. Fundamentally, the growth of the bar facilitates the capture of disk stars into its various vertical resonances, e.g., the 1:1, 2:1 and 4:1 vertical ILRs (inner Lindblad resonances), which are often present in the bar region of the disk (Combes et al. 1990). As shown by Pfenniger (1985), the 4:1 vertical resonance becomes stronger as the bar grows, capturing disk stars to form the boxy bulge seen in many N-body simulations, although such studies do not explicitly include the interaction of the disk stars with a live dark matter halo. Furthermore, the presence of weakly dissipative orbits in a barred galaxy potential has been shown to produce appreciable z-amplification as resonances are crossed, hence facilitating the growth of boxy bulges (Pfenniger & Norman 1990). For example, a simple Hamiltonian model of the growth of a bar perturbation (Quillen 2002) illustrated the effect of resonant capture of stars into the 2:1 vertical ILR, lifting them out of the disk plane to explain the growth of the boxy/peanut bulge. One common result amongst these studies is that the vertical resonances (when present) are efficient in capturing disk stars, facilitating their motion perpendicular to the disk plane, which is essential for the growth of a boxy bulge. We note that, in this context, the disk stars in our simulations are vertically heated not only at the resonance locations but throughout the bar region of the disk, and often with similar magnitude. This could imply either that resonant heating is not the only mechanism responsible for the overall vertical heating, or that broad resonances reflecting strong chaos are present throughout the bar region (Pfenniger 1985). Although a growing bar with or without a decreasing pattern speed could promote resonance sweeping through the bar region, it appears unlikely for the disk to accommodate the latter scenario, making it unclear how the non-resonant stars are vertically heated. In any case, it would be useful to gain further insight into how the non-resonant stars might contribute to the growth of the bulge.
Superthin galaxies
The disks of many galaxies in our sample are superthin and dark matter dominated. From the galaxy formation point of view, it is extremely important to understand how these superthin galaxies form and evolve. Can they preserve their initial superthinness? In the hierarchical structure formation scenario, a galaxy grows via a large number of mergers and, as a result, N-body simulations of galaxy formation underestimate the number of thin galaxies. So it remains a puzzle how superthin galaxies remain superthin. Assuming hydrostatic equilibrium, it can be shown that the disk half-thickness scales as h z ∝ σ z /√(Gρ mid ), where ρ mid is the mid-plane volume density. Because of various unavoidable heating processes, σ z always increases with time, and hence so does the disk thickness h z . Thus, either superthin galaxies have somehow maintained their initial thickness, or they evolved from an even thinner state to the present-day superthin state. In the absence of environmental influences, the evolution of these galaxies would depend on the non-axisymmetric structures produced in the disk through internal instabilities. Our simulations show that non-axisymmetric structures such as bars or spirals form spontaneously in the disk and heat the disk stars in their respective regimes of dominance. We briefly mention here that, normally, in radially hot and vertically very cold galaxies (e.g., mk25, mk27, mk48, for which the initial h z /R d ∼ 0.01 − 0.02), the disk thickness remains in the superthin regime, with a final half-thickness h z /R d ∼ 0.06 after 5 to 6 Gyr of evolution. Most of these galaxies form a type-II bar, but there are exceptions. If the initial thickness is greater than 0.03 or 0.04, then the final thickness evolves into the 'thin' (h z /R d > 0.1) regime.
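A back-of-the-envelope sketch of the hydrostatic scaling h z ∝ σ z /√(Gρ mid ) follows; all numerical values are arbitrary illustrations, and only the proportionality matters.

```python
import numpy as np

# Back-of-the-envelope sketch of the hydrostatic half-thickness
# scaling h_z ~ sigma_z / sqrt(G * rho_mid). Values are illustrative.

G = 4.301e-6            # kpc (km/s)^2 / Msun
sigma_z = 10.0          # vertical velocity dispersion, km/s
rho_mid = 5.0e7         # mid-plane volume density, Msun / kpc^3

h_z = sigma_z / np.sqrt(G * rho_mid)
print(f"h_z ~ {h_z:.2f} kpc")   # doubling sigma_z doubles h_z
```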
Common heating mechanism?
The vertical heating in the outer region is comparatively more complicated because of the presence of spiral arms and, at times, mild warps or corrugations. The presence of these perturbations complicates the identification of the heating mechanism, since more than one may operate simultaneously. It is known that a stationary spiral pattern does not lead to heating (except at resonance locations), and transient spirals heat more effectively in the radial direction. However, our analysis indicates that the transient spiral arms also grow (e.g., model mk107), i.e., they are time dependent, and hence a situation similar to that in the central region may prevail in the outer disk. In both regions, the disk stars respond to time-dependent potentials. Thus, a common mechanism, in which the growing bars and spirals interact resonantly with the dark matter halo particles and contribute to the vertical heating in their respective regions of dominance, may be operating.
CONCLUSIONS
Our investigation of the 70 N-body models of disk galaxies reveals a positive correlation between the growth of a bar and the vertical heating of the disk stars in the central region. A growing bar seems to contribute significantly to the vertical heating of the disk stars. Overall, the heating exponent α in ≥ 1 for the various galaxy models studied. The disk stars in the central region are generally heated in the vertical direction by a factor ∼ 3 to 4 above their initial values over 5 to 6 Gyr.
We find that transient spirals are always present whenever a growing bar is present at the center. Most of these spirals are weak and diffuse, likely because the disks in our simulations are relatively radially hot. The numerical results show that the amount of vertical heating in the outer region is lower than in the inner region, with the disk stars in the outer region generally heated vertically by a factor of ∼ 2 above their initial values over a time period of ∼ 4 Gyr (if a growing spiral is present) or longer (otherwise).
From the analysis of our simulations, we find that, in general, in radially hot galaxies the vertical heating in the central region dominates over the radial heating, while in the outer region the relative importance is reversed. In contrast, radial heating dominates over vertical heating throughout the disk in radially cold galaxies. We conclude that heating due to non-axisymmetric structures appears to be the most promising avenue for the disk heating problem in general.
Our simulation results suggest that there is likely a common physical process through which the disk stars are heated vertically, active throughout the disk from the inner to the outer region. Such a process should be investigated in detail by studying the vertical motion of the disk stars in the presence of a time-dependent perturbing potential, which could arise due to a growing bar or spiral arms inside a live dark matter halo. | 2010-08-04T14:27:10.000Z | 2010-08-04T00:00:00.000 | {
"year": 2010,
"sha1": "ada9ee1f760bd1fcc1d3d54c3ae236747752cd94",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1008.0787",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ada9ee1f760bd1fcc1d3d54c3ae236747752cd94",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
225618227 | pes2o/s2orc | v3-fos-license | Research Progress of Electromagnetic Interference Suppression in Power Converters
In this paper, the related concepts of Electromagnetic Interference (EMI) and Electromagnetic Compatibility (EMC) were expounded; the research progress of EMI suppression methods for power converters, namely changing the circuit topology, improving the control strategy and optimizing the driving circuit, was summarized; and the future development trend of EMI suppression for power converters was pointed out.
Introduction
A power converter can be defined as a multi-port circuit composed of semiconductor switching devices and auxiliary devices (capacitors, inductors, transformers, etc.). Its main function is to realize the energy conversion between two or more subsystems in the expected way according to preset performance indices. Generally, power converters can be classified according to the types of electrical subsystems they connect, such as DC-DC, DC-AC, AC-DC and AC-AC converters. With the development of power semiconductor device technology, thyristors, bipolar transistors, power MOSFETs and other devices have driven the improvement of the circuit topologies of low and medium power converters (several W to tens of kW). In the field of high voltage and high power applications, the invention of these devices has also given birth to new topologies that make full use of the characteristics of the devices. The EMC of power converters is a comprehensive subject, involving device physics, circuit theory, electromagnetic field theory, testing technology, etc. At present, the research on the EMC of power converters is still at an early stage. As far as EMI is concerned, there is no essential difference between the EMI generated by a power converter and that generated by other power electronic equipment or systems. However, owing to the topology and components of the power converter, its electromagnetic interference has its own characteristics. The power semiconductor devices in a power converter work in the switching state, which means that their voltage and current waveforms have wide-band spectra; these waveforms are the source of electromagnetic interference. With the improvement of power converter performance, the switching frequency and power density of converters keep increasing, resulting in an ever more complex electromagnetic environment inside the equipment. The research on the EMC of power converters has thus become one of the hotspots in the field of power electronics. In this paper, the related concepts of EMI and EMC were expounded, the research progress of EMI suppression methods for power converters was summarized, and the future development trend of EMI suppression for power converters was pointed out.

Since Faraday proposed the law of electromagnetic induction, human society has entered a new stage of electric power application. The invention and application of various electrical equipment have greatly promoted the development of society. Some equipment needs to radiate electromagnetic energy into the environment in normal operation, such as radio, radar and navigation equipment; other equipment unintentionally radiates electromagnetic energy into the environment while completing its normal tasks, such as ignition systems, converters and some control equipment.
Related concepts of electromagnetic interference and electromagnetic compatibility

The sum of the electromagnetic phenomena existing in the working environment constitutes the electromagnetic environment, which is an important part of modern society and also the basis for analyzing electromagnetic interference and electromagnetic compatibility. Electromagnetic disturbance refers to any electromagnetic phenomenon that may degrade the performance of a device, equipment or system in the electromagnetic environment. The degradation of device, equipment or system performance caused by electromagnetic disturbance is called electromagnetic interference (EMI) [1] - [4]. According to this definition, whether in a complex system or a simple one, electromagnetic interference requires three elements:
a. Interference source: the device, equipment or system generating the electromagnetic disturbance;
b. Sensitive equipment: the device, equipment or system affected by the electromagnetic disturbance;
c. Coupling channel: the transmission path of the electromagnetic disturbance from the interference source to the sensitive equipment.
Generally, according to its propagation mechanism, electromagnetic interference is divided into two categories:
a. Radiated electromagnetic interference, caused by electromagnetic waves propagating through space, such as from broadcasting, radar equipment, signal generators and ignition systems;
b. Conducted electromagnetic interference, caused by coupling through wires, capacitive devices or inductive devices, such as from various oscillators, relays and semiconductor devices.
The response characteristics of devices, equipment or systems to electromagnetic disturbance are called electromagnetic sensitivity. According to the propagation mechanism of the disturbance, sensitivity can likewise be divided into radiated sensitivity and conducted sensitivity. Following the definitions of electromagnetic interference and electromagnetic sensitivity, electromagnetic compatibility (EMC) means that a device, equipment or system can work normally in its electromagnetic environment while keeping the electromagnetic interference it causes to other equipment in that environment within the allowable range.
Research status and development trend
There is no essential difference between the EMI produced by a power converter and that produced by other power electronic equipment or systems. The generation of EMI requires three elements: an EMI source, an EMI propagation path and sensitive equipment. Therefore, research on EMI suppression methods can start from these elements, that is, reducing the intensity of the interference source and cutting off the transmission path. EMI filtering technology is an EMI suppression method based on cutting off the transmission path, and it is one of the most important and effective means of suppressing conducted electromagnetic interference. However, with the development of power converters towards high frequency, miniaturization and high power density, the EMI filter is limited by its volume and weight and no longer suits this trend. Theoretically, as long as the EMI emission intensity of the interference source is reduced, the EMI of the system can be effectively reduced. The voltage and current waveforms of the power semiconductor devices, i.e., the switching waveforms, are the source of electromagnetic interference in the converter. Hence, the circuit topology, the control strategy and the driving circuit can be changed to shape the switching waveform and thereby suppress the electromagnetic interference.
EMI suppression method of changing circuit topology
The main idea of improving the circuit topology is to eliminate the common mode voltage at the converter output through a symmetrical structure and to reduce the conducted interference emission on the input side of the device owing to the halving of the voltage change rate across the switching devices. In early research, scholars led by Julian proposed a three-phase four-leg scheme to eliminate the output common mode voltage of the three-phase power converter according to the "circuit balance principle" [1] - [3], as shown in Fig. 1. The basic idea of the method is to use an additional "auxiliary phase" to make the three-phase system symmetrical with respect to ground potential and, by adjusting the switching sequence, to keep the sum of the four bridge-arm output phase voltages as close to zero as possible, so that the common mode voltage is completely eliminated. Compared with the traditional three-leg power converter, its common mode EMI can be reduced by about 50%. Manjrekar [4] and Rao [5] proposed a scheme to eliminate the common mode voltage caused by the zero switch states by adding an auxiliary zero-state switch. This auxiliary zero-state synthesizer is economically very attractive and can also eliminate the common mode voltage on the induction motor side.

Fig. 1 Four-leg inverter with second-order filter and motor load

Although the three-phase four-leg and auxiliary zero-state synthesizer schemes can eliminate or reduce the common mode voltage of the system, their modulation strategies reduce the system voltage utilization compared with traditional power conversion. Therefore, Haoran Zhang [6] - [8] and other scholars proposed a double-bridge power converter for eliminating the common mode voltage and shaft current of the motor. By controlling the double-bridge power converter, it generates a standard balanced excitation of a three-phase double-winding induction motor and cancels the common mode voltage through this balanced excitation (magnetic system), thereby eliminating the shaft voltage and shaft current and fully reducing the leakage current and EMI emission intensity. To eliminate the common mode current of PWM motor drive systems, Consoli [9] and other scholars, building on common mode voltage compensation technology, proposed a common mode current elimination technique for the common DC bus of a multi-drive system composed of two or more power converters. This method is a new PWM modulation strategy that, given a proper connection of the two power converters, makes their common mode voltages change synchronously by controlling the state sequences of the two converters. To sum up, changing the circuit topology can effectively suppress the common mode EMI in the power converter, but it also complicates the original converter topology, and new control strategies need to be developed for the new topology.

EMI suppression method for improving control strategy

Because the two-level PWM modulation strategy inevitably makes the power converter output contain a common mode voltage, some scholars have proposed new modulation strategies that eliminate or reduce the common mode voltage by improving the inverter control mode or strategy. In early research, the Taipei scholar Yen Shi Lai [10] - [11] proposed a space vector modulation technique. This method exploits the fact that different combinations of vector states affect the output common mode voltage of the power converter.
Two vectors in opposite directions are used in a "flyback" manner to replace the role of the zero vector, so as to reduce the common mode voltage of the system and achieve the purpose of suppressing conducted EMI. Broe [12] and other scholars put forward a space vector modulation method in which the switches on the rectifier side and the inverter side change synchronously, which avoids generating common mode voltage pulses of the same magnitude as the DC bus voltage. The Korean scholar Hyeoum Dong Lee [13] modified the space vector modulation method of the fully controlled three-phase rectifier/inverter; based on the principle of moving the positions of the non-zero vectors to reduce the number of output common mode voltage pulses and their action time, the common mode voltage can be reduced. In addition, some scholars have proposed applying spread spectrum technology to the EMI suppression of power converters. As early as the mid-1990s, some scholars used the spread spectrum modulation technology of communication systems to reduce EMI emission. In early research on power converter EMI suppression by spread spectrum technology, Tse [14] and other scholars applied a random carrier technique to the conducted EMI suppression of off-line switching power supplies and analyzed it theoretically in the time and frequency domains. Santolaria [15] and other scholars theoretically analyzed the influence of the switching frequency range and the modulation profile on the EMI suppression effect of switching frequency modulation. Gonzalez [16] and other scholars applied periodic switching frequency modulation to the conducted EMI suppression of power converters and pointed out that random frequency modulation can only distribute the spectral energy evenly over the spectrum, while periodic frequency modulation can control the bandwidth of the EMI energy distribution. In recent years, Santolaria [17] and other scholars have analyzed the influence of switching frequency modulation on the output voltage of power converters. Dousoky [18] and other scholars have put forward new spread spectrum schemes, analyzed them theoretically, and realized the conducted EMI suppression of DC-DC power converters with an FPGA chip.
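To illustrate the spreading effect discussed above, the following minimal Python sketch compares the spectrum of a fixed-frequency switching signal with that of a periodically frequency-modulated one; all carrier, deviation and modulation values are arbitrary examples, not parameters from the cited works.

```python
import numpy as np

# Illustrative sketch of periodic switching-frequency modulation:
# sweeping the carrier around f_sw spreads the energy that fixed-
# frequency PWM concentrates at f_sw and its harmonics.

fs = 10e6                          # sampling rate, 10 MHz
t = np.arange(0, 20e-3, 1 / fs)    # 20 ms of signal
f_sw, df, f_m = 100e3, 10e3, 1e3   # 100 kHz carrier, +/-10 kHz @ 1 kHz

# Instantaneous frequency f_sw + df*cos(2*pi*f_m*t), built via the phase
phase = 2 * np.pi * (f_sw * t
                     + (df / (2 * np.pi * f_m)) * np.sin(2 * np.pi * f_m * t))
pwm_fixed = np.sign(np.sin(2 * np.pi * f_sw * t))
pwm_spread = np.sign(np.sin(phase))

for name, sig in (("fixed", pwm_fixed), ("spread", pwm_spread)):
    spec = np.abs(np.fft.rfft(sig)) / sig.size
    print(f"{name:6s}: peak spectral line ~ {20*np.log10(spec.max()):.1f} dB")
```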
EMI suppression method of optimized driving circuit
The dv/dt and di/dt caused by the high-frequency switching of the power semiconductor devices in a power converter are large, and their magnitudes and high-order harmonics directly affect the EMI emission intensity of the system. Moreover, for common switching devices, the magnitudes of dv/dt and di/dt at the switching instants are affected by the gate driving waveform and the gate stray capacitance [19]. Therefore, considering only the reduction of the EMI emission intensity, dv/dt and di/dt can be reduced by selecting an appropriate circuit topology and control strategy, so as to reduce the EMI emission intensity of the system. Lobsiger [20] and other scholars put forward a new IGBT driving circuit in which the di/dt and dv/dt of the IGBT are controlled separately through a PI controller. The drive circuit uses closed-loop control to compensate the non-linearity of the IGBT and can adjust the slopes of the voltage and current according to set values. Idir [21] and other scholars put forward a new active gate voltage control method in which the di/dt at turn-on and the dv/dt at turn-off are controlled by directly changing the shape of the gate input voltage. This drive circuit can reduce the overshoot current caused by the reverse recovery of the freewheeling diode during the turn-on phase, and also reduce the output voltage oscillation when the diode turns off. The gate control signal may be realized in a very simple way with a voltage divider whose ratio is as indicated in Fig. 2.
Fig. 2 Gate control circuit (with V = 15 V, Rp = 10, and Rg = 15).
Costa [23] and other scholars analyzed the switching waveform (the voltage or current waveform of the switching device) in the time and frequency domains and pointed out that the high-frequency electromagnetic interference intensity generated by the switching waveform is closely related to the number of its continuous derivatives. Assuming that the switching waveform is differentiable at most k times, the slope of the asymptote of its high-frequency spectrum is −20(k + 1) dB/decade. The active voltage control (AVC) proposed by Palmer [24] and other scholars can effectively control the transient waveform of the IGBT collector voltage, making the collector voltage waveform follow a predefined reference signal. Following the results of Costa, Palmer and other scholars, Patin [25] and other scholars proposed a Gaussian switching waveform with derivatives of all orders, and a closed-loop gate drive (CLGD) with a structure similar to the AVC to control the drain-source voltage shape of a power MOSFET. The simulation results show that, compared with hard switching, the EMI generated by the drain-source voltage of the power MOSFET is significantly reduced under the control of the closed-loop gate control circuit; however, the resulting switching waveform does not fully satisfy the continuity condition. Yang [26] and other scholars optimized the AVC circuit proposed by Palmer, shaping the emitter voltage waveform of the IGBT into the Gaussian switching waveform proposed by Patin, with a significant EMI suppression effect. Current research focuses on closed-loop gate drive circuits for IGBTs. The turn-on and turn-off processes of power MOSFETs and IGBTs are basically similar. However, because of the tail-current phenomenon of the IGBT, its switching time is longer, resulting in a lower driving pulse frequency, which makes it easier to control the IGBT in the active region.
The power MOSFET has shorter switching times and a higher switching frequency, as well as a higher on-resistance, which makes it more difficult to design a closed-loop gate driver circuit for the power MOSFET. Generally speaking, this method suppresses EMI well, but compared with hard switching, the switching time is longer, resulting in increased losses; moreover, because of the non-linearity of semiconductor devices, the stability design of the driving circuit is very important; finally, the voltage utilization rate is reduced, lowering the efficiency of the converter.
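As a numerical illustration of the derivative argument of Costa et al. above, the sketch below compares the high-frequency content of a hard-edged switching waveform with a Gaussian-smoothed version of the same waveform; the frequencies, edge width and threshold are arbitrary example values.

```python
import numpy as np

# Sketch of why smoother edges lower high-frequency EMI: a waveform
# that is differentiable k times has a spectral envelope falling as
# -20*(k+1) dB/decade, so smoothing the edges suppresses HF content.

fs = 100e6                                   # 100 MHz sampling
t = np.arange(0, 1e-3, 1 / fs)
square = (np.sin(2 * np.pi * 100e3 * t) > 0).astype(float)  # hard edges

# Smooth the edges with a short Gaussian kernel (~50 ns wide)
taps = np.arange(-5, 6) / fs
kernel = np.exp(-0.5 * (taps / 50e-9) ** 2)
kernel /= kernel.sum()
smooth = np.convolve(square, kernel, mode="same")

for name, sig in (("hard", square), ("gaussian", smooth)):
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    hf_energy = np.sum(spec[f > 10e6] ** 2)  # energy above 10 MHz
    print(f"{name:8s}: HF energy = {hf_energy:.3e}")
```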
Conclusion
With the development of power electronic technology, all kinds of power electronic equipment are widely used in various fields of modern society. With the increasing power density and switching speed of power electronic devices such as power converters, the volume of equipment and systems is decreasing, and the problem of electromagnetic interference is becoming increasingly prominent. EMI filtering technology is one of the most important and effective means of suppressing conducted electromagnetic interference, but with the development of power converters towards high frequency, miniaturization and high power density, the passive EMI filter is limited by its volume and weight and no longer suits this trend. The active power filter (APF) has broad application prospects because of its active cancellation technology. Methods that reduce EMI at the interference source, such as changing the circuit topology, improving the control strategy and optimizing the driving circuit, have become a research hotspot. | 2020-07-23T09:09:37.101Z | 2020-07-01T00:00:00.000 | {
"year": 2020,
"sha1": "5b9f4f5f2da39d5bc2fea24faada99b3e1706dbf",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1585/1/012035",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3514300ec9a62db1b057d3e8f2e0395a70844109",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
148794318 | pes2o/s2orc | v3-fos-license | Structural Investigation of the Short Dark Triad Questionnaire in Polish Population
Narcissism, Machiavellianism and psychopathy are commonly referred to as the Dark Triad of personality. In the current study, we examined the structure of the Dark Triad measured by the Polish version of the Short Dark Triad (SD3). The study was conducted with 1012 individuals in Poland. The analyses were performed in four steps: (1) the external validity of the SD3 was tested to provide evidence that SD3 is a valid measure of the three dark traits; (2) the structural validity of the SD3 was tested using competing models in confirmatory factor analyses; (3) the structure of narcissism was tested; and (4) the combined bifactor model of Machiavellianism and psychopathy was tested. The results support the differentiation of the Dark Triad into a Dark Dyad (Machiavellianism and psychopathy) and narcissism, which can be used in further theoretical work and a new operationalization of the Dark Triad.
it is not surprising that some new traits, such as everyday sadism (Buckels et al. 2014), emerge as additional dark personalities. In contrast, some researchers have argued for reducing the Dark Triad to a Dark Dyad by excluding narcissism (Egan et al. 2014; Kowalski et al. 2016). Although research on the Dark Triad is flourishing (Furnham et al. 2013; Paulhus 2014), few studies have investigated the structure of the Dark Triad and assessed it critically (Persson et al. 2017). Thus, the main objective of the current paper is an in-depth investigation of the Dark Triad's structure.
Narcissism can be characterized as significantly exaggerated self-esteem and beliefs about being special, involving constant preoccupation with ideas about unlimited success, strength, beauty or love (Emmons 1987). The most often used questionnaire to measure narcissism is the Narcissistic Personality Inventory (NPI, Raskin and Hall 1979). Contrary to the Dark Triad assumption of the socially aversive character of its elements, narcissism has adaptive facets that are generally socially accepted, e.g., narcissistic leadership abilities (Ackerman et al. 2011). Recently, Back et al. (2013), in their process model of narcissism, described two social strategies that serve to maintain the grandiose self: a self-enhancing admiration strategy and a self-defensive rivalry strategy. Whereas narcissistic admiration leads to social potential and represents the assertive and grandiose aspects of narcissism, narcissistic rivalry leads to social conflict and represents the antagonistic and exploitative aspects of narcissism.
Machiavellian personality was first described by Christie and Geis (1970). In short, Machiavellianism involves a cold and cynical worldview, a lack of emotionality, strategic planning and manipulative behaviors (Rauthmann and Will 2011). In a comprehensive review of Machiavellian personality, Jones and Paulhus (2009) argue that individuals with this personality trait have superior impulse regulation but do not have special cognitive abilities. To measure Machiavellianism, Christie and Geis (1970) developed five versions of the MACH questionnaire, the MACH-IV being the most widely used version (Jones and Paulhus 2009).
Psychopathy can be characterized by impulsivity and thrill-seeking, together with low empathy and anxiety (Paulhus and Williams 2002). The most widely used instrument to diagnose forensic psychopathy is the Psychopathy Checklist (PCL). However, Levenson et al. (1995) argued that some dimensions of psychopathy are exclusive to clinical groups; therefore, measuring psychopathy as a subclinical trait with the PCL may not be appropriate. In response, they developed the Levenson Self-Report Psychopathy Scale (LSRP), which measures primary psychopathy, operationalized in a way similar to the PCL (e.g., lack of remorse, callousness and manipulativeness), and secondary psychopathy, understood as impulsivity, intolerance of frustration, quick-temperedness and a lack of long-term goals (Levenson et al. 1995).
Short Dark Triad
Until recently, most studies measured the Dark Triad traits using independent measures. Jones and Paulhus (2014), relying on a review of the literature, developed an initial pool of 41 items covering key aspects of each Dark Triad trait and then reduced the number of items. Finally, they proposed a measure called the Short Dark Triad (SD3), comprising 27 items measuring Machiavellianism, narcissism and psychopathy (Jones and Paulhus 2014). Because all three traits were introduced into a single measurement instrument, the structure of the Dark Triad became a more important issue than it had been when these traits were measured with different instruments originating from different models. Jones and Paulhus (2014) tested the structural validity of their measure in two studies: in the first, they conducted exploratory factor analysis (EFA), on the basis of which they reduced the number of items from 41 to 27; in the second, they used exploratory structural equation modeling (ESEM) and reported the fit indices of the model. Hu and Bentler (1999) suggested cutoff values for good model fit of a comparative fit index (CFI) above .90 and a root mean square error of approximation (RMSEA) below .06, which suggests a good model fit for Jones and Paulhus, as their values were acceptable (CFI = .93; RMSEA = .04). However, it is worth noting that the structure of narcissism alone was relatively independent from the other two traits, i.e., it had only one high cross-loading onto Machiavellianism, whereas four of nine psychopathy items exhibited significant cross-loadings onto the Machiavellianism factor. Jones and Paulhus (2014) also reported results of confirmatory factor analysis (CFA); however, the model fit indices were rather poor (CFI = .82; RMSEA = .07). To summarize, both ESEM and CFA suggest that there are problems with the SD3 structure.
To date, only two studies (Pabian et al. 2015 in English and Atari and Chegeni 2016 in Farsi) have investigated the psychometric structure of the SD3 apart from Jones and Paulhus (2014). Pabian et al. (2015) and Atari and Chegeni (2016) reported model fit indices obtained by CFA at the boundary of the acceptable model fit (CFI = .90; RMSEA = .045; CFI = .84; RMSEA = .048, respectively); however, Pabian et al. (2015) excluded five items (in which the factor loadings were weaker than .30) from the analysis to improve the model fit. Atari and Chegeni (2016) on the basis of EFA also removed seven items from the questionnaire because the original measurement model of the SD3, as proposed by Jones and Paulhus (2014), was poorly fitted to the data (CFI = .73; RMSEA = .057). Such modifications of the model suggest the continued existence of problems with the structure of SD3.
Problems with the differentiation between the Dark Triad factors are also reflected in the correlations between the traits reported in other studies using SD3 (e.g., Jones and Olderbak 2014). It is worth noting that these published correlations are between summated scores, while those between latent variables in CFA (reported only by Pabian et al. 2015) seem to be much higher: r = .86 between Machiavellianism and psychopathy, r = .65 between Machiavellianism and narcissism and r = .51 between narcissism and psychopathy.
Based on the correlation coefficients from independent studies, one can conclude that narcissism is quite independent from the other two Dark Triad traits, while Machiavellianism and psychopathy are highly correlated.
Current Study
The current paper aims to investigate the factorial structure of the SD3, because there is a need to conduct the same studies on different populations in order to verify to what extent the original propositions are replicable. Replicability is the pursued goal: the more replicable the results are, especially across different populations and languages, the more confident researchers can be in them. The current study compares the results between the Polish and American populations (Jones and Paulhus 2014), although some differences are expected, because the measurement of human characteristics across different populations is very hard, if possible at all. Thus, a study like this may not provide all of the answers to the research questions, but it may shed some light on them, and these issues may be further investigated in future studies with different populations.
The external validity of SD3 was tested by inspecting the correlations with independent instruments developed to measure each trait. The structural validity of the Dark Triad as measured by SD3 was tested by comparing a set of models: (1) the Jones and Paulhus (2014) measurement model; (2) the model modified on the basis of modification indices; (3) the measurement model proposed by Atari and Chegeni (2016); and (4) the bifactor Dark Triad model. We then tested whether narcissism as measured by SD3 is unidimensional by investigating two models: (5) the unidimensional model and (6) the unidimensional model with correlated residuals. Finally, we tested (7) the bifactor Machiavellianism-psychopathy (Dark Dyad) model.
Material and Methods
Participants and Procedure Similarly to Jones and Paulhus (2014), we gathered our data via an Internet platform. The sample comprised 1012 Polish participants between 17 and 35 years of age; there were 202 male (M = 22.28; SD = 3.26) and 810 female participants (M = 22.38; SD = 3.49). All of the participants were informed that the study was anonymous; however, every participant had the opportunity to provide his or her e-mail address in order to enter a lottery for a book as a reward for participating in the study.
Measures To assess the Dark Triad traits, we used the Polish version of the SD3 (Jones and Paulhus 2014) prepared by the authors of this paper. We contacted the authors of the original scale, from whom we obtained the measure. During the translation we followed a standard two-step procedure, i.e., the questionnaire was translated into Polish, verified and corrected, and then back-translated and sent to the authors, who did not report any modifications for consideration. Participants indicate their agreement with each statement using a five-point Likert-type scale. The reliability of the SD3 was assessed using McDonald's (1999) ω coefficient, which is interpreted in the same manner as other reliability estimates. We estimated the following reliability coefficients for narcissism, Machiavellianism and psychopathy: ω = .74 [95%CI = .71-.76], ω = .74 [95%CI = .72-.77] and ω = .67 [95%CI = .64-.70], respectively. These estimated reliability coefficients are acceptable and comparable with the estimates obtained by Jones and Paulhus (2014); in the current study, only the estimate for psychopathy is lower, and the original value is not included within its confidence interval.
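For readers unfamiliar with the coefficient, a minimal sketch of McDonald's ω from a unidimensional factor solution is given below; the loadings are dummy values for illustration, not the SD3 estimates.

```python
import numpy as np

# Minimal sketch of McDonald's omega for a unidimensional factor model:
# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual
# variances). Loadings below are dummy values, not the SD3 estimates.

loadings = np.array([0.62, 0.55, 0.70, 0.48, 0.66, 0.59, 0.51, 0.44, 0.57])
residuals = 1.0 - loadings**2          # assuming standardized items

omega = loadings.sum()**2 / (loadings.sum()**2 + residuals.sum())
print(f"McDonald's omega = {omega:.2f}")
```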
Additionally, each of the Dark Triad traits was assessed independently with other instruments. To assess Machiavellianism, we used the MACH-IV (Christie and Geis 1970), which comprises 20 items measuring cynical worldview, manipulative tactics and amorality. Participants rate their agreement using a seven-point Likert-type scale. The reliability estimates for cynical worldview, manipulative tactics and amorality are as follows: ω = .62 [95%CI = .58-.66], ω = .71 [95%CI = .68-.74] and ω = .22 [95%CI = .12-.31], respectively. The reliability estimates of cynical worldview and manipulative tactics were acceptable, while that of the amorality scale was unacceptably low; this scale was therefore excluded from further analyses.
External Validity of the Short Dark Triad
The intercorrelations between the summated SD3 scores are as follows: narcissism and psychopathy r = .28 (p < .01), narcissism and Machiavellianism r = .25 (p < .01), and psychopathy and Machiavellianism r = .57 (p < .01). To assess the external validity of the measurement of narcissism, Machiavellianism and psychopathy with the SD3, we correlated them with independent Dark Triad measures. The results are presented in Table 1.
The SD3 narcissism scale correlated most strongly (.56 and above) with the traits distinguished in the NPI, while its correlations with the traits measured by the LSRP (psychopathy) and the MACH-IV (Machiavellianism) were relatively low (.34 and below). The correlations of narcissism as measured by SD3 with the two dimensions of narcissism measured by the NARQ demonstrated that although the assertive, dominant and grandiose aspects of narcissism were well tapped by SD3 (as expressed in the high correlations with the NPI scales and with the admiration dimension), the antagonistic, aggressive and exploitative aspects of narcissism were not sufficiently covered (as expressed in the small correlation with the rivalry dimension). The rivalry dimension, instead of correlating with narcissism, correlated with both psychopathy and Machiavellianism. The Machiavellianism scale of the SD3 had the highest correlations (.50 and above) with the MACH-IV scales reflecting manipulative tactics and a cynical worldview, but it was also highly correlated with primary psychopathy measured by the LSRP (.67). Although the psychopathy scale of the SD3 was most strongly correlated with secondary psychopathy, it is worth noting that the correlation between Machiavellianism measured by SD3 and primary psychopathy measured by LSRP (.67) was greater than the correlation of Machiavellianism with SD3 psychopathy itself (.58). Additionally, the correlation between psychopathy as measured by SD3 and one facet of Machiavellianism as measured by the MACH-IV (manipulative tactics) was relatively high (.43). Overall, narcissism as measured by SD3 correlated mostly with other scales measuring different aspects of narcissism, while Machiavellianism and psychopathy as measured by SD3 correlated strongly with each other's independent measures.
Structural Validity of the Short Dark Triad
To assess the structural validity of the SD3, we performed a series of confirmatory factor analyses. To evaluate whether the model fit the data, we followed Hu and Bentler's (1999) recommendations.
The first model resembled the model proposed by Jones and Paulhus (2014) without any modifications. In the second model, we purposefully modified the CFA model and tested a data-driven model of SD3. In the third model, we tested the EFA-based proposition of Atari and Chegeni (2016). Because Machiavellianism and psychopathy correlated highly with each other in all models, we also used a bifactor solution, since this allows the separation of general and domain-specific factors, also known as grouping factors (Reise et al. 2010). In bifactor CFA/ESEM, the bifactor is meant to account for the commonality between items, while the grouping factors represent unique variance (Chen et al. 2006). If the loadings on the grouping factors are stronger than those on the bifactor, the superiority of the grouping factors can be assumed (Reise et al. 2010). Finally, we tested the unidimensional structure of narcissism independently from psychopathy and Machiavellianism, which were tested using bifactor CFA.
Since the response scale of the SD3 comprises five options, we treated the data as ordinal; therefore, we performed CFA on polychoric correlation matrices and chose the WLSMV estimator. A summary table presenting the model fit indices of the competing models is given in Table 2.
The first model fitted the data poorly, which confirmed our expectations regarding the difficulty of replicating the SD3 structure. In the second model, we performed a sequence of modifications and used the CFA for exploratory purposes to improve the fit; after being modified in this way, the model obtained acceptable fit indices. In the third model, we deleted seven items following Atari and Chegeni (2016), which resulted in a poor fit to the data. In all models, the correlation between psychopathy and Machiavellianism was very high (r = .91 in the first, r = .93 in the second and r = .90 in the third model).
In the fourth model, the bifactor was introduced to test the relationships between the Dark Triad traits; however, this model was nonetheless poorly fitted to the data. The loadings on the bifactor were stronger than those on Machiavellianism and psychopathy, but the loadings on narcissism were higher than those on the bifactor, suggesting its independence; we therefore decided to test the structure of narcissism independently from that of Machiavellianism and psychopathy. In the fifth model, the unidimensional model of narcissism was at the boundary of acceptable fit as judged by the CFI, whereas the RMSEA value suggested that the model was not well specified. Thus, in the sixth model, we investigated the modification indices and identified two pairs of items that shared residual variance. After incorporating these two correlations, the model (presented in Fig. 1) fitted the data well, partially confirming the unidimensional structure of narcissism as measured by the SD3. All of the correlated items were based on the NPI and concerned grandiosity: one pair, item 5 (reverse-coded) and item 11, referred to grandiosity directly, while the second pair, items 17 and 23 (both reverse-coded), referred to shyness.
In the seventh and last of the tested models, we assessed the structure of Machiavellianism and psychopathy as a separate Dark Dyad model. The bifactor CFA with standardized loadings on the Dark Dyad (Model 7) is presented in Fig. 2.
Most items loaded more strongly on the bifactor than on Machiavellianism and psychopathy; however, some items (items 1, 10 and 19) loaded more strongly on Machiavellianism than on the bifactor. Similarly, three items (items 9, 21 and 24) loaded more strongly on psychopathy than on the bifactor. Thus, one can conclude that although Machiavellianism and psychopathy as measured by the SD3 merged into the more general Dark Dyad, a specific facet of each trait can nonetheless still be measured, namely, sensation-seeking for psychopathy and Machiavellian tactics for Machiavellianism.
Discussion
The current paper is among the first to examine the structure of the SD3 in a language other than English (Atari and Chegeni 2016), and it is not the first to encounter problems with the questionnaire's structure (Pabian et al. 2015). Because we studied only one population in one language, our results should not be treated as conclusive, and future work should aim to replicate them in other languages and in different populations.
In the current study, we investigated the structural validity of the Polish adaptation of the SD3. The SD3 scales can generally be deemed externally valid because they correlated mostly with the relevant independent measures. It is worth noting that narcissistic rivalry (the antagonistic and aggressive aspect of narcissism) correlated weakly with narcissism as measured by the SD3 and strongly with Machiavellianism and psychopathy, whereas narcissistic admiration and the NPI scales, which represent the assertive and grandiose aspect of narcissism, were strongly related to SD3 narcissism. It can be concluded that narcissism as measured by the SD3 resembles narcissism as measured by the NPI; thus, like the NPI, the SD3 misses the antagonistic and aggressive aspect of narcissism.
As we expected from the literature review, we found a very high correlation between Machiavellianism and psychopathy, which supports the hypothesis that these traits as measured by the SD3 are not sufficiently differentiated: although each correlates most strongly with the relevant external scales, they also correlate strongly with each other. Pabian et al. (2015) also found a very high correlation between Machiavellianism and psychopathy in their CFA, but instead of searching for the potential source of this correlation, they implemented a sequence of modifications of the CFA model to achieve good fit indices. Atari and Chegeni (2016), in their assessment of the SD3 structure, likewise had difficulties replicating the original measurement model, but similarly to Pabian et al. (2015) they simply deleted seven items and obtained good fit indices. Such an approach ignores the underlying theoretical problems rather than attempting to solve them (Browne 2001). To emphasize this conclusion, we used the CFA for exploratory purposes and tested the model with a series of modifications. This interference resulted in good model fit; however, the correlation between Machiavellianism and psychopathy remained very high. One could conclude that although previous studies achieved good fit indices (Atari and Chegeni 2016; Pabian et al. 2015), the abbreviated propositions are data-driven and not replicable; in light of these results, the structure of the SD3 is challenged. As in the assessment of external validity, in our investigation of structural validity we tested different CFA models and found a very high correlation between Machiavellianism and psychopathy, which emphasizes the difficulty of differentiating between the two. This result is in line with the literature, e.g., Egan et al. (2014). Moreover, narcissism has been the least correlated with the other dark traits in most studies (e.g., Jonason 2015). All of this evidence suggests that narcissism is the least nested within the Dark Triad. To examine this problem, we introduced a bifactor accounting for the observed commonalities between the Dark Triad traits.
First, we examined the model with a Dark Triad bifactor and three grouping factors (narcissism, Machiavellianism, and psychopathy). The results from this model also suggested that narcissism is rather an autonomous member and the one least related to Machiavellianism and psychopathy. Second, we tested the structural validity of the SD3 as divided into narcissism and the Dark Dyad (Machiavellianism and psychopathy).
In the narcissism model, we confirmed that the SD3 measures narcissism as a unidimensional construct; however, its structure is not flawless. All four items that were correlated within the model originated from the NPI, and according to Ackerman et al. (2011) they all measure a single aspect of narcissism, namely, grandiose exhibitionism (which was expressed in the correlations added to the model). Among the other SD3 items, two more were also based on NPI items (one for leadership/authority and one for entitlement/exploitativeness), while the remaining three items, which are unique to the SD3, also concern grandiosity. Thus, the majority of the SD3 items (all of the correlated items as well as the items unique to the SD3) concern only one aspect of narcissism, i.e., grandiosity, and only two items try to capture all other aspects of narcissism, which makes the SD3 only a narrow measure of narcissistic grandiosity.
Because narcissism as measured by the SD3 is strongly based on the NPI, it has also inherited its limitations, and alternative methods of narcissism assessment may provide better insight. Differentiation of the two facets of narcissism as measured by the NARQ has disentangled some existing apparent paradoxes concerning narcissism and many psychological constructs, e.g., its relationships with self-esteem, impulsivity, personality traits and basic values (Rogoza et al. 2016a, b); thus, the incorporation of this model into Dark Triad research may shed new light on its structure. The assertive and grandiose aspects of narcissism are not strongly associated with the Dark Triad, which was expressed both in the correlational analyses (i.e., the low correlation of the rivalry dimension with narcissism as measured by the SD3 and the simultaneously high correlations of this dimension with both Machiavellianism and psychopathy) and in the structural assessment (i.e., the exclusion of narcissism from the Dark Triad model). Thus, in further research on the structure of the Dark Triad, one option may be to replace the SD3 narcissism items with items measuring the antagonistic and exploitative aspects of narcissism.
In the Dark Dyad model, most of the items loaded only on the bifactor, but some of them composed facets specific to Machiavellianism and psychopathy: the psychopathy-specific facet concerned sensation-seeking, while the Machiavellianism-specific facet concerned Machiavellian tactics. Jones and Paulhus (2009), in their review of the Machiavellian personality, noted that skillfulness in manipulative tactics may come from a superior impulse-regulation ability. Similarly, Hare and Neumann (2008), in their review of psychopathy, noted that impulsivity is one of the core constructs associated with psychopathy. Our results support the interpretation that Machiavellianism and psychopathy can be differentiated by sensation-seeking, which is driven by impulsivity, and Machiavellian tactics, which are driven by impulse regulation. In summary, Machiavellianism and psychopathy lie on opposite sides of the dimension of impulse-regulation ability.
In summary, on the basis of the assessment of external and structural validity, the Dark Triad as measured by the SD3 comprises two main constructs: narcissistic grandiosity, whose measurement misses the antagonistic and exploitative aspects of narcissism, and the Dark Dyad, which can be differentiated at a conceptual level along a distinct impulse-regulation dimension. The obtained results suggest that differentiating between Machiavellianism and psychopathy as they are currently measured is hard, if at all possible, and that using short measures like the SD3 (Jones and Paulhus 2014) might additionally hinder this distinction.
Compliance with Ethical Standards
Conflict of Interest Radosław Rogoza declares that he has no conflict of interest. Jan Cieciuch declares that he has no conflict of interest.
Funding This study was funded by Polish National Science Center (grant number 2015/19/N/HS6/00685).
Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed Consent Informed consent was obtained from all individual participants included in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Role of bioinformatics in establishing microRNAs as modulators of abiotic stress responses: the new revolution
microRNAs (miRs) are a class of 21-24 nucleotide long non-coding RNAs that regulate the expression of associated genes, mainly by cleavage or translational inhibition of the target transcripts. Through this silencing activity, miRs act as important components in the regulation of plant responses to various stress conditions. In recent years, with drastic changes in environmental and soil conditions, different types of stress have emerged as major challenges for plant growth and productivity. The identification and profiling of miRs has itself been a challenge for researchers, given their small size and the large number of probable sequences in the genome. The application of computational approaches has expedited the identification of miRs and their expression profiling under different conditions. The development of High-Throughput Sequencing (HTS) techniques has made the global profiles of miRs accessible for understanding their mode of action in plants. The introduction of various bioinformatics databases and tools has revolutionized the study of miRs and other small RNAs. This review focuses on the role of bioinformatics approaches in the identification and study of the regulatory roles of plant miRs in the adaptive response to stresses.
FIGURE 1 | Various abiotic stresses and their physiological effects on plants.
Plants cannot escape these stresses, but they have developed sophisticated systems to cope with them (Nakashima et al., 2009; Pfalz et al., 2012; Upadhyaya and Panda, 2013). The response to abiotic stresses is usually multigenic and involves altering the expression of nucleic acids, proteins and other macromolecules (Figure 1). Several excellent reviews discuss the impact of these stresses on plants in detail (Cramer et al., 2011; Shanker and Venkateswarlu, 2011; Duque et al., 2013; Hasanuzzaman et al., 2013a; Rejeb et al., 2014; Petrov et al., 2015; Sodha and Karan, 2015).
Primarily, fluctuations in available water, temperature and soil salt content are recognized as the basic environmental stress factors. The scarcity of water, caused by low rainfall, paucity of soil water and excessive evaporation, is probably the most common factor limiting crop growth (de Oliveira et al., 2013). Water deficit negatively affects plant growth and development by modulating nutrient uptake, photosynthesis, hormonal levels, water potential etc. This often results in tissue dehydration leading to senescence (Kaiser, 1987; Aroca et al., 2001, 2012; Kacperska, 2004; Wahid and Close, 2007). Under low-water conditions plants activate their protective machinery to enhance water uptake and reduce water loss. However, an insufficient water supply or drought limits root hydraulic conductivity (Nobel and Cui, 1992; North and Nobel, 1997; Aroca et al., 2012), thereby affecting water uptake and resulting in a physiological drought condition for the plant (Bréda et al., 1995; Duursma et al., 2008; Aroca et al., 2012). Similarly, when the water level rises above the optimum it results in flooding, which creates hypoxic conditions that restrict aerobic respiration, stimulates the production of reactive oxygen species (ROS) and induces ethylene synthesis (Bailey-Serres and Voesenek, 2008; Perata et al., 2011).
Fluctuations in atmospheric temperature due to climate change also exert adverse effects at the physical and cellular levels. High temperatures change the cellular state, lipid composition, membrane fluidity and organelle properties. They induce oxidative stress and reduce the water content of the soil, causing physiological drought in plants (Wahid and Close, 2007; Giri, 2011; Hasanuzzaman et al., 2013b; Goswami et al., 2014). They also affect flowering by decreasing the number of flowers and reducing pollen viability and flower fertility (Matsui et al., 2000; Prasad et al., 2000, 2006; Suzuki et al., 2001), and they cause embryo damage during the early stages of seed germination (Grass, 1994; Hasanuzzaman et al., 2013b). Low temperatures also impose osmotic and oxidative stress on plants (Aroca et al., 2012). They reduce the metabolic rate, increase rigidification of the cellular membrane, cause flower abortion and fertilization breakdown, and negatively affect seed filling (Thakur et al., 2010; Zinn et al., 2010; Hedhly, 2011).
Temperature increases, along with poor irrigation practices, raise soil salinity. Salinity has emerged as an important stress that inhibits plant growth at every stage by inducing osmotic stress and ion toxicity (Diédhiou and Golldack, 2006; Joseph and Mohanan, 2013; Roychoudhury and Chakraborty, 2013). Salinity mainly affects roots by decreasing water-use efficiency and ion exclusion, which adversely affects root elongation, spike development and plant height (Choi et al., 2003; Alam et al., 2004; Diédhiou and Golldack, 2006; Mahmood et al., 2009; Aroca et al., 2012; Hakim, 2013; Pierik and Testerink, 2014).
The various environmental stresses result in osmotic and oxidative stresses, which inhibit metabolic reactions. Oxidative damage is one of the main causes of productivity loss and is triggered by an increase in reactive oxygen species (ROS), including the superoxide radical (O2−), the hydroxyl radical (OH) and hydrogen peroxide (H2O2) (Mittler, 2002; Apel and Hirt, 2004; Bartels and Sunkar, 2005; Foyer and Noctor, 2005; Addo-Quaye et al., 2009). ROS are responsible for nucleic acid damage, protein oxidation and lipid peroxidation (Foyer et al., 1994). Plants have developed intrinsic mechanisms to counter oxidative stress, including the recruitment of enzymatic scavengers such as superoxide dismutase (SOD), ascorbate peroxidase, glutathione peroxidase, glutathione S-transferase and catalase, and of non-enzymatic low-molecular-mass molecules such as ascorbate, tocopherol, carotenoids and glutathione (Mittler, 2002; Mittler et al., 2004).
BASICS OF MICRORNA
The discovery of regulatory small RNAs (sRNAs), which block specific messenger RNAs (mRNAs) at the post-transcriptional level (PTGS, post-transcriptional gene silencing) by cleavage or translational repression (Sunkar et al., 2006; Shi et al., 2012) or interfere with transcription (TGS, transcriptional gene silencing) by directing DNA methylation of genes (Wu and Zhang, 2010), has unlocked a new avenue in the regulation of gene expression. The sRNAs constitute a large family represented by many species of RNA molecules distinguished from each other by their size, biogenesis, mode of action, regulatory role etc. (Sanan-Mishra et al., 2009; Lima et al., 2011; Meng et al., 2011a; Zheng et al., 2012).
The microRNAs (miRs) represent a major sub-family of endogenously transcribed sequences, ranging in length from 21 to 24 nt (Eldem et al., 2013). They have been established as a major regulatory class that inhibits gene expression in a sequence-dependent manner. The lin-4 and let-7 regulatory RNAs are accepted as the founding members of the miR family (Lee et al., 1993; Reinhart et al., 2002), which is conserved across animal and plant species. Although there is no conservation between the animal and plant sequences, high conservation is observed among plant miRs. An exception is provided by Ath-miR854 and Ath-miR855, which regulate the levels of the transcript encoding oligouridylate binding protein 1b (UBP1b) (Arteaga-Vázquez et al., 2006). The target transcript of miR854 performs similar functions in plants and in animals (Arteaga-Vázquez et al., 2006).
MicroRNA Biogenesis
Each miR arises in the nucleus from an independent transcription unit, comprising its own promoter, transcribed region and terminator, by utilizing the basic machinery for DNA-dependent RNA polymerase II-mediated transcription (Kurihara and Watanabe, 2004; Lee et al., 2004; Xie et al., 2005a; Kim et al., 2011). Plant miR genes are present throughout the genome, although the majority of loci are found in intergenic regions that are not protein coding (Jones-Rhoades et al., 2006; Wahid et al., 2010). Comparatively fewer plant miRs are present in introns (Lagos-Quintana et al., 2001; Lau et al., 2001; Chen, 2008; Nozawa et al., 2010; Wahid et al., 2010), and they are rarely found in exons (Olena and Patton, 2010; Li et al., 2011). Two miRs, miR436 and miR444, were mapped to the exonic regions of the protein-coding genes J023035E19 (AK120922) and J033125N22 (AK103332), respectively. It is hypothesized that such miRs control host gene expression via a negative feedback loop that affects alternative splicing and the cytoplasmic movement of transcripts (Slezak-Prochazka et al., 2013). Recently, CDC5 was identified as a MYB-related DNA-binding protein that positively regulates miR production (Zhang et al., 2013a) by binding to miR promoters and through interaction with the RNase III enzyme DCL1 (Dicer-Like 1). The large pri-miRs (primary transcripts) contain a 5′-cap and a 3′-polyA tail and are stabilized in the nucleus by DDL (Dawdle), an RNA-binding protein (Yu et al., 2008).
The pri-miRs are further processed into hairpin-loop-structured pre-miRs (precursor miRs) in the D-bodies (Dicing bodies) or SmD3-bodies (small nuclear RNA-binding protein D3 bodies) (Kurihara et al., 2006; Fang and Spector, 2007; Fujioka et al., 2007) by a protein complex containing DCL1 (Schauer et al., 2002) and the CBC (Cap-Binding protein Complex) (Kim et al., 2008). The accuracy of DCL1-mediated pri-miR processing is promoted by both HYL1 (Hyponastic Leaves 1) and the C2H2-zinc finger protein SE (Serrate) (Kurihara et al., 2006; Dong et al., 2008; Manavella et al., 2012a). This activity is also aided by DRB (Double-strand RNA-Binding) proteins (Kurihara et al., 2006; Vazquez, 2006). Recently, the G-patch domain protein TGH (Tough) was identified as another active player responsible for enhancing DCL1 activity (Ren et al., 2012). It has been shown that HYL1 binds the double-stranded (ds) region of the pri-miR (Hiraguri et al., 2005; Rasia et al., 2010; Yang et al., 2010), TGH binds the single-stranded (ss) RNA region (Ren et al., 2012) and SE possibly binds at ssRNA/dsRNA junctions (Machida et al., 2011). It was also observed that HYL1 is a phospho-protein that directly interacts with the CPL1 (C-terminal domain Phosphatase-Like 1) protein, which maintains its hypo-phosphorylated state (Manavella et al., 2012a). Thus, CPL1 also plays a critical role in accurate miR processing, although it is not directly required for DCL1 activity (Manavella et al., 2012a). CPL1 directly interacts with SE, and a mutation in SE can affect the phosphorylation status of HYL1 by preventing the recruitment of CPL1 (Manavella et al., 2012a). The proposed model for pri-miR processing thus indicates the association of multiple RNA-binding proteins with definite regions to maintain the structural determinants for recruiting and directing DCL1 activity. DCL1, HYL1, SE and TGH seem to interact directly (Kurihara et al., 2006; Lobbes et al., 2006; Yang et al., 2006; Qin et al., 2010; Machida et al., 2011; Ren et al., 2012) and are co-localized in the D-bodies, as shown by bimolecular fluorescence complementation. However, it has not been demonstrated whether they represent a stable plant microprocessor complex (Fang and Spector, 2007; Fujioka et al., 2007; Song et al., 2007; Manavella et al., 2012b; Ren et al., 2012).
The hairpin-looped pre-miRs thus formed are further processed by DCL1 to produce the miR/miR* duplex (Xie et al., 2005b; Sanan-Mishra et al., 2009; Naqvi et al., 2012). Recently, a proline-rich protein, SIC (Sickle), was found to co-localize with HYL1 foci and to play an important role in the accumulation of the mature miR duplex (Zhan et al., 2012). The strands of the duplex are protected from uridylation and degradation by the activity of a methyltransferase protein known as HEN1 (Hua Enhancer 1), which covalently attaches a methyl residue to the 3′ ribose of the last nucleotide of each strand (Li et al., 2005a; Yu et al., 2005). The miR duplexes are transported to the cytoplasm by HST (Hasty), the ortholog of Exportin-5 (Park et al., 2005), where the miR strand guides the AGO1 (Argonaute 1)-containing RNA-induced silencing complex (RISC) to the target transcript (Baumberger and Baulcombe, 2005; Qi et al., 2005).
microRNA Function
Plant miRs generally control the expression of their target transcripts by cleavage or translational repression (Chen, 2009). Brodersen et al. (2008) concluded that central matches in the miR:target-mRNA duplex tend to lead to cleavage of the target mRNA, regardless of a few mismatches in other regions, while central mismatches lead to translational repression. It was hypothesized that rapid fine-tuning of target transcripts by translational repression is required for the reversible modulation of negative regulators of stress responses, whereas the on-off switching of target gene expression by cleavage is important in regulating developmental processes, which require the permanent determination of cell fates (Baumberger and Baulcombe, 2005).
IDENTIFICATION OF STRESS-ASSOCIATED microRNAs
The identification of plant miR families began in the year 2000 with direct cloning and sequencing (Llave et al., 2002; Park et al., 2002; Reinhart et al., 2002). However, this was an uphill task owing to their small size, methylation status and multiple occurrences in the genome. The numbers nevertheless increased rapidly with advancements in cloning techniques and computational algorithms. In the past few years, high-throughput sequencing and screening protocols have caused an exponential increase in the number of miRs identified and functionally annotated from various plant species (Rajagopalan et al., 2006; Fahlgren et al., 2007; Jagadeeswaran et al., 2010; Rosewick et al., 2013). This is best exemplified by the establishment of miRBase, a biological database that acts as an archive of miR sequences and annotations (Griffiths-Jones, 2004; Griffiths-Jones et al., 2008; Kozomara and Griffiths-Jones, 2014). The first release of miRBase, in 2002, included only 5 miRs from a single plant species, Arabidopsis thaliana. This was followed by the inclusion of Oryza sativa in 2003, and miRs reported from Medicago truncatula, Glycine max and Populus trichocarpa were added in 2005. The current version (release 21) includes 48,496 mature plant miRs derived from 6992 hairpin precursors reported in 73 plant species (Figure 2).
The association of plant miRs with stress was first reported in 2004 (Sunkar and Zhu, 2004). There are now numerous reports supporting a function for miRs in the adaptive response to abiotic stresses, including drought (Zhou et al., 2010), cold, salinity (Liu et al., 2008a) and nutrient deficiency (Fujii et al., 2005). A total of 1062 miRs have been reported to be differentially expressed under 35 different abiotic stress types in 41 plant species (Zhang et al., 2013b). The detailed list of these miRs is available as Supplementary Table 1. The comparative picture of stress-induced dysregulation of Arabidopsis and rice miRs is compiled in Figure 3.
A survey of the literature reveals that three major approaches have been employed for the identification and expression profiling of stress-induced miRs. The first is the classical experimental route, which includes direct cloning, genetic screening and expression profiling. The second involves computational predictions from genomic or EST loci, and the third employs a combination of both, being based on the prediction of miRs from High-Throughput Sequencing (HTS) data. Each of these is followed by experimental validation by northern analysis, PCR or microarrays. Cloning-related studies led to the establishment of different protocols for sRNA isolation and adaptor-mediated synthesis of cDNA libraries, followed by amplification and cloning. The clones were screened and sequenced to identify the potential miRs (Llave et al., 2002; Reinhart et al., 2002; Sunkar and Zhu, 2004). This was therefore a time-consuming, low-throughput, laborious and expensive approach.
However, the first report indicating a role for miRs in plant responses to environmental stresses came from the sequencing and analysis of a library of sRNAs from Arabidopsis seedlings treated with cold, dehydration, salinity and the plant stress hormone abscisic acid (ABA). Several miRs were found to be up-regulated or down-regulated by these abiotic stresses (Sunkar and Zhu, 2004). This strategy was also used to clone miRs from mechanical stress-treated Populus plants (Lu et al., 2005); a majority of these miRs were predicted to target developmental- and stress/defense-related genes. In our lab, 39 new miR sequences were cloned from a salt-stressed basmati rice variety. This study also provided evidence for a converging functional role of miRs in managing both abiotic and biotic stresses (Sanan-Mishra et al., 2009).
The importance of miRs in abiotic stress responses was also implicated by the fact that several mutants defective in miR metabolism, such as hyl1, hen1 and dcl1, exhibit hypersensitivity to ABA, salt and osmotic stresses (Lu and Fedoroff, 2000). Nonetheless, direct evidence was provided by studies monitoring the down-regulation of miR398 expression in response to oxidative stresses in Arabidopsis. It was later shown that miR398 targets two Cu/Zn superoxide dismutase (CSD) transcripts, cytosolic CSD1 and chloroplastic CSD2, so the stress-induced reduction of miR398 was expected to improve plant tolerance. This theory was subsequently proved by the analysis of transgenic lines under oxidative stress conditions (Sunkar et al., 2006).
Expression analysis by northern blotting revealed that miR395 and miR399 are involved in the sulfate and inorganic phosphate starvation responses, respectively (Jones-Rhoades and Bartel, 2004; Fujii et al., 2005). Similarly, RNA gel blot analysis identified miRs induced by cold, ABA, dehydration and high salinity in 2-week-old Arabidopsis seedlings (Sunkar and Zhu, 2004). The results indicated that Ath-miR393 was highly up-regulated, Ath-miR397b and Ath-miR402 were slightly up-regulated, and Ath-miR389a.1 was down-regulated under all the stress treatments. Low-temperature stress induced the expression of Ath-miR319c, but no increase was seen in response to dehydration, NaCl or ABA (Sunkar and Zhu, 2004). These and related findings not only helped in interpreting the role of miRs during stress but also unraveled the roles of specific members of miR families. A comprehensive study of Ath-miR398 revealed that the expression of the miR398 precursors (with identical mature sequences) is increased under high-temperature stress and that heat stress induces the expression of Ath-miR398b to a much higher level than that of Ath-miR398a,c (Guan et al., 2013). Similarly, in rice, Osa-miR169g was shown to be the only drought-induced member of the ABA-responsive miR169 family (Zhao et al., 2007).
The variable expression patterns of miRs in response to different stresses were captured by reverse transcription quantitative PCR (RT-qPCR) in several plants, including Arabidopsis (Jung and Kang, 2007; Reyes and Chua, 2007; Li et al., 2008; Liu et al., 2008a; Jia et al., 2009), rice, Phaseolus vulgaris (Arenas-Huertero et al., 2009), sugarcane (Thiebaut et al., 2012) and poplar (Rossi et al., 2015). These methods captured the similarities and differences in the expression profiles of conserved miRs across different plants. This is exemplified by molecules such as miR393, which is consistently up-regulated during drought stress in many plants, including Arabidopsis, Medicago, common bean and rice (Sunkar and Zhu, 2004; Zhao et al., 2007; Arenas-Huertero et al., 2009). In contrast, miR169 was found to be induced by drought and high salinity in rice (Zhao et al., 2009) but down-regulated by drought stress in Arabidopsis. High-throughput expression profiling through one-tube stem-loop RT-PCR quantified the relative expression levels of 41 rice miRs under drought, salt, cold or ABA treatments (Ding et al., 2011).
The need for genome-wide characterization of miR expression profiles established microarray analysis as a useful tool (Garzon et al., 2006; Zhao et al., 2007). Microarray technology is a hybridization-based and relatively cost-effective assay that allows large numbers of molecules to be analyzed in parallel. Tiling-path microarray analysis identified 14 stress-inducible Arabidopsis miRs after screening 117 miRs under high-salinity, drought and low-temperature stress conditions (Liu et al., 2008a; Zhang et al., 2008b). The results were further validated to provide evidence for cross-talk among the high-salinity-, drought- and low-temperature-associated signaling pathways (Liu et al., 2008a). Similar studies captured the expression patterns of miRs in response to ultraviolet-B radiation in Arabidopsis (Zhou et al., 2007), drought stress in rice (Zhao et al., 2007), cold stress in rice (Kang et al., 2010), cadmium stress in rice (Ding et al., 2011), and ABA and NaCl treatment in Populus tremula (Jia et al., 2009).
The expression patterns also indicated that tissue-specific regulation of miRs may be important for adaptation to stress. Under water-deficit conditions, miR398a/b and miR408 were up-regulated in both roots and shoots of Medicago truncatula, but the increase was more pronounced in the shoots than in the roots. This was accompanied by the down-regulation of their corresponding targets, COX5b and plantacyanin, suggesting that these miRs have a crucial role in regulating plant responses to water deficiency (Trindade et al., 2010). In barley, miR166 was up-regulated in leaves but down-regulated in roots, while miR156a, miR171 and miR408 were induced in leaves but unaltered in roots (Kantar et al., 2010).
The miR expression profiles were also used to compare genotypic differences between varieties exhibiting contrasting stress sensitivities. Microarray profiles of salt-resistant and salt-susceptible Zea mays identified 98 miRs belonging to 27 families (Ding et al., 2009). Zma-miR168 family members were induced in the salt-tolerant maize line but suppressed in the salt-sensitive line. Interestingly, this salt-responsive behavior of miR168 was found to be conserved between maize and Arabidopsis (Liu et al., 2008a). miR microarrays were also used to study drought-tolerant wild emmer wheat (Triticum dicoccoides) (Kantar et al., 2011) and two cotton cultivars with high tolerance (SN-011) and high sensitivity (LM-6) to salinity (Yin et al., 2012), and for comparative analysis between drought-resistant and drought-susceptible soybean (Kulcheski et al., 2011). A comparison of 12 salt-tolerant and 12 salt-susceptible genotypes of Oryza sativa identified 12 polymorphic miR-based simple sequence repeats (Mondal and Ganie, 2014). Only miR172b-SSR differed between the salinity-tolerant and salinity-susceptible genotypes. The genotype-dependent miR profiles suggested that the response of miRs to abiotic stresses varies among closely related genotypes with contrasting stress sensitivities. The analysis also showed that there was less diversity of miR genes in the tolerant than in the susceptible cultivars (Mondal and Ganie, 2014).
It has been verified that a majority of known miRs are evolutionarily conserved and are expected to have homologs or orthologs in other species, so search criteria typically allow up to three sequence mismatches when looking for conserved miRs in heterologous species. Using this approach, 85 conserved sequences showing a perfect match to miRs reported in miRBase (Release 19) were predicted from Morus notabilis tissues (Jia et al., 2014), while in another study 35 miR families were identified in heat-stressed Brassica napus by allowing two mismatches with A. thaliana miRs. The conserved sequences of plant miRs and other structural features were thus used for developing suitable strategies and rules for identifying and annotating new miR genes (discussed in Section The Influence of Bioinformatics Approaches on microRNA Nomenclature and Annotation) (Lagos-Quintana et al., 2001; Reinhart et al., 2002; Floyd and Bowman, 2004; Wang et al., 2004; Adai et al., 2005; Zhang et al., 2006a; Lukasik et al., 2013). One of the early comprehensive computational analyses, by Jones-Rhoades and Bartel (2004), systematically identified plant miRs and their regulatory targets conserved between Arabidopsis and rice. Using the MIRcheck algorithm, they predicted that miRs could target mRNAs such as superoxide dismutases (SOD), laccases and ATP sulfurylases, which are involved in plant stress responses. Such studies led to the identification of the involvement of Ath-miR398 in the ROS pathway through its target sites on Cu/Zn-SOD (Jones-Rhoades and Bartel, 2004; Sunkar and Zhu, 2004; Lu et al., 2005; Sunkar et al., 2005). A similar approach was used in the miRFinder computational pipeline to identify 91 conserved plant miRs in rice and Arabidopsis (Bonnet et al., 2004a).
Another strategy was based on the property of miRs to bind with near-perfect complementarity to their target transcripts (Laufs et al., 2004). In plant species where the target sequence was available, conserved miRs could be predicted by using 20-mer genomic segments with no more than two mismatches as in silico probes. This target-guided strategy was adopted to identify 16 families of drought stress-associated miRs from Physcomitrella patens (Wan et al., 2011).
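At its core, this target-guided scan reduces to a sliding-window mismatch count against the reverse complement of a known target site. A minimal Python sketch is given below (illustrative only; a real pipeline would also handle G:U wobble pairing, both genome strands and indexed genome files):

```python
COMP = str.maketrans("ACGTU", "TGCAA")

def revcomp(seq: str) -> str:
    """Reverse complement in DNA letters (U treated as T)."""
    return seq.upper().translate(COMP)[::-1]

def mismatches(a: str, b: str) -> int:
    """Hamming distance between two equal-length strings."""
    return sum(x != y for x, y in zip(a, b))

def target_guided_scan(genome: str, target_site: str, k: int = 20, max_mm: int = 2):
    """Return genomic k-mers within max_mm mismatches of the in silico probe.

    The probe is the reverse complement of the conserved target site, i.e. the
    expected miR-like sequence.
    """
    probe = revcomp(target_site)[:k]
    return [(i, genome[i:i + k])
            for i in range(len(genome) - k + 1)
            if mismatches(genome[i:i + k], probe) <= max_mm]
```

Candidate loci returned by such a scan would then be folded and checked against the precursor criteria described later in this review.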
Computational predictions also utilized the conservation of miR sequences and key secondary-structure features of pre-miRs, such as their characteristic fold-back structure and thermodynamic stability, to predict new miRs (Berezikov et al., 2006). Seventy-nine putative miRs were identified in wheat using a traditional computational strategy, of which 9 were validated by northern blot experiments (Jin et al., 2008). Subsequently, bioinformatics tools like miRAlign were developed based on the requirement of structural similarity and sequence conservation between new candidates and experimentally identified miRs. Although numerous miR profiles were generated by these computational algorithms, the approach was not found to be appropriate for species with poorly annotated genomes (Chen and Xiong, 2012).
The non-availability of complete genome annotation was overcome by employing Expressed Sequence Tag (EST) databases. Because ESTs represent true gene expression entities, they emerged as better indicators of the dynamic expression of miRs. A detailed study identified 123 miRs from stress-induced ESTs of 60 plant species, confirming that, irrespective of evolutionary divergence, miRs are highly conserved in the plant kingdom and that miR genes may exist as orthologs or homologs in different species within the same kingdom (Weber, 2005; Zhang et al., 2006b). The EST databases were also used to confirm some novel miRs identified earlier by computational strategies in citrus and peach. In a recent study, ESTs from abiotic stress-treated libraries of Triticum aestivum were used to identify novel miRs in drought-, cold- and salt-stressed cDNA libraries by searching against all mature sequences deposited in miRBase (Release 19) (Pandey et al., 2013).
High Throughput Sequencing
The recent development of HTS approaches has ushered in a new era by allowing the sequencing of millions of sRNA molecules. HTS techniques employ sequencing-by-synthesis (SBS) technology, which enables access to the full complexity of sRNAs in plants. In addition, HTS provides quantitative information on expression profiles, since the cloning frequency of each sRNA generally reflects its relative abundance in the sample. Signature-based expression profiling methods such as massively parallel signature sequencing (MPSS) have identified miRs that had proven difficult to find by traditional cloning or in silico prediction. Sequencing technologies are rapidly emerging as the favored alternative to microarray-based approaches, since direct measures of gene expression can be obtained through the sequencing of random ESTs, SAGE and MPSS. The expression patterns of the identified miR targets can then be followed in transcriptome sequencing data to gain novel insights into plant growth, development and stress responses (Li et al., 2013). Though currently expensive, the technique is expected to become more affordable as the technology matures.
Complex computational algorithms are used to rapidly and rigorously sift through HTS data for the identification of putative miRs (Figure 5). These datasets have been very successful for the identification of conserved miRs, whose sequences are well maintained across plant species. The targets of these miRs can also be predicted readily using Parallel Analysis of RNA Ends (PARE) sequencing, since a miR and its target mRNA often share nearly perfect complementarity (Bonnet et al., 2004b; Jones-Rhoades and Bartel, 2004). The HTS data also provide a useful source for hunting non-conserved or species-specific miRs based on the miR annotation criteria (discussed in Section The Influence of Bioinformatics Approaches on microRNA Nomenclature and Annotation).
The HTS approach was initially used to visualize the repertoire of sRNAs in Arabidopsis (Rajagopalan et al., 2006; Fahlgren et al., 2007), followed by investigations of rice miR expression profiles in drought and salt stress responses. Later, Liu and Zhang identified 67 arsenite-responsive miRs belonging to 26 miR families in Oryza sativa (Liu and Zhang, 2012). Solexa sequencing was also used to identify conserved and novel miRs in Glycine max libraries from water-deficit and rust-infection treatments (Kulcheski et al., 2011), cold-responsive miRs in trifoliate orange, Poncirus trifoliata (Zhang et al., 2014a), drought- and salinity-responsive miRs in Gossypium hirsutum (Xie et al., 2015), heat stress-induced miRs in Brassica napus, and salt stress-responsive miRs in Raphanus sativus. The regulation of miRs in response to various abiotic stresses was studied in Arabidopsis under drought, heat, salt and metal-ion treatments, such as copper (Cu), cadmium (Cd) and sulfur (S) excess or deficiency, using sRNA NGS libraries. The search for the most profound changes in miR expression patterns identified miR319a/b, miR319b.2 and miR400 as responsive to most of the stresses under study (Barciszewska-Pacak et al., 2015).
Comparative profiles of miR expression during cold stress among Arabidopsis, Brachypodium and Populus trichocarpa revealed that miR397 and miR169 are up-regulated, indicating the presence of conserved cold-responsive pathways in all three species. Differences between the pathways were highlighted by miR172, which was up-regulated in Arabidopsis and Brachypodium but not in poplar (Zhang et al., 2009a). Opposing patterns of miR regulation in different plant species during cold stress were observed for miR168 and miR171: these miRs are up-regulated in poplar (Lu et al., 2008) and Arabidopsis (Liu et al., 2008a) but down-regulated in rice (Lv et al., 2010). Likewise, HTS analysis of the salt-stressed sRNAome identified 211 conserved miRs and 162 novel miRs, belonging to 93 families, between Populus trichocarpa and P. euphratica (Li et al., 2013). Using the approach of comparative miR profiling followed by experimental validation, our group identified 59 Osa-miRs that show tissue-preferential expression patterns and significantly supplemented 51 potential interactive nodes in these tissues (Mittal et al., 2013).
HTS technology has also played a crucial role in the identification and characterization of miR targets through PARE or degradome sequencing. This involves sequencing the entire pool of cleaved targets, followed by mapping of the miR-guided cleavage sites (Ding et al., 2012). In Populus, 112 transcripts targeted by 51 identified miR families were validated using degradome sequencing (Li et al., 2013). There are several reports that used HTS of sRNA pools together with degradome analysis to identify the targets of stress-induced miRs, for example in maize (Liu et al., 2014), tomato (Cao et al., 2014), Raphanus sativus (Wang et al., 2014), Populus (Chen et al., 2015), rice (Qin et al., 2015), Phaseolus vulgaris (Formey, 2015) and barley (Hackenberg et al., 2015).
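The core of degradome-based validation reduces to simple coordinate arithmetic: AGO-mediated slicing occurs between positions 10 and 11 from the miR 5′ end, so degradome read 5′ ends should pile up at the corresponding transcript coordinate. A toy sketch of this check is shown below; the inputs and 0-based coordinate convention are illustrative assumptions, not the method of any specific tool cited above.

```python
from collections import Counter

def expected_cut(site_start: int, mir_len: int) -> int:
    """0-based mRNA coordinate of the 5' end of the 3' cleavage fragment.

    The miR pairs antisense to the mRNA, so its 5' end sits at the 3' end of
    the target site; the cut opposite miR positions 10/11 falls 9 nt inside
    the site from that end.
    """
    site_end = site_start + mir_len - 1
    return site_end - 9

def pare_support(cut: int, read_5p_ends, window: int = 1) -> int:
    """Count degradome read 5' ends at (or within 'window' nt of) the cut."""
    counts = Counter(read_5p_ends)
    return sum(counts[p] for p in range(cut - window, cut + window + 1))

# e.g. a 21-nt target site starting at transcript position 100 -> cut at 111
print(expected_cut(100, 21))  # 111
```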
It has been shown that plant miRs also act by inhibiting mRNA translation (Brodersen et al., 2008; Lanet et al., 2009); such targets therefore tend to be overlooked by degradome sequencing. HTS techniques are also being employed for sequencing whole transcriptome pools to identify miR targets in Medicago (Cheung et al., 2006), Zea mays (Emrich et al., 2007) and Arabidopsis (Weber et al., 2007). The combined strategy of sequencing sRNAs and mRNAs (transcriptome) enabled the identification of new genes involved in nitrate regulation and the management of carbon and nitrogen metabolism in Arabidopsis. This study identified miR5640 and its target, AtPPC3, leading to the proposition that this NO3−-responsive miR/target pair might be involved in modulating the carbon flux that assimilates nitrate into amino acids (Vidal et al., 2013).
THE INFLUENCE OF BIOINFORMATICS APPROACHES ON microRNA NOMENCLATURE AND ANNOTATION
The in silico approaches have also played a dominant role in the identification of plant miRs and their targets. Advances in molecular and computational approaches have not only resulted in exponential growth in the discovery and study of sRNA biology but have also provided deeper insight into the miR regulatory circuits. At the same time, they have been instrumental in defining and redefining the rules for annotating and naming miRs. A miR registry system was adopted in 2004 to provide a complete and searchable repository of published miRs and a systematic rule so that new miRs can be assigned a distinctive name prior to the publication of their discovery (Griffiths-Jones, 2004). In miRBase, a miR name starts with three initial letters signifying the organism, followed by "miR" and a number, which is simply a sequential numerical identifier based on sequence similarity, trailed by letters denoting the family member (Figure 4). It was later enforced that sequences showing homology within an organism, and identical mature sequences coming from two or more different organisms, should be assigned the same family name (Meyers et al., 2008). Sequences with no similarity to previously reported sequences are considered novel and assigned the next number in the series (Griffiths-Jones, 2004). In miRBase, mtr-miR2592 of Medicago truncatula is the largest miR family, with 66 members, while in rice the largest family is Osa-miR395, with 25 members. The occurrence of more than one mature sequence from the same precursor is designated by a dot followed by an integer at the end (Griffiths-Jones, 2004; Meyers et al., 2008). With the accumulation of HTS data and the experimental validation that both the miR and the miR* of the same precursor can be functional, it was decided to add a suffix of -3p or -5p to indicate whether the mature miR derives from the 3′ or 5′ arm of the stem-loop precursor (Meyers et al., 2008).
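To make the scheme concrete, a small illustrative parser for names of this shape is sketched below. It is deliberately simplified: real miRBase identifiers include further variants, such as numeric locus suffixes for multiple precursors that yield identical mature sequences.

```python
import re

# Matches names such as 'osa-miR395a-5p' or 'ath-miR319b.2'
MIR_NAME = re.compile(
    r"^(?P<organism>[a-z]{3,4})-miR(?P<family>\d+)"
    r"(?P<member>[a-z]*)(?:\.(?P<mature>\d+))?(?:-(?P<arm>[35]p))?$"
)

def parse_mir_name(name: str):
    """Split a (simplified) miR name into its nomenclature fields."""
    m = MIR_NAME.match(name)
    return m.groupdict() if m else None

print(parse_mir_name("osa-miR395a-5p"))
# {'organism': 'osa', 'family': '395', 'member': 'a', 'mature': None, 'arm': '5p'}
```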
The processing of biological information through bioinformatics tools and computational biology methods has become crucial for elucidating complicated biological problems in genomics, proteomics and metabolomics. With the accumulation of huge sRNA sequencing datasets, it is almost impossible to analyze each and every sequence through direct experimental approaches. This has necessitated the use of bioinformatics tools and databases for analyzing and screening the huge datasets in a short time, at minimal cost and without compromising the specificity of the analysis.
The primary criterion for the annotation of plant miRs is the precise excision of a miR/miR* duplex from the stem of a single-stranded, stem-loop precursor. Computational algorithms use this criterion to predict the RNA secondary structure of sequences identified from genomic DNA, transcripts or ESTs. Subsequently, the annotation rules are followed to distinguish a miR from the sRNA pool. The first set of guidelines for miR annotation was based on specific expression and biogenesis criteria. The expression criteria included identification by cloning and/or detection by hybridization, and phylogenetic conservation of the miR sequence. The biogenesis criteria included the presence of a characteristic hairpin-structured precursor transcript, conservation of the precursor secondary structure, and increased accumulation of the precursor when Dicer activity is absent or reduced.
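As a concrete illustration of this fold-and-filter step, a rough Python sketch using the ViennaRNA bindings is given below. The single-stem test and the stability threshold are simplifications for illustration, not the published annotation cut-offs, and the ViennaRNA Python package is assumed to be installed.

```python
import RNA  # ViennaRNA Python bindings (assumed installed)

def looks_like_hairpin(seq: str, max_mfe_per_nt: float = -0.2):
    """Crude pre-miR filter: one predominant stem-loop and a stable fold.

    The -0.2 kcal/mol-per-nucleotide threshold is illustrative only.
    """
    structure, mfe = RNA.fold(seq)        # dot-bracket structure and MFE
    paired = structure.replace(".", "")
    single_stem = ")(" not in paired      # all '(' precede all ')'
    stable = mfe / len(seq) <= max_mfe_per_nt
    return single_stem and stable, structure, mfe
```

Candidates passing such a filter would then be checked against the duplex-level annotation rules described below.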
Advances in sequencing technologies provided highly sensitive techniques for obtaining complete small RNA profiles that can distinguish between fragments differing by a single base. This also provided an excellent medium to search for known and novel miR family members, their precursors and their modified versions. Bioinformatics-based analysis of HTS datasets made it feasible to predict the entire set of miRs present in an RNA sample. It was also utilized to retrieve information on expression profiles, putative target transcripts, miR isoforms and sequence variants of miRs through differential expression profiling under various conditions (Moxon et al., 2008; Addo-Quaye et al., 2009; Yang and Li, 2011b; Neilsen et al., 2012). Dedicated web servers such as isomiRex (Sablok et al., 2013) are available online for the identification of sequence variants from HTS data.
With developments in computational tools and the availability of genomic sequences, the rules were further refined to include characteristics that are both necessary and sufficient for miR annotation. It was proposed that the prediction criteria should require that the miR and miR* derive from opposite arms of the same precursor such that they form a duplex with a two-nucleotide overhang at the 3′ end, that the base pairing between miR and miR* contains fewer than four mismatched bases, and that asymmetric bulges are minimal in size and frequency, specifically in the miR/miR* duplex. sRNA-producing stem-loops that violate one of these criteria can still be annotated as miRs, provided that there is conclusive experimental evidence of precise miR/miR* excision (Meyers et al., 2008). In continuation of the guidelines set by Ambros et al. (2003), it was recognized that the conservation of miRs, assessed using either bioinformatics or direct experimentation, remains a powerful indicator of their functional relevance, though it need not be necessary for annotation, as many plant miRs lack homologs in other species. It was further proposed that the identification of a target is not necessary for miR annotation, because targets cannot be predicted for many of the less-conserved miRs, or the predicted targets lack experimental confirmation.
The increased coverage of deep-sequencing experiments has resulted in the capture of sequences of ever-lower abundance, which has made the identification of miRs even more challenging. A number of recent publications have attempted to use additional criteria based on the patterns of mapped reads (Hendrix et al., 2010). The consensus guidelines that have started to emerge give importance to the presence of multiple reads with consistent processing of the 5′ end of the mature sequence, preferably from several independent experiments. The mapped reads should not overlap other annotated transcripts, as such reads may represent fragments of mRNAs or other known RNA types.
Various tools were developed based on these annotation guidelines to analyze HTS datasets. The major steps adopted by the available tools for the prediction of novel miRs and the identification of their targets are outlined in Figure 5.
Basically, the sequenced reads are selected on the basis of the average quality score appended to each base and subjected to 3′ adapter trimming. This can be achieved by writing specific scripts (in languages such as PERL) or by using available tools such as the NGSQC Toolkit (Patel and Jain, 2012), the FASTX-Toolkit (Gordon and Hannon, 2010) or the CLC Genomics Workbench (Matvienko). Next, reads of 18-24 nucleotides are selected and aligned to the genome of the plant species under consideration using tools such as bowtie, SOAP or BWA. The aligned reads are then used to filter out sequences mapping to other sRNA classes such as tRNA, rRNA, snRNA, snoRNA and known miRs. The remaining reads are used to retrieve potential precursors from the reference genome, and their secondary structure is predicted. Software such as Mfold (Zuker, 2003) and RNAfold (Denman, 1993) is freely available and has been useful in identifying appropriately folded structures. The candidate precursors are then evaluated on the basis of the annotation criteria (Meyers et al., 2008). The expression profiles of the identified known and novel miRs are obtained by counting the number of times a unique read occurs in the entire sRNA pool and normalizing against the total reads. Reads Per Million (RPM) for each sequence in each sample is the most common normalization: RPM = (actual read count / total number of reads in the sample) × 1,000,000 (Motameny et al., 2010).
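A condensed sketch of the counting and RPM-normalization step is shown below (assuming adapters have already been trimmed upstream, e.g. with the FASTX-Toolkit; the input file name and one-sequence-per-line format are hypothetical):

```python
from collections import Counter

counts = Counter()
total_reads = 0
with open("trimmed_reads.txt") as handle:   # one read sequence per line
    for line in handle:
        seq = line.strip().upper()
        if not seq:
            continue
        total_reads += 1                    # total reads in the sample
        if 18 <= len(seq) <= 24:            # typical plant sRNA size range
            counts[seq] += 1

# RPM = (actual read count / total number of reads in sample) * 1,000,000
rpm = {seq: n / total_reads * 1_000_000 for seq, n in counts.items()}
```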
MICRORNA REPOSITORIES
The study of miRs and their targets by analyzing sRNA and transcriptome sequences is greatly facilitated by the availability of numerous freely accessible tools and databases, which can be used by experimental researchers without any specialization in bioinformatics. The various web-based tools and databases available for the prediction and analysis of plant miRs and their targets are listed in Tables 1, 2, respectively. Each of these is based on different algorithms and methodologies and has its respective strengths and shortcomings. However, the major limitation of most of these techniques is the requirement for a known sequence and the search for a conserved hairpin-loop structure (Unver et al., 2009). To overcome these limitations, Kadri et al. (2009) developed the Hierarchical Hidden Markov Model (HHMM), which employs region-based structural information of pre-miRs without relying on phylogenetic conservation. It obtains the secondary structures on the basis of minimum free energy and then classifies the sequence with the HHMM (Kadri et al., 2009). Some of the popularly used tools are discussed below.

miRCheck

This is an algorithm written in the form of a PERL script for identifying 20-mers with the potential to encode plant miRs. The tool requires as input putative hairpin sequences and their secondary structures. Candidate 20-mer sequences are then searched for within the hairpin to predict potential plant miRs. This algorithm was first used for identifying conserved miRs in Arabidopsis and rice (Jones-Rhoades and Bartel, 2004).
UEA sRNA Workbench
It is a comprehensive tool for the complete analysis of sRNA sequencing data and provides the facilities of several different tools in one place. Its Graphical User Interface (GUI) makes it easy to use for researchers and does not require any prior knowledge of computer programming (Moxon et al., 2008). It can be downloaded and installed locally, and the same analyses are also freely accessible on the web in the form of the UEA sRNA toolkit. Table 3 lists the tools available in the UEA sRNA Workbench.
TAPIR
This is an online web server for the prediction of plant miR targets. It can characterize miR-target duplexes with large loops, which are usually not detectable by traditional target-prediction tools. The predictions are driven by a combination of two different algorithms: the fast, canonical FASTA local alignment program, which cannot detect duplexes with large numbers of bulges and/or mismatches (Pearson, 2004), and RNAhybrid (Krüger and Rehmsmeier, 2006) for the detection of miR-mRNA duplexes (Bonnet et al., 2010). Although it is a good option for miR target prediction, it is not always preferred because users face problems when analyzing large datasets on the online server.
CLC Genomics Workbench
It is a commercial software package developed by QIAGEN that offers quality check (QC) and pre-processing of NGS data. Although it is a good tool for pre-processing NGS data, it focuses more on other genomic areas such as de novo assembly, and it does not provide a facility for processing sRNA data for miR and target identification. In relation to sRNAs, it has mainly been used in the initial steps of quality filtering, adapter trimming and calculating the abundances of sRNA libraries. It can also generate genome alignments using a standalone BLAST search. The workbench provides interactive visualization for the differential expression and statistical analysis of RNA-Seq and sRNA data.
C-mii
It uses a homology-based approach for plant miR and target identification. The tool aligns known miRs from different plant species to the EST sequences of the query plant species using a blast homology search. The aligned sequences are folded into the characteristic hairpin loop structures to identify putative miRs. The predicted miR sequences are then used for identifying perfect or nearly perfect complementary sites on the input transcript sequences to identify putative targets. The tool has a unique feature of predicting the secondary structures of the miR-target duplexes. The identified targets can be annotated further by searching their functions and Gene Ontologies (GO) (Numnark et al., 2012a). It provides a user-friendly GUI and is easily downloadable, so it can readily be used for analyzing large datasets. However, its major limitation lies in the search for and availability of homologous sequences, so it cannot be used to analyze NGS datasets.
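To make the complementarity search used by homology-based target finders such as C-mii concrete, the following minimal sketch slides a miR along a transcript and counts mismatches against perfect Watson-Crick pairing. This is a simplified stand-in for real scoring schemes, which additionally weight G:U wobbles, bulges, and position-dependent penalties; the sequences are placeholders.

```python
# Minimal sketch of a mismatch-count scan for miR target sites.
# Real tools such as C-mii or TAPIR use more elaborate scoring
# (G:U wobbles, bulges, position-dependent penalties); this only
# counts simple mismatches against perfect complementarity.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def mismatches(mir: str, site: str) -> int:
    """Count positions where the site is not the Watson-Crick
    complement of the miR (miR read 5'->3' against the site 3'->5')."""
    return sum(1 for m, s in zip(mir, reversed(site))
               if COMPLEMENT.get(m) != s)

def scan_targets(mir: str, transcript: str, max_mismatch: int = 3):
    """Yield (position, mismatch_count) for candidate target sites."""
    k = len(mir)
    for i in range(len(transcript) - k + 1):
        mm = mismatches(mir, transcript[i:i + k])
        if mm <= max_mismatch:
            yield i, mm

# Placeholder sequences for illustration only.
mir = "UGACAGAAGAGAGUGAGCAC"            # a 20-nt miR-like string
transcript = "AAGUGCUCACUCUCUUCUGUCAGG"  # contains a near-perfect site

for pos, mm in scan_targets(mir, transcript):
    print(f"site at {pos}, mismatches = {mm}")
```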
Table 3. Tools available in the UEA sRNA Workbench.

Tool | Function | Reference
Adapter removal | Removes the adapter sequence | Moxon et al., 2008
Filter | Filters already annotated sRNAs (rRNA, tRNA, snRNA, snoRNA, miRNA, etc.) | Moxon et al., 2008
Sequence alignment | Allows alignment of short reads to the genome | Moxon et al., 2008
CoLIde | Defines a locus as a combination of regions sharing the same expression profiles, present in close proximity on the genome | Mohorianu et al., 2013
miRCat | Predicts miRs from HTS data without requiring the precursor sequence | Moxon et al., 2008
miRProf | Determines normalized expression levels of sRNAs matching known miRs in miRBase | Moxon et al., 2008
PAREsnip | Finds targets of sRNAs using degradome data | Folkes et al., 2012
SiLoCo | Compares expression patterns of sRNA loci among different samples | Moxon et al., 2008
ta-si Prediction | Predicts trans-acting siRNAs by identifying the 21-nt characteristics of ta-siRNA loci, using an sRNA dataset and the respective genome | Moxon et al., 2008
RNA Folding/annotation | Predicts the secondary structure of RNA sequences and annotates it by highlighting up to 14 comma-separated short sequences | Moxon et al., 2008
VisSR | Used for sequence visualization | Moxon et al., 2008

miRdeep-P
It is a collection of PERL scripts used for the prediction of novel miRs from deep sequencing data. It was developed by incorporating plant-miR-specific criteria into miRDeep (Friedländer et al., 2008). Its pipeline utilizes bowtie for sequence alignments and RNAfold for secondary structure prediction of putative precursors. The remaining steps, such as extracting potential precursor sequences and identifying putative novel miRs, are handled by specific scripts (Yang and Li, 2011a).
Although it is a specialized tool for the identification of plant miRs, it does not have a GUI. The user therefore needs to execute it through the command line, which requires knowledge of PERL scripting.
CleaveLand
It is a general pipeline, available as a combination of PERL scripts, for detecting miR-cleaved target transcripts from degradome datasets (Addo-Quaye et al., 2009). It can be executed with a single command and requires input of degradome sequences, sRNAs, and an mRNA database to yield an output of cleaved targets. The pipeline runs in command-line mode and requires the co-installation of several dependencies such as PERL, R, samtools, bowtie, and RNAplex.
ARMOUR
The accumulation of sequencing data has generated the need for a comprehensive and integrated database of miR:mRNA interactions, expression profiles, and target information. Our group has developed the ARMOUR database (A Rice miRNA:mRNA Interaction Resource), which consolidates extensive datasets of rice miRs from various deep sequencing experiments for examining expression changes with respect to their targets. The development of such interactomes for different plant species will provide a valuable tool for biologists selecting miRs for further functional studies.
PERSPECTIVES
miRs are an extensive class of endogenous, small regulators of gene expression in numerous developmental and signaling pathways. There is ample evidence for the role of miRs in abiotic stress-mediated genomic changes that result in attenuation of plant growth and development. Different experimental approaches have identified the intriguing expression profiles of miRs in distinctive tissues and/or stages of development. The regulation of miR expression also varies between domesticated plant species and their wild relatives. Sequence-based profiling along with computational analysis has played a pivotal role in the identification of stress-responsive miRs, although these results require independent experimental validation. sRNA blot and RT-PCR analyses have played an equally important part in systematically confirming the profiling data. The identification of putative targets for these miRs has provided robust confirmation of their stress responsiveness. This has also enabled quantification of their effect on genetic networks, such that many of the stress-regulated miRs have emerged as potential candidates for improving plant performance under stress. However, many efforts are still required for in-depth analysis of the miR modulation of each gene product induced by abiotic stress(es) and its interacting partners. This requires the development of reliable and rigorous assays for firm characterization of the spatiotemporal regulation of these miRs under stress conditions. The potential of computational biology needs to be tapped for performing an extensive comparison of miR expression profiles among agriculturally important crops during environmental stress, in order to identify key target nodes that need to be modulated for improving crop tolerance. The development and integration of plant synthetic biology tools and approaches will add new functionalities and perspectives to miR biology, making miRs relevant for genetic engineering programs aimed at enhancing abiotic stress tolerance.
ACKNOWLEDGMENTS
There is a vast literature on miRs, so we offer our apologies to researchers whose work could not be cited here. The research in our lab is supported through different grants from the Department of Biotechnology (DBT), Government of India.
"year": 2015,
"sha1": "c31ee9e770a0fdf2b7bcf0ee932f95c3d41f529f",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2015.00286/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c31ee9e770a0fdf2b7bcf0ee932f95c3d41f529f",
"s2fieldsofstudy": [
"Biology",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
The Relationship between Emotional Regulation and School Burnout: Structural Equation Model According to Dedication to Tutoring
School burnout constitutes a current phenomenon which generates diverse negative consequences in the personal and academic lives of students. Given this situation, it is necessary to develop actions that permit the regulation of this harmful mental state and that are administered from within the school context. A descriptive and cross-sectional study is presented that pursues the objective of examining a structural equation model which brings together burnout and emotional regulation. The model takes into account whether students receive tutoring at school in order to tackle these types of problems. For this, the sample constituted a total of 569 students from the province of Granada (males = 52.3% (n = 298); females = 47.7% (n = 271)). Mean age was reported as 10.39 ± 0.95 years, and the School Burnout Inventory (SBI) and the Emotional Regulation Scale were utilized as the principal instruments. As the main findings, it was observed that students who received one hour of weekly tutoring showed a positive relationship between expressive suppression as a strategy of emotional regulation and cynicism and exhaustion as consequences of school burnout. In the same way, a direct association existed between burnout-related exhaustion and cognitive repair. Given that significant relationships could not be observed between these variables in students who do not receive tutoring, higher use of emotional regulation was confirmed amongst tutored students when faced with this negative mental state.
Introduction
Burnout has traditionally been treated according to economic and social factors, without considering the underlying aspects that relate to individuals themselves [1,2]. Nevertheless, scientific studies have directly linked burnout to personal components, specifically, to the individual's ability to regulate their emotions [3,4]. In this way, this capacity will clearly determine the way in which individuals face certain situations.
These emotional competencies must be integrated into educational processes, not only as a fundamental element in the development of the individual's personality but as a way of optimizing the process itself and achieving a high level of training efficiency that avoids elements that can distort personality or cause burnout [5,6]. In this way, this integration must be facilitated from contexts such as guidance and the personal tutoring of students [7,8].
In this sense, burnout is an individual psychosocial state characterized by mental and emotional exhaustion, cynicism or dehumanization, lack of motivation, and the sensation of low personal achievement, produced as a result of overwhelming and chronic work stress [9][10][11][12].

In general, interest in emotional regulation has increased over recent years, given that it is linked to wellbeing, health, personal development, performance, empathy, and interpersonal relationships [35,37]. Ultimately, the management or control of one's own emotions is fundamental for emotional wellbeing and healthy human functioning [17].
Specifically during childhood and adolescence, appropriate emotional control that enables inappropriate impulses to be inhibited, redirects behavior in a constructive way, and adapts to the situation, is key to success in interpersonal relationships, overcoming complex situations, goal achievement and, essentially, psychological and emotional wellbeing [34].
Research studies that relate these constructs have mostly been conducted within the work setting, particularly within those professions that typically present higher levels of burnout or a greater risk of suffering from it. Concretely, the teaching body represents the group most investigated in the burnout literature given that it presents record levels of exhaustion and fatigue [17].
These investigations have established that emotional regulation plays a central role in burnout [38] and that it is important to reflect upon the multiple strategies employed in the process of self-regulation that are directed towards controlling and encouraging emotional wellbeing [17]. In this sense, individuals who demonstrate a greater emotional regulation ability (ERA) remain open to both positive and negative emotions. They are able to recognize the value of emotions as a function of the situation and commit to them, or move away from them, depending on their usefulness. Further, they have the ability to discern which strategy is most appropriate when dealing with a specific emotion [29].
As a result, emotional regulation acts to counter burnout [39]. Ultimately, emotional regulation and, specifically, emotional regulation ability (ERA) are key factors in burnout [29,40,41]. In this sense, different authors [17,29,39], based on the results of their work, highlight the relevance of pushing for intervention programs targeting the development of social and emotional skills and emotional regulation strategies.
Further, "results demonstrate that school burnout is positively associated with emotional deregulation" in adolescents [42] (p. 18), specifically in an "emerging adults" sample (M = 19) in the southeastern United States. In fact, in a study carried out with Spanish secondary school students between 12 and 18 years, a significant negative relationship has been found to exist between the emotional intelligence sub-scales, between them, emotional regulation, and school burnout [31].
The review of these studies makes it possible to establish the importance of also promoting the development of programs and activities within the educational setting. These actions should be directed towards working on emotional skills in the classroom, emphasizing the capacity to regulate emotions in different situations by putting strategies into action that are adapted to these very emotions. Developing the ERA of students will decrease the likelihood of them suffering school burnout, whilst at the same time favoring their wellbeing and academic success. These interventions should be proposed from a multidisciplinary and transversal standpoint, an approach that is typically taken in school tutoring [7,43].
This new model of tutoring [8,44], which includes teachers, families, and students as active agents not only at school but also in the family and social context, is usually called Tutorial Action [39,45,46]. Above all, it transcends the traditional contents of academic guidance, focused on orientation for studies, and includes guidance for the development of the student's personality (on topics such as self-esteem, self-concept, and emotional intelligence); for the improvement of learning [47,48] (on the use of appropriate learning strategies and study techniques); and for the development of social and professional skills (communication, social responsibility, and sustainable knowledge transfer) [49].
To this end, the present study pursues the following objectives: • To develop a structural equation model that enables associations between school burnout, emotional regulation, and the age of schoolchildren to be defined.
• To compare existing associations between the variables that compose the path model, according to treatment during classroom tutoring, through a multigroup analysis.
Design and Participants
The present study describes a nonexperimental ex post-facto study design that is descriptive and cross-sectional in nature and conducted amongst schoolchildren from the province of Granada (Spain). A single measurement was conducted within a single group, composed of a total sample of 569 schoolchildren belonging to eight public centers, which are located in different areas of the province of Granada with diverse socio-economic contexts. Participants were attending the second or third year of primary education, with age being reported between 8 and 13 years (M = 10.39; SD = 0.95). The sample consisted of 52.3% (n = 298) males and 47.7% (n = 271) females. The study sample was recruited using a multistage process. Educational centers were selected intentionally as they already had a pre-existing collaborative relationship with the research team. They were informed of the study aims, procedures, and instruments for collecting information and handling data, and a sample of volunteers from the School Council were invited to participate. The collaboration of natural-class groups was requested, whilst adhering to legal requirements around the participation of schoolchildren in research. The following selection criterion was determined: (a) individuals who were enrolled in the fourth, fifth, or sixth grade of primary education during the 2018/2019 school year at selected schools in the province of Granada. The following exclusion criterion was established: (b) individuals who did not possess established legal authorizations or who had any impediment that prevented them from completing any of the instruments proposed for data collection.
Instruments
The following instruments were employed for the measurement of the proposed variables: • School Burnout Inventory, developed in 2009 [24] and validated in Spanish in 2013 [5]. This scale is composed of nine items (e.g., "1. I feel that I am not able to complete all of my school work"), which are rated along a Likert scale with five response options (1 = completely disagree; 5 = completely agree). This questionnaire groups school burnout according to three dimensions: exhaustion (items 1, 4, 7, and 9), cynicism (items 2, 5, and 6), and inefficacy (items 3 and 8). This instrument presented adequate internal consistency with this specific sample, with a Cronbach alpha of α = 0.746. • Emotional Regulation Questionnaire, developed in 2003 [50] and validated in Spanish in 2016 [6]. This scale is composed of 10 items (e.g., "1. I keep my emotions to myself"), which are rated along a Likert scale with seven response options (1 = completely disagree; 7 = completely agree). This questionnaire groups emotional regulation into two dimensions: (1) expressive suppression, which is associated with the nature of the emotional response and implies a decrease in the expressive behavior of the emotion being felt (items 1, 2, 3, and 4), and (2) cognitive reevaluation, which is related to the way a subject cognitively interprets the information they experience, modulating its emotional meaning, and to their ability to cope with it (items 5, 6, 7, 8, and 9). The instrument revealed acceptable, although relatively low, internal consistency with this specific sample; the value of Cronbach alpha was α = 0.633. (A minimal sketch of how these subscale scores can be computed appears after this list.)
• Self-registration sheet. An "ad hoc" questionnaire was attached with variables that were socio-demographic in nature and related to the tutoring approach. It considered sex (male/female); age; school year (3rd/4th/5th/6th); school/institute; and whether one hour a week is dedicated to tutoring (yes/no).
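As referenced above, the sketch below illustrates how the subscale scores of the two instruments can be computed from raw item responses with pandas. The item-to-dimension mapping follows the descriptions in this section; the data frame is fabricated, and summing items per dimension is an assumption, since the exact scoring convention is not stated in the paper.

```python
# Minimal sketch: subscale scores for the School Burnout Inventory
# (SBI) and the Emotional Regulation Questionnaire (ERQ). Item
# groupings follow the text above; summing items per dimension is an
# assumption (mean scores are an equally common choice).
import pandas as pd

# Fabricated responses: sbi1..sbi9 on a 1-5 scale, erq1..erq10 on 1-7.
df = pd.DataFrame({
    **{f"sbi{i}": [3, 2, 4] for i in range(1, 10)},
    **{f"erq{i}": [5, 4, 2] for i in range(1, 11)},
})

subscales = {
    "exhaustion": ["sbi1", "sbi4", "sbi7", "sbi9"],
    "cynicism": ["sbi2", "sbi5", "sbi6"],
    "inefficacy": ["sbi3", "sbi8"],
    "expressive_suppression": ["erq1", "erq2", "erq3", "erq4"],
    "cognitive_reevaluation": ["erq5", "erq6", "erq7", "erq8", "erq9"],
}

for name, items in subscales.items():
    df[name] = df[items].sum(axis=1)

print(df[list(subscales)])
```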
Procedure
The research team informed the educational centers of the study aims, measurement instruments, and how the data would be handled following collection. Once the permissions granting access to the research setting were obtained, the instruments were administered in the presence of a research interviewer. The purpose of this was to ensure correct application of the scales and to resolve any doubts that might emerge during data completion. Data were collected and their quality was confirmed, whilst ensuring throughout that the process conformed to the ethical principles for research defined in the Declaration of Helsinki in 1975 and later updated in Brazil in 2013 [51].
Data Analysis
Data analysis was conducted using the software IBM SPSS ® for Windows, version 23.0. Frequencies and means were employed for the basic descriptive analysis. Likewise, Cronbach's coefficient was utilized in order to determine the internal consistency of the instruments, fixing the reliability index at 95%. Multi-group analysis through structural equations (SEM) was carried out using the software IBM AMOS ® version 23.0 (International Business Machines Corporation, Armonk, NY, USA). SEM was utilized in order to determine the relationships between the variables that constitute the theoretical model (Figure 1) for both groups (schoolchildren enrolled in formative processes that involve a commitment to tutoring for their emotional development, and schoolchildren enrolled in training processes without a commitment to tutoring specifically targeting these aspects). The SEM was constituted by six exogenous variables, and the model provided explanations of these variables through observed associations. Observed variables are those that present an error term, represented using a circle, whilst latent variables do not present error terms and employ bidirectional arrows. In this way, the latent variables were Burnout-Exhaustion (B-EXH), Burnout-Cynicism, Burnout-Inefficacy, Expressive Suppression, Cognitive Repair, and Age. Bidirectional arrows show the associations between the latent variables (covariances), whilst the unidirectional arrows show the associations between the observable variables and their associated error terms, which are interpreted as multivariate regression coefficients. Prediction errors relate the observable and endogenous variables to the model. Likewise, the maximum likelihood method (ML) was employed to estimate the associations between variables, as it is consistent and invariant to scale type.
With the purpose of determining the compatibility of the SEM with the empirical information obtained, different indices were employed to determine the fit of the theoretical model. Nonsignificant values should be obtained for the p-value; however, other fit indices should also be employed, as this statistic is highly sensitive to the effects of sample size [52]. Amongst these other indices, the Comparative Fit Index (CFI), Incremental Fit Index (IFI), Normed Fit Index (NFI), and the Tucker-Lewis Index (TLI) were utilized. For these, values higher than 0.90 indicate an acceptable fit and values higher than 0.95 indicate excellent fit. Further, the Root Mean Square Error of Approximation (RMSEA) was used, with acceptable fit being determined by values lower than 0.08 and excellent fit by values lower than 0.05.
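For readers who wish to reproduce this kind of fit assessment with open-source software rather than AMOS, the sketch below specifies a simplified stand-in for the path model with the semopy package and retrieves the chi-squared, CFI, TLI, and RMSEA statistics. The model string and variable names are placeholders, the data file is hypothetical, and semopy's API is assumed from its public documentation.

```python
# Minimal sketch: SEM fit indices with the open-source semopy package.
# The model below is a simplified stand-in for the full path model;
# variable names are placeholders and the CSV file is hypothetical.
import pandas as pd
import semopy

model_desc = """
expressive_suppression ~ age
cognitive_reevaluation ~ age
exhaustion ~~ cynicism
exhaustion ~~ inefficacy
cynicism ~~ inefficacy
expressive_suppression ~~ exhaustion
expressive_suppression ~~ cynicism
cognitive_reevaluation ~~ exhaustion
"""

df = pd.read_csv("schoolchildren_scores.csv")  # hypothetical data file

model = semopy.Model(model_desc)
model.fit(df)

stats = semopy.calc_stats(model)  # one-row DataFrame of fit statistics
print(stats[["chi2", "CFI", "TLI", "RMSEA"]])
```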
Results
The structural model developed showed good fit indices for the multi-group analysis. The chi-squared test revealed a non-significant value (χ2 = 7.389; df = 8; p = 0.495), indicating acceptable fit. Given the sensitivity of this statistic to sample size, the relevance of using other standardized fit indices is noted [52]. In this way, the NFI obtained a value of 0.976, the IFI revealed a value of 0.982, and the CFI produced a value of 0.986, all of these representing excellent fit. Likewise, the RMSEA obtained a value of 0.036, this also being excellent and demonstrating an appropriate level of adjustment of the SEM. Table 1 and Figure 2 show the regression weights and standardized regression weights for the structural model developed within schoolchildren who dedicate one hour a week to work on questions linked to tutoring (academic, personal, and emotional). This enabled determination of the associations between school burnout, emotional regulation, and age. Statistically significant associations are shown (p < 0.005) at the first level of the model between the three dimensions of school burnout, these all being positive and direct, with the strength of the relationship as indicated by the regression weight being as follows: cynicism and inefficacy (b = 0.561), exhaustion and cynicism (b = 0.527), and exhaustion and inefficacy (b = 0.467).
Following this, associations were shown between the dimensions of school burnout and emotional regulation. The strongest association is observed for the relationship between exhaustion and expressive suppression (p < 0.005; b = 0.231), followed by the association between cynicism and expressive suppression (p < 0.01; b = 0.161) and finally, exhaustion and cognitive repair (p < 0.01; b = 0.149). Statistically significant differences were not observed at this level for any other associations.
Finally, the relationship between both dimensions of emotional regulation is shown, with statistically significant differences being revealed (p < 0.01; b = 0.165). Likewise, expressive suppression was negatively associated with age (p < 0.05; b = −0.096). Figure 3 shows the regression weights and standardized regression weights of the structural model developed for schoolchildren who did not have an hour each week dedicated to work on questions linked to tutoring. In this way, relationships can be determined between school burnout, emotional regulation, and age within participants who do not tackle academic, personal, or emotional questions with a tutor. Statistically significant associations (p < 0.005) are shown at the first level of the model between the three dimensions of school burnout. All relationships are positive and direct, with the strongest to weakest regression weights being as follows: cynicism and inefficacy (b = 0.509), exhaustion and inefficacy (b = 0.407), and exhaustion and cynicism (b = 0.372).
For this model, only one statistically significant association was observed between school burnout and emotional regulation, this being evident in the dimensions of inefficacy and cognitive repair (p < 0.05; b = 0.225). Age was also associated with emotional regulation, whilst statistically significant differences were also observed between expressive suppression and cognitive repair (p < 0.05; b = 0.230).
Discussion
The present research work sought to compare an explanatory model of existing relationships between burnout and emotional regulation in a sample of Spanish schoolchildren. It followed the line of research presented by similar studies which have approached wellbeing at school and emotional education [31,39,42]. Likewise, a multi-group analysis was carried out with the purpose of identifying existing differences in the relationships between these variables, considering dedication to tutoring in the educational context. Concretely, it considered the relevance of tutoring for ensuring the integral development of students, across its academic, socio-affective, and cognitive dimensions [53]. To this end, the development of tutoring is presented as an essential element that encapsulates a set of actions planned by various professionals from the educational context and coordinated by the tutor. These actions may be able to help in the treatment of school burnout [39,54].
In reviewing the first level of the structural model, a positive relationship can be observed between the dimensions of burnout (exhaustion, cynicism, and inefficacy) in the two analyzed groups. The strength of association was moderate in both schoolchildren receiving dedicated tutoring time and those who were not benefiting from this process of guidance and orientation. These findings confirm that a consistent inter-relationship exists amongst the variables that form the construct of interest, providing a theoretical model of burnout that can be justified by the inter-relationship evident in the three factors that compose school burnout [55]. In fact, this state of physical and mental exhaustion, alongside a lack of motivation, is linked to negative conducts at school. Such behaviors can include lack of approval of the processes taking place in school, stress, and low perception of one's own ability to meet educational requirements [5]. It seems to be the case that once one finds themselves in this mental state, an inter-relationship is produced between exhaustion and inefficacy, independent of the actions introduced to tackle it [21,56].
The second level of the model tackles the relationship between school burnout and emotional regulation. Specifically, it was observed that pupils who receive one hour of tutoring (the development of emotional competences is considered one of the fundamental contents in new models of tutoring attention) demonstrated a positive relationship between the cynicism generated by burnout and expressive suppression (the attempt to eliminate negative thoughts), without this relationship being seen in schoolchildren who do not receive tutoring. Tutoring and psychopedagogical orientation constitute a basic means of emotional education, because tutoring enables specialized attention to be paid to students in their personal and social ambits. This helps students to understand and regulate their own emotions, supporting the development of their wellbeing and the configuration of their attitudes towards school [8,43,57]. These premises cement the idea that students who are appropriately tutored will better understand their emotions and possess regulation strategies. In this way, they can move to eliminate negative thoughts by putting strategies for expressive suppression into place before burnout-associated cynicism even appears [29,58].
Along similar lines, it was observed that schoolchildren who receive one hour of tutoring show a positive relationship between the exhaustion generated by burnout and expressive suppression, as was also the case with cynicism. Likewise, a positive and direct relationship existed between burnout-associated exhaustion and cognitive repair (the generation of positive thoughts when faced with adverse situations). Emotional education provided through educational orientation in tutorials will help to prevent learning and adaptation difficulties [59,60]. This, in addition to helping to develop skills for school and social life, makes it a stand-out element in burnout prevention. This could justify the fact that students who present burnout symptoms and receive tutoring are those who also possess more strategies for emotional regulation. Concretely, tutoring constitutes a planned, systematic, and proactive process that permits the prevention of diverse risk factors and the treatment of a multitude of problems, amongst which burnout is found [7].
Ultimately, tutoring describes a continuous process that enables the tutor to identify stressful agents linked to the state of burnout [61]. Tutoring actions permit groups of teaching staff to modify their practice in order to favor the generation of a positive perception of school amongst students, eliminating the negative elements and actions that lead to feelings of inefficacy and exhaustion [17,62]. Further, building on the aforementioned conclusion, the tutor will be able to provide students with coping strategies and emotional regulation, meaning the problem can be approached in a bidirectional way. In this way, the development of tutoring is understood as an educational process that pursues the integral development of students [7]. It will help in the prevention and treatment of this problem, which has expanded in the 21st century and is associated with other negative consequences such as poor academic performance, school anxiety, and depression [63,64].
The third level of the model reveals an association between the two dimensions of emotional regulation and age. Concretely, it suggests that schoolchildren who do not receive tutoring show a direct and strong relationship between expressive suppression and cognitive repair. Expressive suppression, whilst being an emotional regulation strategy, has fewer positive connotations than cognitive repair [6]. In this way, the predominance of the latter within schoolchildren receiving tutoring is logical and explains the weak relationship between both dimensions in this group. In this sense, actions realized by the tutor will help in the construction of essential psychosocial factors and the development of positive character traits which will help prevent risk behaviors [59]. Finally, with regard to age, it is highlighted that this was inversely related to expressive suppression within students who received tutoring, although the regression weight was relatively weak. Specifically, age can represent a relevant factor in emotional regulation, in that previous experience will favor the learning of cognitive repair as a strategy that predominates over expressive suppression [65].
Finally, it is interesting to identify some of the main limitations of the present study. Firstly, the nature of the study should be highlighted: as the study was nonexperimental, it does not permit causal relationships to be established between variables, although this type of study is highly efficient for understanding the state of the question. Another limitation resides in the sample used, as probabilistic sampling was not employed and the sample was not representative of the population of Granada. This being said, it should be noted that the number of participants was large, and the selection of natural groups in the educational context implies a degree of randomness, as has been stated in the literature [66]. On the other hand, using only self-report measures entails the risk that participants omit information or misunderstand an item. As a final limitation, the instruments administered should be highlighted: whilst they were validated within schoolchildren and adolescents, high internal consistency was not obtained. Thus, the need for future studies to adapt the scales employed here is proposed.
Conclusions
As the main findings, the structural equation model proposed revealed that schoolchildren who receive one hour a week of tutoring show a positive relationship between expressive suppression as a strategy of emotional regulation and cynicism and exhaustion as consequences of school burnout. In the same way, a direct relationship existed between exhaustion linked to burnout and cognitive repair. Given that significant relationships could not be observed between these variables for schoolchildren who were not receiving tutoring, a greater use of emotional regulation is noted within tutored schoolchildren when faced with this negative mental state. It should be pointed out that tutoring is an educational dimension that has often been considered anachronistic, poorly founded, and rarely carried out in a coordinated manner. For this reason, a reconceptualization and transformation of its practice is required [7]: training as a way to develop personality, in which the improvement and development of emotional competences should be included. In the same way, tutorials involve a large number of actions coordinated by the tutor, in which diverse agents from the educational community are incorporated in order to implement a planned and structured process based on providing help and support to students to strengthen their integral development [67]. In addition, tutoring should be worked on in an integrated or infusive way [68]. The actions performed by the tutor make up new tasks that are embedded within a new contemporary teaching profile, more focused on paying attention to the student than on the subject matter or content being studied. Guidance, tutoring, or mentoring can be considered the ideal educational setting from which to tackle the new educational matters of personal development and emotion management. Further, tutoring offers an appropriate strategy to introduce these aspects transversally into other school subjects and environments. Ultimately, all of these aspects make up specific actions upon which new models of educational attention can be developed. This is supported by the present work, which specifically demonstrates the link between the management of emotions at different educational levels and school burnout prevention. Generally, the findings allow new intervention variables to be delimited and considered for the improvement and development of higher quality education.
"year": 2019,
"sha1": "aa31708cff8afd854a0d7c332d2ed1b7a612d542",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/16/23/4703/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5bf92d9dbabc3f1bfb07fb85b62fa33e52ce57f9",
"s2fieldsofstudy": [
"Education",
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Update on the Use of Artificial Intelligence in Hepatobiliary MR Imaging
The application of machine learning (ML) and deep learning (DL) in radiology has expanded exponentially. In recent years, an extremely large number of studies have been reported in the hepatobiliary domain. Their applications range from differential diagnosis to the diagnosis of tumor invasion and the prediction of treatment response and prognosis. Moreover, DL has been utilized to improve image quality through DL reconstruction. However, most clinicians are not familiar with ML and DL, and previous studies about these concepts are relatively challenging to understand. In this review article, we aimed to explain the concepts behind ML and DL and to summarize recent achievements in their use in the hepatobiliary region.
Introduction
In recent years, machine learning (ML), including deep learning (DL), has been applied extensively in the field of radiology, 1,2 primarily for diagnostic imaging 3,4 and image quality improvement. 5-7 However, only a few radiologists are familiar with these techniques, which are rarely used in clinical practice or research by conventional radiologists. In this review article, we aimed to assess the application of ML and DL in diagnostic imaging and image quality improvement in the hepatobiliary region. Figure 1 shows the association among artificial intelligence (AI), ML, and DL. AI is the most comprehensive concept, referring to human intelligence replicated by a computer; it is a computerized reproduction of part or all of human intelligence. ML is a type of AI that specializes in data analysis and prediction and that has functions similar to human learning on a computer. 2,8 "Radiomics," which has been actively studied in the field of radiology in recent years, is a field of ML that uses images as data. 9 DL is one of the ML methods that has been applied to various fields in recent years due to its high performance when a large amount of training data is available.
Basics of Machine Learning and Radiomics
ML is a type of AI: it analyzes and predicts by learning from training data, finding patterns hidden in the data, and applying them to new data. It shares some techniques with statistical analysis; statistics aims to draw conclusions from data, whereas ML aims to obtain accurate predictions. Computers can learn the rules of classification with ML, and classification accuracy can be improved by devising different methods. 8 However, to perform these classifications successfully, one should remember the basic premise that there must be an association between the features and the matter to be classified.
Since ML can achieve the best outcomes based on the trend of features, several methods are used. 10 The k-nearest neighbor algorithm estimates the test data using the training data. 11 Decision trees create a type of flowchart by learning simple conditions from the training data. Since the conditions cannot be combined across multiple features, the accuracy is relatively low; however, decision trees are advantageous in calculating the importance of each feature and are easy to understand. 12 The support vector machine projects data into high-dimensional spaces and maximizes the distance between each point and the decision boundary. This method is precise and can help prevent overlearning. 13 Ensemble learning (e.g., random forests) is an ML technique that combines multiple simple MLs. Because base learners such as decision trees have a low computational load, combining many of them does not require a lengthy computation time. This method has several advantages: it is extremely precise and, in several instances, can calculate the significance of each feature. 14 The data used in ML for the assessment of the hepatobiliary region are often numerical representations of images, commonly referred to as radiomics. 9,15 Originally, the term -omics was a compound of the suffix -ics, which means study, and the suffix -ome, which means all or complete. It refers to the science of systematically handling large amounts of information. Radiomics, a tool for clinical decision-making and precision medicine, has attracted significant attention in the field of oncology. 16 Typical numerical data for radiomics include morphological, histogram, and texture features.
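As a concrete illustration of ensemble learning, the following minimal sketch trains a random forest with scikit-learn on a synthetic "radiomics-like" feature table and reads off per-feature importances, the advantage noted above. All data are randomly generated, so the numbers themselves are meaningless; only the workflow is the point.

```python
# Minimal sketch: random forest on synthetic "radiomics-like" features.
# Synthetic data only; illustrates the workflow, not a real study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))               # 5 fabricated features per lesion
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")

# Random forests expose per-feature importances, one advantage of
# ensemble methods mentioned in the text. Feature names are invented.
for name, imp in zip(["size", "mean_ADC", "skewness", "kurtosis", "entropy"],
                     clf.feature_importances_):
    print(f"{name:10s} importance = {imp:.3f}")
```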
The conventional morphological characteristics of tumors are size, volume, shape, etc. In contrast to the ocular evaluation of tumor morphology conducted by radiologists, radiomics expresses morphologic aspects as statistical values. In the diagnosis of lymph node metastasis, for example, not only size but also elongation is useful. How round a lymph node is can be expressed as a quantitative value, referred to as roundness, calculated from the aspect ratio of the lymph node.
In conventional studies, signal analysis is often performed based on the average signal; for example, tumors with low average apparent diffusion coefficient (ADC) values are considered to have a high malignant potential. However, in cases of tumor necrosis, the ADC value of the necrotic area is high, and the low ADC value of the viable tumor may not be fully reflected in the average ADC value. In fact, the minimum ADC value is more useful than the average ADC value in evaluating glioma grade. 17,18 To address these problems, histogram and texture analyses were introduced for radiomics analysis.
A histogram, a figure that shows the frequency of pixels in relation to their values, can resolve the limitations of discrimination based on the average signal. Histogram analysis can represent the overall human impression of tumor signals, such as amplitude, dispersion, asymmetry, peaking, flattening, and randomness, as quantitative values such as the mean, standard deviation, skewness, kurtosis, and entropy of gray-level pixel values. Histogram analysis can thus bring information quantification closer to visual assessment. Nevertheless, histogram features, which solely describe the distribution pattern of gray-level pixel values over a whole tumor, cannot account for the spatial relationship between pixels. The spatial relationship between each pixel and its neighbors is described by textural features, which are an important part of radiomics features.
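The histogram features listed above can be computed in a few lines; the sketch below derives the mean, standard deviation, skewness, kurtosis, and gray-level entropy of a fabricated region of interest with NumPy and SciPy:

```python
# Minimal sketch: first-order (histogram) radiomics features of a
# region of interest (ROI). The pixel values are fabricated; in
# practice they would be the voxel values inside a segmented tumor.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
roi = rng.normal(loc=100.0, scale=15.0, size=1000)  # fake gray levels

counts, _ = np.histogram(roi, bins=64)
p = counts / counts.sum()
p = p[p > 0]

features = {
    "mean": roi.mean(),
    "std": roi.std(),
    "skewness": stats.skew(roi),
    "kurtosis": stats.kurtosis(roi),
    "entropy": float(-(p * np.log2(p)).sum()),  # Shannon entropy
}
for name, value in features.items():
    print(f"{name:9s}= {value:8.3f}")
```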
Texture originally means the weave pattern of a fabric; in image diagnosis, however, it often refers to the quantification of parameters such as roughness, smoothness, and periodicity. Texture analysis includes several methods such as structural analysis, statistical analysis, model-based analysis, and transformation-based analysis. By quantifying the image pattern, these methods can differentiate images with the same overall gray-level distribution, which are challenging to discriminate via histogram analysis. The gray-level co-occurrence matrix and the gray-level run-length matrix are frequently used in textural research. 19,20
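As an illustration of GLCM-based texture analysis, the sketch below builds a gray-level co-occurrence matrix on a fabricated image with scikit-image and derives common texture properties from it (function names follow recent scikit-image releases):

```python
# Minimal sketch: GLCM texture features with scikit-image.
# The image is random noise, so the values are meaningless; the point
# is the GLCM workflow (offsets, angles, derived properties).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)
img = rng.integers(0, 8, size=(64, 64), dtype=np.uint8)  # 8 gray levels

# Co-occurrence of gray-level pairs at distance 1, in 4 directions.
glcm = graycomatrix(
    img,
    distances=[1],
    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
    levels=8,
    symmetric=True,
    normed=True,
)

for prop in ["contrast", "homogeneity", "energy", "correlation"]:
    # graycoprops returns one value per (distance, angle); average them.
    print(f"{prop:12s}= {graycoprops(glcm, prop).mean():.3f}")
```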
Basics of Deep Learning
Deep learning is a type of ML in which the neurons of humans and animals are simulated in a computer as perceptrons (invented in 1957 21 ), which are connected in a structure chosen according to the application. Figure 2 shows the simple structure of a perceptron and of DL: each perceptron has an extremely simple structure in which the input signals are multiplied by coefficients and summed before being input to the activation function. By connecting several neurons in the intermediate layers before the output, complex relationships can be expressed. 22 In addition, the loss function and optimizer of the neural network should be considered. A neural network predicts a response based on input data; however, its predictions are frequently incorrect. By iteratively adjusting the coefficients (also known as weights) of each perceptron, we can reduce errors and obtain the correct answer. The loss function is utilized to quantify errors; the mean squared error and cross-entropy are common loss functions. Optimizers modify model parameters to minimize error: they make minimal adjustments to the neural network's weights, calculate the error, and iteratively decrease the error to reach the lowest error point. Stochastic Gradient Descent, RMSprop, and Adam are frequently used techniques.
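The sketch below makes the perceptron, loss function, and optimizer concrete with a deliberately tiny PyTorch network: linear layers whose weighted sums pass through activation functions, a cross-entropy loss quantifying the error, and the Adam optimizer iteratively adjusting the weights, as described above. The data are random placeholders.

```python
# Minimal sketch: a small multilayer perceptron in PyTorch showing the
# pieces discussed above: weighted inputs + activation functions, a
# loss (cross-entropy), and an optimizer (Adam) that iteratively
# adjusts the weights. Random data, illustration only.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(128, 10)                 # 128 samples, 10 input features
y = torch.randint(0, 2, (128,))          # binary labels

model = nn.Sequential(
    nn.Linear(10, 16),   # each unit: weighted sum of its inputs ...
    nn.ReLU(),           # ... passed through an activation function
    nn.Linear(16, 2),    # output layer (2 classes)
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # quantify the error
    loss.backward()               # gradients of the loss w.r.t. weights
    optimizer.step()              # small weight adjustment to reduce it

print(f"final training loss: {loss.item():.3f}")
```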
DL uses deep layers of artificial neural networks and has shown promise in several scientific domains. 23 DL has a property referred to as the universal approximation theorem, which states that neural networks with three or more layers (input layer, hidden layer, and output layer) can approximate any function with an accuracy that depends on the number of their weights. 24 Further, DL can be a good technique for reducing dimensionality. Auto-encoders, a type of unsupervised DL, can reduce the number of hidden layer nodes to extract visual features in fewer dimensions. 25

Fig. 1 The schema of the relationship of AI, ML, and DL. AI is a computerized reproduction of part or all of human intelligence. ML is a type of AI that specializes in data analysis and prediction and that has functions similar to human learning on a computer. DL is a kind of ML that utilizes a large amount of data and computing power and is distinguished by its high degree of accuracy. AI, artificial intelligence; DL, deep learning; ML, machine learning.

DL training involves approximating an unknown function that can convert complex input data to output data such as images, texts, music, classes, and numbers (Fig. 3). These data can also be converted to a matrix; hence, processes such as image diagnosis, translation, and text composition can be replaced by a function that converts one matrix into another. However, in several cases, this function cannot be specified directly, so it is approximated with DL using training data. Other ML methods are similar to DL in this respect; nevertheless, DL is characterized by its ability to represent complex relationships, although it requires a large amount of training data. Although there are different varieties of DL, the appropriate structure can be predicted to some extent from the format of the data to be processed; hence, the structure of the neural network is selected based on its suitability for the particular data. 26

The convolutional neural network (CNN) is often used in radiology research and is based on the neocognitron, which was published around 1980 27 and inspired by the structure of the visual cortex in humans and animals. Convolutional and pooling layers are present in the hidden layers. The convolutional layer captures local features, which are then condensed in the pooling layer. While keeping the properties of the input image, these techniques result in a considerable decrease in image information. The reduced image is then used for recognition or classification. AlexNet, 28 GoogLeNet, 29 and ResNet 30 are examples of typical CNNs. These methods are widely used for diagnosis and for the prediction of prognosis and treatment effect in the hepatobiliary region.
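A minimal PyTorch sketch of the convolution-then-pooling pattern just described, followed by classification of the reduced representation, is shown below; the input size and class count are arbitrary placeholders, and this toy network is far smaller than AlexNet-class models.

```python
# Minimal sketch: a small CNN with the convolution -> pooling pattern
# described above, followed by classification of the reduced image.
# Input shape and class count are arbitrary placeholders.
import torch
from torch import nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # capture local features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # condense (64 -> 32)
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # condense (32 -> 16)
        )
        self.classifier = nn.Linear(16 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)              # reduced image, features preserved
        return self.classifier(x.flatten(1))

model = TinyCNN()
dummy = torch.randn(4, 1, 64, 64)         # 4 single-channel 64x64 "slices"
print(model(dummy).shape)                 # -> torch.Size([4, 2])
```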
Deep learning reconstruction (DLR), or denoising, is another major neural network application in the hepatobiliary region and is a type of image-to-image conversion process; it is currently available commercially. U-Net is an example of a DL architecture that transforms images into images, as in segmentation and DLR. Similar to a CNN, U-Net is a symmetrical encoder-decoder structure that uses skip connections to connect the mirrored layers of the encoder and decoder. 31 The encoder down-samples the image to create a feature map and the decoder up-samples it, while the skip connection preserves the original form to some extent. 32 The primary feature of U-Net is that it pioneered a method for joining the feature maps produced by each layer of the encoder to the corresponding feature maps of each layer of the decoder. Several companies use variations of this model for DLR or MRI denoising.
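The encoder-decoder-with-skip-connection pattern that characterizes U-Net can be illustrated with a deliberately tiny, one-level PyTorch sketch: the encoder down-samples the image into a feature map, the decoder up-samples it back, and the skip connection concatenates the mirrored encoder features so the original form is partly preserved. This is a conceptual illustration, not any vendor's reconstruction network.

```python
# Minimal sketch: a one-level U-Net-style encoder-decoder with a skip
# connection, the pattern described above. Deliberately tiny; real
# U-Nets stack several such levels.
import torch
from torch import nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                       # encoder downsampling
        self.bottleneck = nn.Sequential(nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(16, 8, 2, stride=2)  # decoder upsampling
        # After the skip connection the decoder sees 8 + 8 channels.
        self.dec = nn.Sequential(nn.Conv2d(16, 8, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(8, 1, 1)                     # back to an image

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e = self.enc(x)                       # encoder feature map
        b = self.bottleneck(self.down(e))
        d = self.up(b)
        d = torch.cat([d, e], dim=1)          # skip connection (concatenate)
        return self.out(self.dec(d))

net = TinyUNet()
noisy = torch.randn(1, 1, 64, 64)             # placeholder noisy image
print(net(noisy).shape)                       # -> torch.Size([1, 1, 64, 64])
```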
Use of Machine Learning and Deep Learning for Diagnosis via Hepatobiliary Region Segmentation
Segmentation is a frequent preprocessing step in AI systems for assessing liver diseases. It is mainly utilized to evaluate liver function and to delineate liver tumors for subsequent differential diagnosis. Traditionally, segmentation has been performed manually with specialized software tools; however, manual processes are time-consuming. For these reasons, automated processes, for example based on gadoxetic acid-enhanced MRI, have been developed. 33 Segmentation can also be a simple threshold-based process, but in recent years DL or ML has been applied for several purposes. Fehrenbach et al. trained the U-Net architecture using 278 neuroendocrine liver metastases. 34 Results showed high accuracy for segmenting metastases and the liver (Matthew's correlation coefficient: 0.86 and 0.96) in the validation group. Mojtahed et al. reported that a DL-based segmentation technique had high consistency with experienced radiologists for each liver segment volume (± 3.5% of the total liver volume). 35
Diagnosis of Hepatic Tumors
Scoring systems such as the Liver Imaging Reporting and Data System (LI-RADS) are useful in differentiating liver tumors on MRI. 36,37 However, in recent years, radiomics and DL have been applied in this field.
Using the radiomics-based approach, Lewis et al. utilized radiomic characteristics extracted from diffusion-weighted MRI, combined with the LI-RADS category, to classify whether a tumor was hepatocellular carcinoma (HCC) or another primary liver tumor, such as intrahepatic cholangiocarcinoma or combined HCC and cholangiocarcinoma. Through logistic regression analysis, they obtained areas under the curve (AUC) of 0.90 and 0.89 for two radiologists. 38 Zhong et al. assessed the efficacy of MRI-based radiomics analysis in addition to LI-RADS 2018 in distinguishing small (≤ 3 cm) HCCs from benign nodules in liver cirrhosis. 39 Results showed that both LI-RADS (AUC = 0.898) and MRI-based radiomics (AUC = 0.917) had good discrimination, while the combined model had superior classification performance (AUC = 0.975). Oyama et al. evaluated the classification accuracy of hepatic tumors by characterizing T1-weighted images via texture and topology analysis using radiomics models; the accuracy via texture analysis in the classification of HCC and metastatic tumors was 92% (respectively, 90% and 73%). 40 Wu et al. evaluated the performance of a non-contrast-enhanced MRI radiomics signature in predicting HCC grade. 41 Results showed that the AUCs of the clinical factors, radiomics signature, and combined clinical and radiomics signature models for HCC grade prediction were 0.600, 0.742, and 0.800 in the test datasets, respectively. Yang et al. performed a similar study, which revealed that the accuracy of the integrated model using clinical and radiomics features for poorly differentiated HCC was 79.4% and 74.5% in the training and validation cohorts, respectively. 42 Using the DL-based approach, Wu et al. 43 utilized a CNN to classify cropped HCCs as either LR-3 (intermediate probability for HCC) or the combined class LR-4/LR-5 (likely or definite HCC, respectively). This study achieved a classification accuracy of 90% and an AUC of 0.95 with reference to the expert human radiologist report. Hamm et al. reported an algorithm based on multiphasic MRI with an accuracy of 90% for lesion identification and an accuracy of 92% for lesion categorization when using the LI-RADS. 44 Further, the same researchers reported that the DL system identified the correct radiological characteristics present in each test lesion with a positive predictive value of 76.5% and a sensitivity of 82.9%. 45 Oestmann et al. evaluated the diagnostic performance of a CNN for atypical HCC; the sensitivities/specificities of the CNN for HCC and non-HCC lesions were 92.7%/82.0% and 82.0%/92.7%, respectively. 46 Zhou et al. reported that a DL method applied to hepatic diffusion-weighted images using b = 0, b = 100, and b = 600 had the highest accuracy for HCC grading (80%), thereby outperforming the ADC map (72.5%) and the original b0 (65%), b100 (68%), and b600 (70%) images. 47 Kim et al. utilized a CNN to detect the presence of HCC on liver MRI. 48 The accuracy of the CNN for HCC lesion detection was similar to that of a junior radiologist (AUC = 0.9 vs. 0.893) but lower than that of an expert radiologist (AUC = 0.957).
Microvascular Invasion, Treatment Response, and Prognosis
The evaluation of microvascular invasion (MVI) is an extremely important indication for HCC treatment, and there have been several studies using radiomics and DL. Based on gadoxetic acid-enhanced MRI, Feng et al. evaluated the tumor and a 1 cm area around it. 49 Results revealed that the combined intratumoral and peritumoral radiomics model could predict MVI with an AUC of 0.85. Moreover, Yang et al. performed a radiomics analysis based on gadoxetic acid-enhanced MRI and reported that AFP level, non-smooth tumor margins, arterial peritumoral enhancement, and radiomics signatures can be used to predict MVI; the AUC of the prediction model combining clinical and radiomic features was 0.943. 50 Qu et al. evaluated three radiomics models based on arterial phase images, hepatobiliary phase images, and all images of gadoxetic acid-enhanced MRI. 51 Results showed that the radiomics model using all images outperformed the other two; the combined model based on clinical and radiomics analysis showed an AUC of 0.90 and 0.70 in the training and test subsets, respectively. Zhang et al. evaluated a DL model using a 3D-CNN based on T2-weighted images to predict MVI. 52 The results were as follows: training set AUC: 0.81, sensitivity: 69%, and specificity: 79%; validation set AUC: 0.72, sensitivity: 55%, and specificity: 81%. Zeng et al. 53 proposed DL models based on the intra-voxel incoherent motion model of diffusion-weighted images to detect MVI. Liu et al. 54 reported that a combined model using this type of DL, clinical features (α-fetoprotein level and tumor size), and ADC had the best AUC (0.829).
Radiomics and DL in tumor imaging are also extremely important in predicting treatment effect and prognosis. Kong et al. evaluated an MRI-based radiomics model for predicting tumor response to transcatheter arterial chemoembolization in patients with intermediate-advanced HCC. 55 Results showed that the AUCs based on radiomics scores were 0.812 and 0.866 in the training and validation cohorts, respectively. Zhang et al. developed an overall survival prediction model based on DL for patients with HCC treated with transarterial chemoembolization plus sorafenib; the DL model had good prediction, with a C-index of 0.717 in the training set and 0.714 in the validation set. 56 For hepatic metastatic tumors, Daye et al. found that intra-tumor heterogeneity is a survival predictor for individuals with metastatic colorectal cancer. 57 The AUC of the trained random forest ML model that incorporated standard clinical and pathological prognostic factors was 0.83. The expression of programmed cell death-1 (PD-1) and programmed cell death ligand-1 (PD-L1) significantly affects the efficacy of immunotherapy. In one such prediction study, 58 the integrated model had the best prediction performance with an AUC score of 0.897 ± 0.084, followed by the DL-based model with an AUC score of 0.852 ± 0.043 and the radiomics-based model with an AUC score of 0.794 ± 0.035, based on five-fold cross-validation. For intrahepatic cholangiocarcinoma, Zhang et al. used ML based on MRI to identify the efficacy of noninvasive imaging biomarkers in predicting PD-1 and PD-L1 expression and outcome; the highest AUCs of the models in predicting PD-1 and PD-L1 expression were 0.897 and 0.890, respectively. 59
Liver Fibrosis
The assessment of liver fibrosis is another important application of DL in the diagnosis of liver diseases. Yasaka et al. evaluated the efficacy of CNN for the diagnosis of liver fibrosis using 600 gadoxetic-acid-enhanced hepatobiliary phase images and non-imaging variables such as hepatitis B and C and the static magnetic field of MRI. 60 According to their findings, the AUCs for stage F2, F3, and F4 fibrosis were 0.85, 0.84, and 0.84, respectively. Hectors et al. obtained similar results. 61 Park HJ et al. reported the usefulness of radiomics analysis for the diagnosis of liver fibrosis. 62
Deep Learning Denoising or Reconstruction in the Hepatobiliary Region
In the image reconstruction of MRI, the DL is mainly used for denoising images using different methods. DLs are commonly trained as approximate functions that received an image with noise as the input and that produced an image with reduced noise as the output. However, since a simple conversion cannot handle various contrasts and may result in large discrepancies with the original data, various efforts are made by different vendors. For example, Fig. 4 shows the schema of one of the most popular DLRs (Advanced Intelligent Clear-IQ Engine [AiCE]; Canon Medical Systems Tochigi, Japan). 63 If DL learns the noise and image microstructure of a particular sequence, it can find and differentiate a different pattern between these two structures. However, it should also learn contrasts and artifacts that are unique to that sequence and cannot be used for other sequences. This algorithm should learn contrasts and artifacts that are exclusive to this sequence and cannot be applied to other sequences. In contrast, learning a mixture of different contrasts will yield different patterns in the combination of noise and picture microstructure, thereby requiring an extremely large amount of training data to learn the distinctions. AiCE can solve this problem by removing noise from only the high-frequency components of the image, as seen in Fig. 5. Image contrast is predominantly contained in the low-frequency component, and noise is located in the highfrequency component in the k-space. Hence, this technique can use the same process to multiple sequences with different contrasts. Although detailed algorithms have not been disclosed, AIR Recon DL (GE Healthcare, Erlangen, Germany) and Deep Resolve (Siemens Healthcare, Chicago, IL, USA) have similarly applied deep learning to MRI reconstruction. 64,65 In the hepatobiliary region, DLR is useful in MR cholangiopancreatography (MRCP). Tajima et al. evaluated the image quality of conventional respiratory-triggered MRCP and breath-hold MRCP with and without DLR at 1.5T. 66 Results showed that the qualitative measurements of breath-hold MRCP (18s) with DLR were equivalent to or higher than respiratory-triggered MRCP (106-270s). Matsuyama et al. evaluated the use of DLR for intraductal papillary mucinous neoplasm diagnosis for 3D MRCP with different fast scan techniques. 67 They reported that the distribution accuracy of fast scan techniques with DLR for the intraductal papillary mucinous neoplasm was significantly higher than that without DLR. Moreover, DLR is also useful in the thin-slice breath-hold single-shot fast spin echo sequence. Tajima et al. evaluated the usefulness of DLR in image quality and scan time of breath-hold and respiratory gated fast spin echo MRI. 68 Results showed that the image quality, SNR, and contrast ratio of breath-hold images using DLR (scan time: 30 ± 4s) were significantly better than those of breath-triggered images (scan time: 122 ± 25s).
In addition to noise reduction, DL is useful for artifact removal. In the hepatobiliary region, motion artifacts caused by respiration and other factors are reduced. In most studies, artifact reduction is achieved by training the U-net-like DL architecture referred to as the motion artifact reduction method based on the CNN with the artificial artifacts added images as input data and the original images as output data. 69 Tamada et al. showed the usefulness of DL in improving the image quality of dynamic contrastenhanced hepatic MRI, which contains motion artifacts and blurring. 70,71 Conclusion In this article, we described the fundamentals of ML, radiomics, texture analysis, and DL, and recent topics in hepatobiliary MRI. The application of these techniques to hepatobiliary MRI has just started, and several studies will be published in the future. We hope that this review article can provide a better understanding about the use of ML and DL.
Conflicts of Interest
Takeshi Nakaura has received research support from Nemoto Kyorindo Co., Ltd. Toshinori Hirai has received research support from Canon Medical Systems. The department of diagnostic imaging analysis, to which Dr. Kidoh belongs, is an endowed chair supported by Philips Healthcare. The other authors have no conflict of interest to declare. | 2023-01-27T06:16:01.770Z | 2023-01-26T00:00:00.000 | {
"year": 2023,
"sha1": "cd52119b7ab4c27b2741947739608fb8384e1035",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jstage.jst.go.jp/article/mrms/advpub/0/advpub_rev.2022-0102/_pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6ba01a8874170eac938110514b5de559309464ed",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6504591 | pes2o/s2orc | v3-fos-license | Neural Correlates of Lyrical Improvisation: An fMRI Study of Freestyle Rap
The neural correlates of creativity are poorly understood. Freestyle rap provides a unique opportunity to study spontaneous lyrical improvisation, a multidimensional form of creativity at the interface of music and language. Here we use functional magnetic resonance imaging to characterize this process. Task contrast analyses indicate that improvised performance is characterized by dissociated activity in medial and dorsolateral prefrontal cortices, providing a context in which stimulus-independent behaviors may unfold in the absence of conscious monitoring and volitional control. Connectivity analyses reveal widespread improvisation-related correlations between medial prefrontal, cingulate motor, perisylvian cortices and amygdala, suggesting the emergence of a network linking motivation, language, affect and movement. Lyrical improvisation appears to be characterized by altered relationships between regions coupling intention and action, in which conventional executive control may be bypassed and motor control directed by cingulate motor mechanisms. These functional reorganizations may facilitate the initial improvisatory phase of creative behavior.
H ip-Hop music, in particular rap, has had a huge cultural impact in western society, especially among the young, since its appearance four decades ago. Freestyle rap, a popular form, requires an artist to freely improvise rhyming lyrics and novel rhythmic patterns, guided by the instrumental beat -a particularly challenging form of spontaneous artistic creativity.
It has been proposed that artistic creativity is itself a twofold process, in which an initial improvisatory phase, characterized by spontaneous generation of novel material, is followed by a period of focused re-evaluation and revision 1 . The neural correlates of the improvisatory phase are poorly understood [1][2][3][4][5][6][7][8] Freestyle rap thus provides a unique opportunity to study this initial, improvisatory phase at the interface of music and language.
In an attempt to identify the neural correlates of spontaneous lyrical improvisation in this context we compared freestyle (improvised) to conventional (rehearsed) performance, using functional magnetic resonance imaging (fMRI). Utilizing spatial independent component analysis (sICA) methods 9 recently developed in this laboratory to effectively remove imaging artifacts associated with connected speech or song, has made it possible to study this unique genre using fMRI for the first time. Importantly, in order to study spontaneous lyrical improvisation in its most natural form, our design evaluated the natural and ecologically valid process: freestyle artists producing freestyle rap, unencumbered by unrelated cognitive demands.
Spontaneous improvisation is a complex cognitive process that shares features with what has been characterized as a 'flow' state 10 . It has been suggested that the frontal lobe, may play a central role in the improvisatory process, although the nature of its contributions is unclear 2 . On this basis, in addition to its other characteristics, we expected the neural correlates of lyrical improvisation to include changes in prefrontal activity that might enable spontaneous creative activity through effects on systems that regulate attention, affect, language and motor control.
Our results support these predictions and provide a novel model for improvisation characterized by functional changes within a large-scale network that is anchored in the frontal lobe. This pattern -activation of medial and deactivation of dorsolateral cortices -may provide a context in which self-generated action is freed from the conventional constraints of supervisory attention and executive control, facilitating the generation of novel ideas 11 . Importantly, altered relationships within the prefrontal cortex appear to have widespread functional consequences, affecting motivation, emotion, language as well as motor control, and may generalize to other forms of spontaneous creative behavior.
Results
Subjects were scanned while they performed two tasks, each of which used an identical 8-bar musical background track: 1) spontaneous, improvised freestyle rap (improvised) and 2) conventional performance of an overlearned, well-rehearsed set of lyrics (conventional). All figures and tables presented here are in Montreal neurological institute (MNI) space and are thresholded at a family-wise error rate less than 0.05 based on Monte Carlo simulations.
Language measures. Subjects' scores on verbal fluency tests administered prior to the scanning sessions [29.3 6 6.8 (mean 6 s.d.) words generated in one minute on semantic; and 58.0611.1 in three minutes on phonological tests] were above the 80 th percentile 12 in each instance. This highlights the importance of superior linguistic skills in this genre, which requires rapid online formulation of meaningful, rhyming words and phrases within a prescribed tempo and rhythm.
GLM contrast of improvised vs. conventional conditions. To determine the neural correlates of spontaneous lyrical improvisation, we first compared improvised and conventional conditions directly using the general linear model (GLM) (Figure 1, a, b and Table 1). Improvised performance was characterized by significant increases in activity of the medial prefrontal cortex (MPFC), extending from the frontopolar cortex to the pre-supplementary motor area (pre-SMA) and decreases in the dorsolateral prefrontal cortex (DLPFC), extending from the orbital to superior regions. Medial prefrontal activations were lateralized to the left hemisphere; lateral prefrontal deactivations were lateralized to the right) ( Figure 1c).
The improvised condition was also associated with increased activity in perisylvian areas in the left hemisphere, including inferior frontal gyrus (LIFG), middle temporal (MTG) and superior temporal (STG) gyri, and intervening superior temporal sulcus (STS) and fusiform gyrus. Improvised performance was in addition associated with left lateralized activation of motor areas; these included the left cingulate motor area (CMA), pre-SMA, dorsal premotor cortex (PMd), head and body of the caudate nucleus, and globus pallidus, and the right posterior cerebellum and vermis. Indices of articulatory movements did not differ between conditions: there were no significant differences in the number of syllables produced during improvised and conventional performance [9946103 (mean 6 s.d.) and 1035698 total syllables respectively].
Parametric modulation. We applied parametric modulation methods to determine how the innovative quality of performance might modulate these activity patterns. Using blinded ratings of performance quality (Table 2), we found significant associations between these measures and activity in the posterior and middle MTG and STS, the left MPFC, specifically lateral Brodmann area (BA) 9, a region near superior frontal sulcus, and the posterior cingulate cortex (PCC) (Figure 2).
Functional connectivity. Using a seed selected from the left lateral MPFC (guided by the parametric modulation results outlined above), we detected stronger negative correlations between activity in the MPFC and the ventral DLPFC (Figure 3a, indicating that the dissociated, reciprocal changes noted in these prefrontal areas in the above GLM contrasts are not independent; changes in one are tightly coupled to changes in the other). Similarly, activity in the MPFC was anticorrelated with activity in the intraparietal sulcus (IPS).
On the other hand, we detected stronger positive correlations between activity in the MPFC and the anterior perisylvian (LIFG) and cortical motor areas including cingulate motor area and adjacent anterior cingulate cortex (ACC), the pre-SMA and the dorsal lateral Warm colors represent significant increases in BOLD signal during improvised, cool colors represent significant decreases. Results, revealing dissociated frontal and left lateralized patterns, are displayed (a) on a 3D brain surface rendered using SUMA (SUrface MApping) and (b), as axial slices with planes of section relative to the bi-commissural line indicated. The bar plot (c) illustrates the mean values 6 standard errors of signal changes (improvised-conventional) in DLPFC and MPFC ROIs defined in the GLM contrast and homotopic ROIs in the contralateral hemisphere. MPFC activations were strongest on the left, while activations in the right hemisphere were sub-threshold, and DLPFC deactivations were strongest on the right, while left hemisphere regions were non-significantly decreased. It should be noted that the activity in the DLPFC was significantly attenuated when the improvised condition was compared directly to an implicit baseline, indicating that this finding was not simply produced by contrast with the conventional condition (in which activity in the DLPFC did not itself differ from baseline). To trace the extensions of this network, we selected secondary seeds -i.e. from the inferior frontal and cortical premotor areas that were positively correlated with MPFC activity -and found that the left IFG and medial premotor areas were in turn positively correlated with activity in the left (but not the right) amygdala (AMG) (Figure 3b). By selecting an AMG seed from the peak in the conjunction of connectivity maps of LIFG, CMA and pre-SMA (Figure 3b), we found that the left AMG was positively connected to an extended network (Figure 3c), that included the right IFG and inferior parietal lobules (IPL) and anterior insula in both left and right hemispheres.
Changes in activation patterns over time. To explore the possible evolution of the creative process over time, we compared the contrast between improvised and conventional conditions at the beginning and end (first and last measures) of each eight bar segment ( Figure 4). Activity in the left prefrontal, premotor, anterior perisylvian language areas and amygdala identified in the initial GLM contrasts, was significantly higher at onset. In contrast, activations were greater in the right hemisphere by the final measure. The latter were found in regions that had been deactivated in the initial GLM contrasts, including the frontal eye fields and contiguous portions of the DLPFC, the dorsal premotor area, IPS, IPL and precuneus.
Discussion
In this study, we used fMRI to investigate the neural correlates of spontaneous lyrical improvisation by comparing spontaneous freestyle rap to conventional rehearsed performance. Our results reveal characteristic patterns of activity associated with this novel form of lyrical improvisation, and may also provide more general insights into the creative process itself. It has been suggested that the creative behaviors could occur in two stages: an improvisatory phase characterized by generation of novel material and a phase in which this material is re-evaluated and revised 1 . The present study may provide clues to the mechanisms that underlie the initial, improvisatory phase. Our results suggest a model in which an elementary reorganization of brain activity facilitates improvisation and may generalize to other forms of spontaneous creative behavior. The most striking feature of lyrical improvisation detected by direct comparison of freestyle and conventional performance, in large part consistent with the previous study of melodic improvisation 5 , was a dissociated pattern of activity within the prefrontal cortex: increases in activity throughout the MPFC, extending from the frontal pole to the border of the pre-SMA, and simultaneous decreases in the DLPFC, from its orbital to superior regions. The implications of this dissociation are discussed below, in the context of subsequent analyses.
A second salient feature of improvisation revealed by the GLM contrasts was a marked lateralization of task-related changes in the BOLD signal. For example, the medial prefrontal activations just noted were stronger in the left hemisphere, while dorsolateral prefrontal deactivations were stronger on the right. Similarly, additional task-related activations in language and motor areas were strongly lateralized to the left hemisphere, while additional deactivations in superior frontal and parietal areas were lateralized to the right.
Activation of left hemisphere language areas (in inferior frontal and posterior middle and superior temporal gyri) was predicted and is perhaps unsurprising given the nature of the genre. However it should be noted that activation here indicates enhanced activity, over and above levels observed during conventional performance, and so is not related to language processing per se. Instead activation of language-related cortices likely reflects the unique demands of freestyle improvisation, which requires rapid online selection of novel words 13 and phrases that rhyme 14 .
Increased activity in other left hemisphere regions associated with motor control (including medial and lateral premotor cortices, cingulate motor area and basal ganglia) does not appear to be related to increases in movement per se: there were no significant differences in quantitative indices of motor activity, including the number of syllables produced, during improvised and conventional conditions, and no condition-dependent differences in activity of the primary motor cortex were observed. Activity in these regions may instead reflect spontaneous phonetic encoding and articulation of rapidly selected words during improvisation. Enhanced activity in the caudate may also support rapid online sequencing of ongoing behaviors in this condition 15 . Freestyle improvisation also requires that the articulation of words and phrases be spontaneously incorporated into established rhythmic patterns; this process might place additional demand on these regions. Additionally, both the cerebellar hemisphere and vermis, selectively activated during improvised performance, have been associated with maintenance of rhythmic patterns in working memory 16 . The brain regions activated in association with rhyming and rhythmic variations may provide clues to mechanisms underlying the effects of musical intervention in clinical populations 17 . Three separate connectivity maps were then generated using peak coordinates (of regions positively correlated with MPFC derived from the above analysis) from LIFG (254, 12, 21, t54.60), CMA (23, 9, 48, t54.49) and pre-SMA (0, 9, 66, t54.05) as seeds. The conjunction of these maps (b) reveals that each of these regions is more positively connected to the left amygdala during improvised. [The PMd was not]. In (b), blue highlights voxels in the amygdala with that were significantly more correlated with any one of the three seed regions; red with any two of these regions; yellow with all three. In (c) the peak AMG voxel from the above conjunction map (221 29 224) was used as a seed. The results indicate that during the improvised condition, the left amygdala is itself connected to a wide array of regions in both hemispheres, including the insula, IFG, IPL and ACC. The widespread changes identified by the foregoing analysis are suggestive but incomplete. The questions that follow -how are these concurrent activations and deactivations related to one another; are they integrated in a meaningful way? -were addressed using connectivity analyses.
The connectivity results revealed strong positive correlations between activity in a primary seed region in the MPFC (located within in the large cluster identified in the GLM contrast, selection was guided by the results of parametric modulation analyses) and inferior frontal and cortical premotor areas. To explore the potential extensions of this network, we tracked the extended connections of the inferior frontal and premotor regions themselves. Using each as a seed in subsequent analyses, we found that these regions were themselves positively correlated with activity in the left amygdala and the amygdala itself was strongly coupled to an extended network that included the right IFG, and IPL and anterior insula in both left and right hemispheres. The connectivity analyses therefore suggested the emergence of a more widespread, large-scale network that might play a role in lyrical improvisation.
Taken together, functional connectivity and GLM results provide a broader context in which to understand the dissociation of activity in medial and lateral prefrontal cortices and the ways in which this pattern might facilitate improvisation: The frontal midline cortices, selectively activated during improvisation, regulate motivational incentive, intentionality and drive 18,19 , and in this context the MPFC operates at the interface of intention and action -synthesizing information, encoding goals and guiding self-generated, stimulusindependent behaviors [20][21][22][23][24] . Normally, the expression of these behaviors is modulated by interactions between medial and lateral prefrontal regions 19,25 -the MPFC provides a signal to the DLPFC, where information is processed prior to its gaining access to the motor system. In this way, the lateral prefrontal regions maintain executive control, consciously monitoring and implementing adjustments in an ongoing performance in order to ensure that actions conform to explicit goals 26,27 .
Here however, conventional interactions between medial and lateral prefrontal cortices appear to be markedly altered: given that BOLD signal in these regions is anticorrelated, the increases in MPFC activity appear to be tightly coupled to decreases in the DLPFC. We propose that this dissociated pattern reflects a state in which internally motivated, stimulus-independent behaviors are allowed to unfold in the absence of conscious volitional control.
There are a number of potential routes from the medial prefrontal cortex to motor effector areas in which the DLPFC could be bypassed. One parallel pathway that provides direct access to the motor system is via the dense projections from the MPFC to the CMA 28,29 , a premotor region that combines cognitive and affective information to orchestrate behavior.
Accordingly, we found that both the ACC and CMA (including the speech-related posterior rostral cingulate zone (RCZp) 30 ) were significantly activated along with the MPFC during improvised performance, while activity in the DLPFC was significantly attenuated. At the same time, activity in MPFC and the cingulate areas was strongly correlated during improvised but not conventional conditions. An alternative, direct route through cingulate pathways into the motor system may allow the medial frontal regions to generate novel, exploratory behaviors 31 , bypassing conventional executive controls and thereby providing the cognitive flexibility necessary for successful improvisation.
It is interesting in this context that self-generated, stimulus independent behaviors appear to be initiated by midline frontal regions well before subjects consciously experience the intention to act 20,32 . In the absence of processing by lateral prefrontal regions -where a sense of agency could be constructed post-hoc -ongoing actions, moment to moment decisions and adjustments in performance may be experienced as having occurred outside of conscious awareness. This is not inconsistent with the experience of many artists who describe the creative process as seemingly guided by an outside agency.
In addition, the patterns we observe may reflect alterations in the activity of attentional systems: deactivations in superior portions of the DLPFC during the improvised condition (in the vicinity of the frontal eye fields) were accompanied by significant decreases in activity in the IPS. Together, these regions constitute elements of a supervisory attentional system, the so-called dorsal attention network 33 . This suggests that the conscious, deliberate, top-down attentional processes mediated by this network may be attenuated during improvisation, consistent with the notion that a state of defocused attention enables the generation of novel, unexpected associations that underlie spontaneous creative activity 11 . What monitoring and attentional processes do occur during improvisation may be mediated by the cingulate system, which remains active while DLPFC and parietal activity is reduced.
Beyond the interaction of the MPFC and DLPFC, the functional interconnections between medial prefrontal cortex, IFG, medial premotor areas and the amygdala, suggest that spontaneous lyrical improvisation is associated with emergence of a network that integrates motivation, language, emotion and motor function. The simultaneous coupling of the amygdala to inferior parietal lobules and insulae indicates that this network also incorporates regions that play a role in multimodal sensory processing and the representation of subjective experience 34 , and that, as a whole, this entire network is more effectively coupled during spontaneous creative behaviorperhaps facilitating what has been described as a psychological 'flow' state 10 (which describes a subject's complete immersion in creative activity, typified by focused self-motivation, positive emotional valence, and loss of self-consciousness).
Supplementary analyses revealed additional, noteworthy patterns: The results of parametric modulation analyses indicated that innovative performance -incorporation of features such as inventive wordplay or novel rhythms into the improvisation -is associated with increased activity in a subset of left hemisphere regions Figure 2) including the posterior and middle MTG and STS, and the MPFC. This suggests that regions that may correspond to the location of the mental lexicon (in which words and their semantic features are stored 35 , likely consistent with subjects' superior performance on verbal fluency tests), and regions that play a role in motivation, drive and self-organized behavior, may play a prominent role in the innovative use of language and rhythm. Interestingly, parametric modulation also highlighted an area not implicated in the GLM contrasts, the left posterior cingulate cortex (PCC), which has been shown to play a role, along with the MPFC, in self-motivated or self-referential behaviors 36 .
We also observed interesting, systematic differences in the patterns of activity in the first and last measures of the eight bar segments that constitute the basic unit of this musical form. Surprisingly we found that activity in the set of left prefrontal, premotor, anterior perisylvian language areas and amygdala reported above, was relatively higher at onset, but that activations in general appeared to shift to the right hemisphere by the final measure. This indicates first of all that the network related to motivation, emotion and language identified above may be more strongly engaged in initiating the improvisation.
What the relative increases in activity in the right hemisphere at the end of each segment indicate is however not clear. It is interesting that many of these increases were found in regions that were deactivated in the principal improvised-conventional contrast reported above. The time dependent increases in activity of frontal eye fields and IPS might reflect a re-emergence of top-down attentional processing at the end of each improvisational sequence, and increasing activity in the dorsolateral prefrontal cortices might reflect an increase in executive functions mediated by these regions. It is possible that rule based behaviors (e.g. attention to metric structure, selection of final lyrical elements) may be more important, and may re-engage these regulatory mechanisms, at the end of each 8 bar segment. It is clear nevertheless that the notion that simple attenuation of attention and executive control supports improvisation may be an oversimplification and that these processes seem to vary in a more complex way over time. The mechanisms underlying these interactions between musical improvisation and temporal structure clearly warrant further investigation.
As noted above, creativity may actually be a biphasic process involving initial free generation and subsequent revision of novel material 1 . Here we have examined only the first, spontaneous or improvisational phase. As we report, improvisation, contrasted with conventional performance, was in general associated with relative decreases in activity in supervisory attentional and executive systems. Were our subjects to actively reevaluate and revise the lyrics they had improvised, we might predict activation of these systems in support of evaluative processes that more likely require attention to and conscious, goal-directed revision of the original material. Indeed, a recent imaging study of graphic design did show activation of executive systems including the DLPFC specifically during subjects' evaluation of their prior creative outputs 1 .
Compared to previous studies of musical improvisation by Ullen and his colleagues 4 , Berkowitz & Ansari 3 and Brown et al. 7 , our results differ in one fundamental way. While elegantly designed in order to enforce tight experimental control, these studies used conditions that were less spontaneous and may have imposed additional attentional and mnemonic demands (e.g improvised material had to be memorized as it was generated and reproduced during a subsequent scanning run 4 ); this might in part account for the activation of the DLPFC reported in these studies. In contrast, we observed significant deactivation of the DLPFC (along with activation of the MPFC) and it is possible that this pattern may emerge when spontaneous improvisation takes place without the superimposition of secondary cognitive tasks.
In summary, the functional reorganization we observe -in which the medial prefrontal cortices may guide behavior in the absence of conscious attention and effect motor control through alternate cingulate pathways -is one feature of a larger network, linking intention, affect, language and action, that may underlie and facilitate the initial, improvisatory phase of creative behaviors. We speculate that the neural mechanisms illustrated here could be generalized to explain the cognitive processes of other spontaneous artistic forms, which can be tested in future studies across disciplines.
Methods
Subjects. Twelve male freestyle artists (mean age, 30.3 yr; range, 23-36 yr) were studied. Participants had at least 5 years of professional experience, defined as performing in front of an audience, or recording projects for public consumption, and receiving payment for this work. The range of professional experience across participants was 5 to 18 years, 9.8 6 4.3 (mean 6 s.d.). All participants were righthanded native speakers of American English. Written informed consent was obtained for all participants under a protocol approved by the Institutional Review Board (NIH 92-DC-0178).
Experimental design. The set of lyrics used in the conventional condition was selected by two co-authors. These lyrics were easy to memorize, and participants had not been exposed to them before the experiments. A recording of material, which was performed on the background instrumental track used in the experiments, was sent to the participants to memorize one week before the experiments. Prior to the imaging experiments, participants were asked to perform phonemic (generating words beginning with a specific letter) and categorical (animal naming) verbal fluency tests 12 . Participants then went through a training session, in order to make sure they performed all experimental conditions correctly before scanning. The participants were asked not to move their heads or other parts of their body during the scan. In order to constrain head motion, foam pads were used for support in the head coil. In both pilot and actual experiments, debriefing indicated that participants' performance was not affected by the motion restraints.
In the conventional condition, the participants were asked to rap the memorized lyrics on the 8-bar instrumental track. In the improvised condition, lyrics were improvised spontaneously, on the same instrumental track. An 8-bar instrumental track at 85 beats per minute was created by a co-author and repeatedly used as the background music for the whole experiment. A two-beat auditory cue and a visual prompt were placed at the beginning of the eighth bar to indicate the end of the 8 bars. Participants performed two sessions during the scan, of which each included 6 blocks (22.53s per block) of improvised and conventional conditions per session in an alternating box-car design.
MRI scanning. T2*-weighted BOLD images were acquired on a General Electric (GE) Signa HDxt 3.0 Tesla scanner (GE Healthcare, Waukesha, WI, USA) with an 8channel High Resolution Brain Coil. Anatomical images were acquired using a magnetization-prepared rapid gradient-echo (MPRAGE) sequence. A single-shot gradient-echo EPI sequence was used for functional imaging: the acceleration factor of ASSET (Array Spatial Sensitivity Encoding Technique) 5 2, TR (repetition time) 5 2000 ms, TE (echo time) 5 30 ms, flip-angle 5 90u, 64364 matrix, FOV (field of view) 5 227 mm, 4 dummy scans. 40 interleaved sagittal slices with a thickness of 4 mm were used to cover whole brain. Because the majority of head motion during overt speech production is in the sagittal plane (especially ''nodding''), off-plane motion was minimized by this setup and the advantage of in-plane image registration 37 was maximized. The audio of participants' performances were recorded by a FOMRI TM II noise canceling optical microphone (Optoacoustics, Or Yehuda, Israel).
Data analysis. Time-locked, denoised auditory recordings were collected during each block from all participants. Syllables produced in each block were measured for both conditions by detecting syllable nuclei based on salient voiced peaks. After the experiment, auditory recordings acquired during improvised blocks were evaluated blindly by two experienced musicians who assessed the creative use of language and rhythm, assigning a consensus score using 10-point scale (Table 2), for use in the parametric modulation analyses.
The structural image of each subject was first segmented and normalized into MNI space using the tissue probability maps (TPMs) in SPM8 (Wellcome Department of Imaging Neuroscience, London, UK, http://www.fil.ion.ucl.ac.uk/spm/) 38 . In-plane registration, slice-time correction and volumetric rigid-body registration were sequentially applied to the functional datasets. Such traditional motion correction algorithms are effective in correcting misalignments caused by bulk head movements, but not motion-related susceptibility artifacts associated with overt speech production. To minimize the latter, spatial independent component analysis (sICA) was applied to the motion and slice-time corrected functional data on each subject level 9 . In sICA, each BOLD image was treated as a mixture of multiple spatially independent signal and noise sources. The number of components in each dataset was estimated by minimum description length (MDL) criterion 39 . The systematic classification of artifactual and neuronal ICA components was based on their degree of spatial clustering, location of major positively weighted clusters and neighborhood connectedness between positively and negatively weighted clusters. The noise components identified by a human-expert using these criterions and their variances were subtracted from the original dataset. Inter-rater reliability was assessed among five raters (including the current rater) by Fleiss' kappa test in an independent dataset consisting www.nature.com/scientificreports SCIENTIFIC REPORTS | 2 : 834 | DOI: 10.1038/srep00834 of 18 subjects. The Fleiss' kappa value of 0.9696 indicated almost perfect agreement among the raters. Afterward, the denoised data were normalized into MNI space at a voxel size of 3 x 3 x 3 mm by applying the transforms derived from the structural image normalization, and smoothed to a target full-width-half-max (FWHM) of 10 mm.
At the subject level, the GLM was implemented using SPM8. Separate regressors were constructed by convolving the box-car function of each condition with the canonical hemodynamic response function. In addition to task regressors, a nuisance covariate of the whole-brain mean signal was used to account for the global BOLD signal fluctuations induced by changes in P CO2 during continuous overt speech production [40][41][42] . To identify the effect of innovative performance, a separate GLM model was built with the addition of a regressor of performance scores for all improvised blocks. To estimate the evolution of improvised performance over time, in addition to the main regressors, additional regressors were added indicating 1 st and 8 th bar for both conventional and improvised conditions. In each case, a group-level voxel-wise random-effects ANOVA model was used to draw statistical inferences at the population level.
A seed based functional connectivity analysis was performed on the residual timeseries of each voxel output from each participant's GLM. A band-pass filter of 0.045-0.1 Hz was applied on the residual in MATLAB (version R2010A, The MathWorks Inc., Natick, Massachusetts) to ensure that estimated connectivity between regions was not affected by high-frequency physiological noise or low-frequency fluctuations caused by scanner signal drifts and stimulus on-off manipulations. The data for each condition were shifted by three volumes to account for the delay (approximately 6 seconds) of the hemodynamic response, and then concatenated. For each condition, a correlation map was generated in AFNI 43 by calculating the Pearson's correlation coefficient between the eigenvector of time series of all voxels within a 5-mm sphere centered at the seed's coordinate, and each voxel's time series in the brain. The correlation coefficients were then Fisher's z-transformed and input in a random-effect ANOVA model to compare the connectivity changes between the two task conditions at a group level in SPM8. For both GLM and connectivity analyses, Monte Carlo simulations were used to determine cluster size threshold for family-wise error correction.
To select the seed of MPFC for the functional connectivity analysis, we took into account both the GLM results (improvised vs. conventional) and parameter modulation indices of innovative performance. Since the cluster of MFPC activation derived from the improvised vs. conventional contrast was large, extending from the frontal pole to the pre-SMA, we divided this cluster into 6 sub-regions, using the division between the medial and superior frontal (lateral) gyri defined in the PickAtlas 44 and between areas along inferior-superior axis corresponding to BA 10, 9 or 8 defined in the Talariach Daemon 45 and in the work of Petrides and Pandya 46,47 . The parameter modulation results indicated that lateral BA 9 sub-region was most strongly associated with the creative use of language and rhythm, and we therefore selected the center of mass (219 49 33) in this sub-region as the MPFC seed. To further explore the extensions of this network, we investigated secondary connectivity patterns of all regions that were more positively connected to the MPFC in the improvised vs. conventional contrast. More details can be found in the legend to Figure 3. | 2018-04-03T03:30:41.700Z | 2012-11-15T00:00:00.000 | {
"year": 2012,
"sha1": "a9b73b8ce41c353156206983cfd83aaaabb0efcb",
"oa_license": "CCBYNCND",
"oa_url": "https://www.nature.com/articles/srep00834.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9b73b8ce41c353156206983cfd83aaaabb0efcb",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
212406339 | pes2o/s2orc | v3-fos-license | Spiritual skepticism? Heterogeneous science skepticism in the Netherlands
Recent work points to the heterogeneous nature of science skepticism. However, most research on science skepticism has been conducted in the United States. The current work addresses the generalizability of the knowledge acquired so far by investigating individuals from a Western European country (The Netherlands). Results indicate that various previously reported findings hold up: Mirroring North American patterns, climate change skepticism is associated with political conservatism (but only modestly), and scientific literacy does not contribute to skepticism, except about genetic modification (Study 1 only) and vaccine skepticism (Study 2 only). Results also reveal a crucial difference: Religiosity does not consistently contribute to science skepticism, except about evolution. Instead, spirituality is found to most consistently predict vaccine skepticism and low general faith in science—which in turn predicts willingness to support science. Concerns about societal impact play an additional role. These findings speak to the generalizability of previous findings, improving our understanding of science skepticism.
Introduction
The systematic and unwarranted rejection of science is a growing societal problem that affects people and their environments across the globe.Many scientific findings and conclusions-particularly when these contradict people's ideological or moral convictions-are increasingly dismissed by substantial segments of the public, and sometimes also on an institutional level (Eilperin et al., 2019;Nature Editorial, 2017a).Scientific associations and institutions have expressed their concerns about the current "crisis of trust" in science (Nature Editorial, 2017a, 2017b).It is evident that science skepticism represents a major contemporary challenge, one that can have far-reaching societal and environmental consequences.Consider as an example recent measles outbreaks (e.g. in the United States and in the Netherlands; Pierik, 2017) due to insufficient herd immunity, which is a direct consequence of vaccine skepticism (Amin et al., 2017;Wenner Moyer, 2018), or consider the personal lifestyle and political choices that people make-partially as a result of their skepticism about anthropogenic global warming-that degrade the environment (Schleussner et al., 2016).Vaccine hesitancy and climate change have been listed by the World Health Organization (WHO, 2019) as one of the top 10 health threats facing the world in 2019.Mapping the antecedents of science skepticism is thus an important task, which has been taken up by various researchers in the last decade or so (Hornsey and Fielding, 2017;Lewandowsky and Oberauer, 2016;Rutjens et al., 2018a).
However, two important limitations of the growing body of work on science skepticism are (1) that various domains of science are mostly investigated in isolation, with disproportional emphasis being placed on understanding climate change skepticism specifically and ( 2) that most of the conclusions so far are based on North American samples.Regarding the first limitation, the consequence of mostly focusing on this specific topic is that the conclusions that are based on this body of research have been somewhat generalized, so that political ideology (i.e.conservatism) has been appointed as one of the main culprits of science skepticism generally (Rutjens et al., 2018a(Rutjens et al., , 2018b)).Consequentially, research has mostly overlooked other-potentially more potent-antecedents of science skepticism beyond climate science, even when it was pointed out relatively early (e.g.Kahan, 2015;Lewandowsky et al., 2013;Scott et al., 2016) that political conservatism falls short in predicting skepticism about vaccination and genetic modification (GM).
The second limitation is that very little is known about the nature and scope of science skepticism beyond the cultural context of the United States.This is a problem that plagues social science research in general (Henrich et al., 2010), and one that could have severe consequences for our understanding of the ideological underpinnings of science skepticism.As one illustration of the importance of broadening the narrow cultural scope of most previous research, a recent cross-national study found that the all-too-familiar association between political conservatism and climate change skepticism was stronger in the United States than in any of the other nations that were investigated (Hornsey et al., 2018a).In addition, the United States is an outlier among Western countries in terms of religiosity (Gao, 2015;Pew Research Center, 2018).Given that previous work on science skepticism has found that religiosity is overall a robust predictor of skepticism across various domains (e.g.McPhetres and Zuckerman, 2018;Rutjens et al., 2018b), the important question arises whether this association holds up among a more secular population (which is-at least in terms of religiosity-more representative of other countries in the Western world; Pew Research Center, 2018).Moreover, if this is not the case, a second question is what would predict science skepticism of not religiosity.As we will argue later, we expect spirituality to replace religion as an important predictor of various manifestations of science skepticism.
Recent work has started to address the first limitation and found that science skepticism is indeed more heterogeneous than previously assumed, with political conservatism driving climate change skepticism, religiosity driving vaccine skepticism as well as general (low) faith in science, and GM skepticism being associated with non-ideological factors (Rutjens et al., 2018b).Other recent work that investigated skepticism toward various science topics points to a similar conclusion (Drummond and Fischhoff, 2017).Importantly, however, systematic investigations of the relative impact of various potential predictors of skepticism across various topics are still scarce.A first step in addressing the second limitation (low generalizability) has recently been taken in a cross-national comparison of climate science skepticism (Hornsey et al., 2018a) and vaccine skepticism (Hornsey et al., 2018b), respectively, but systematic tests of the heterogeneity of science skepticism conducted outside of the US context are still lacking.
The current work is a first step in filling this gap by simultaneously addressing both limitations.Utilizing two community samples of Dutch participants-one collected in 2018 and the other collected in 2019-we systematically scrutinized science skepticism in four domains: climate change, vaccination, GM, and evolution.We also included measures of general faith in science and willingness to support science, and incorporated the most relevant previously identified predictors of science skepticism: Political ideology, moral purity concerns, 1 religiosity, and scientific literacy (see Rutjens et al., 2018b).We also included a novel predictor, spirituality, which we will briefly elaborate in the next section.In addition, to be as complete as possible in determining the antecedents of science skepticism across domains, we incorporated measures that have previously been shown or argued to inform science skepticism: Conspiracy thinking (Hornsey et al., 2018a(Hornsey et al., , 2018b;;Rutjens et al., 2018a;Sutton et al., 2018), perceived corruption of science (Pechar et al., 2018), concerns about the societal impact of accepting scientific conclusions (Sutton et al., 2018), alongside various demographic variables.Although we took an in principle exploratory approach in the current work, we formulated four general predictions based on various literatures and earlier work (bearing in mind the caveat that most of the work discussed is based on data collected among US samples).
The main prediction was that religiosity plays a more minor role in predicting science skepticism, given that the Netherlands-as are other Western European countries-is relatively secular (Halman and Draulans, 2006;Houtman and Aupers, 2007;Versteeg and Roeland, 2011;Wojtkowiak et al., 2010), as compared with the United States.Instead, we expected contemporary spirituality (i.e.spiritual identity, meaning self-identifying as a spiritual person) to play a more prominent role-similar to religiosity in previous research conducted in the United States.We based this prediction on two observations.First, a large body of survey research indicates that a substantial part of the Dutch population consists of individuals who are referred to as post-Christian spirituals, or more generally as individuals who "believe-without-belonging" (Houtman and Aupers, 2007;Van Mulukom et al., in press;Versteeg and Roeland, 2011).In 2012, a substantial part of the Dutch population indicates to view themselves as either somewhat (31%) or very (12%) spiritual, while the percentage of frequent churchgoers was 19% (De Hart, 2014).Such contemporary spirituality has replaced more traditional religiosity not only in the Netherlands, but in various other Western countries as well (Houtman and Aupers, 2007).Second, contemporary spirituality is characterized by an experiential approach to truth (Hanegraaff, 1996), which reflects the idea that truth-from both an epistemological and an existentialist perspective-can only be found through personal experience, as opposed to reason (i.e.science) or faith (i.e.religion).Indeed, recent research has shown that, in comparison with other groups, spiritual-but-not-religious individuals strongly rely on intuitions (Lindeman et al., 2019).The notion of an experiential approach to truth, which can be used as a definition of contemporary spirituality and New Age belief (Hanegraaff, 1996), reflects a radically different epistemology than that of science, and also different than that of traditional religion, both of which locate truth in the external world.Put differently, the intuitive epistemology of contemporary spirituality will likely be hard to reconcile with (faith in) science, especially in the context of contentious topics such as those central to the current study.
In sum, given that the Netherlands is characterized by relatively low levels of traditional religiosity and relatively high levels of contemporary spirituality-which is characterized by an intuitive epistemology-we expected spirituality to play a more prominent role than religiosity in shaping skepticism about vaccination as well as low faith in science and willingness to support science.
Our second prediction was that political ideology is associated with climate change skepticism, but not with skepticism about vaccination, GM, and evolution (Drummond and Fischhoff, 2017;Hornsey et al., 2018a;Rutjens et al., 2018b).Our third prediction was that scientific literacy is the main predictor of GM skepticism and contributes to vaccination skepticism but does not predict the other forms of skepticism (McPhetres et al., 2019;Rutjens et al., 2018b).Finally, we predicted that evolution skepticism is primarily associated with religiosity (Drummond and Fischhoff, 2017;Rutjens et al., 2018a).Predictions 2-4 were based on separate strands of previous work and theorizing, that-as mentioned previously-has mostly been conducted in the United States.
Study population recruitment
After approval of the study protocol by the Ethics Committee of the first author's institution (#2018-SP-8701), we started recruiting respondents online with the assistance of undergraduate students as part of their coursework.In Study 1 (2018), 329 individuals took part, of which 26 did not complete the survey, 34 failed to correctly respond to an attention check (described below), and one respondent was deleted for detailing her age as 12 years old.This left a final sample of 268 participants.In Study 2 (2019), 245 individuals participated, of which 14 participants failed the attention check and 13 did not complete the study.These participants were not included in the analyses.
Demographics of the remaining participants can be found in Table 1.An attention check was included to make sure participants were paying attention to the wording of the questions.The item read, "We would like to make sure that you are paying attention to the wording of the questions.Please fill in the number that corresponds to 'somewhat disagree'."Participants agreed on a voluntarily basis through advertisements on various (social) media platforms.To avoid self-selection bias, the survey was advertised using neutral terminology.
Materials
The studies were almost identical, but Study 2 consisted of additional measures.All differences between studies are detailed below.The studies consisted of the following measures (largely following Rutjens et al., 2018b).Unless otherwise reported, all items were scored on scales ranging from 1 (strongly disagree) to 7 (strongly agree).Upon completion, participants were thanked for participation.
Outcome variables
Science skepticism.Four items were presented: "Human CO 2 emissions cause climate change"; "Vaccinations cause autism" 3 ; "Genetic modification of foods is a safe and reliable technology"; and "Human beings, as we know them today, developed from earlier species of animals."All items except for the vaccination item were reverse scored.
Faith in science and science support.Participants completed a five-item Faith in Science scale (see Farias et al., 2013;Hayes and Tariq, 2000;Rutjens et al., 2018b).An example item is "Science is the most efficient means of attaining truth" (α = .86).They subsequently completed the following science support item: "According to you, how much money should the government spend on science?"
Predictor variables
Moral purity concerns.Participants completed the moral purity subscale of the moral judgments section of the Moral Foundations Questionnaire (Graham et al., 2009), consisting of three items (e.g."I would call some acts wrong on the grounds that they are unnatural").Reliability was insufficient (Study 1: α = .57;Study 2: α = .52);removing the item about chastity increased reliability slightly in Study 1: α = .60.In Study 2, reliability remained the same, but for consistency, we here also removed the chastity item.The items were scored on 5-point scales ranging from 1 (strongly disagree) to 5 (strongly agree).
Political orientation.Participants indicated their political orientation on two scales ranging from 1 (very left-wing) to 10 (very right-wing) and 1 (very progressive) to 10 (very conservative), respectively.
Religiosity.Participants indicated whether they considered themselves to be a religious person (yes or no); due to an oversight this measure was only included in Study 1.After indicating their affiliation, we asked participants to indicate whether they believe in God or a higher power on a scale ranging from 1 (not at all) to 10 (very much).Next, religious orthodoxy was measured with two 4 items (r = .48 in both studies) that were taken from the orthodoxy subscale of the post-critical belief scale (Fontaine et al., 2003).
Spirituality.Participants then indicated whether they considered themselves to be a spiritual person (Maij and van Elk, 2018) Conspiracy thinking.To measure a general tendency to belief in conspiracies, we adopted a singleitem measure that was shown to have good validity (Lantian et al., 2016).The measure consisted of a short text about various well-known events, which was followed by a single item: "I think that the official version of the events given by the authorities very often hides the truth."(1 = completely false to 9 = completely true), with a higher score reflecting more conspiracy thinking.Mean score on the conspiracy thinking measure was 4.77 (SD = 2.20) in Study 1 and 4.79 (SD = 2.45) in Study 2.
Societal impact concerns. We created two items which were designed to measure the extent to which people worry about the negative societal impact of accepting the reality of climate change, the safety of vaccines, and the safety of eating GM foods. The first item asked participants to indicate for which of the three topics acceptance has dangerous consequences in the short term; they could select one of the topics or select "none of the above." The second item asked participants to do the same, but this time to consider long-term societal consequences. Subsequently, we created societal impact concerns indices for climate change, vaccination, and GM, which could range from 0 (not selected) to 2 (selected twice).
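A minimal sketch of how such indices could be computed from the two selection items; the topic labels and the "none" coding are our illustrative assumptions:

```python
from collections import Counter

TOPICS = ("climate change", "vaccination", "GM")

def impact_indices(short_term_choice: str, long_term_choice: str) -> dict:
    """Score each topic 0-2: one point per item (short- and long-term) on which
    the respondent selected it as having dangerous consequences."""
    counts = Counter([short_term_choice, long_term_choice])
    return {topic: counts.get(topic, 0) for topic in TOPICS}

print(impact_indices("GM", "GM"))  # {'climate change': 0, 'vaccination': 0, 'GM': 2}
print(impact_indices("vaccination", "none of the above"))  # vaccination scores 1
```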
Perceived corruption of science.Two statements were presented for participants to indicate their agreement with: "Science is corrupted by government interference" and "Science is corrupted by corporate interference."
Study 2 only materials
Cognitive reflection test. To complement the scientific literacy measure with a general measure of cognitive ability, we included the cognitive reflection test (CRT; Frederick, 2005). The CRT consists of three items that measure individual differences in intuitive-analytic cognitive style. Intuitive scores on the test have, for example, been shown to correlate with religiosity (e.g. Shenhav et al., 2012).
Media trust.Finally, we measured trust in traditional media and trust in social media news sources, and let participants disclose the three sources of news they use most often.We only included trust in traditional media in the analyses, which was measured on a 10-point scale ranging from 1 (very unreliable) to 10 (very reliable).
Results
In both studies, we used multiple linear regression analyses to assess which variables best predict science skepticism across the four domains, faith in science, and willingness to support science, holding constant the potential influence of the other predictors and controlling for demographic variables. Note that we also controlled for faith in science in predicting science skepticism and science support; that is, we included it as a predictor in these analyses. Bivariate correlations are displayed in the supplemental materials section.
Standardized beta coefficients and 95% confidence intervals are reported. All dependent variables were measured on 7-point scales, except science support, which was measured on a 10-point scale.
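One way to obtain standardized betas with confidence intervals is to z-score all variables before an ordinary least-squares fit. The sketch below uses simulated data; the variable names are illustrative, not the study's actual coding:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated respondents; column names are illustrative placeholders.
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(200, 4)),
                  columns=["spirituality", "faith_in_science",
                           "conservatism", "vaccine_skepticism"])

# z-scoring every variable makes the OLS slopes standardized betas.
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z[["spirituality", "faith_in_science", "conservatism"]])
fit = sm.OLS(z["vaccine_skepticism"], X).fit()
print(fit.params)      # standardized beta coefficients
print(fit.conf_int())  # 95% confidence intervals
```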
Study 1
See Table 2 for main results.
Climate change. The final regression model explained 15% of the variance. Age, political conservatism, and (low) faith in science were predictors of climate change skepticism, which mirrors previous results obtained with US participants and is in line with our second prediction. Note, however, that the association with conservatism is modest. Concerns about the societal impact of accepting the reality of climate change were an additional contributor to skepticism.
Vaccination. The final model explained 25% of the variance. Supporting our first prediction, and in contrast to previous work with US respondents, religiosity did not play a meaningful role in predicting vaccine skepticism, but spirituality did. Our third prediction, that literacy would negatively contribute to vaccine skepticism, was not supported. Conspiracy thinking and concerns about the societal impact of vaccination were additional significant predictors.
Because the main vaccine skepticism measure targeted one specific belief about vaccination (i.e. that vaccines cause autism), we also ran regression analyses on two additional items that assessed more general negative beliefs about two types of vaccination (MMR: measles, mumps, and rubella; HPV: human papillomavirus). The results for MMR vaccine skepticism were almost identical to the results for the initial "vaccines cause autism" statement (see Table 4), which is not surprising given that the vaccines-autism link belief usually refers specifically to MMR vaccination. Spirituality was the strongest predictor, alongside (low) faith in science, as well as societal impact concerns. In addition, scientific literacy was a significant predictor, which supports our third prediction. Together, these variables explained 26% of the variance. Results for HPV skepticism were quite similar, although spirituality was a weaker predictor while (low) faith in science and conspiracy thinking were significant predictors.
Genetic modification. The final model explained 33% of the variance. Supporting our third prediction and replicating earlier work among US participants (Rutjens et al., 2018b), scientific literacy was a significant negative predictor. Age, spirituality, conspiracy thinking, and concerns about the societal impact of GM were additional significant predictors.
Evolution. The final model yielded 38% explained variance. This is the only topic where skepticism is most strongly predicted by religiosity, supporting our fourth prediction. In addition, (low) faith in science, political conservatism, and conspiracy thinking contributed to the explained variance.
Faith in science and science support. Although we also included general faith in science as a predictor in the other analyses, we were interested in assessing which variables would contribute to faith in science. Our prediction was that spirituality would play a prominent role. The final regression model explained 23% of the variance. The strongest predictors of (low) faith in science were indeed spirituality and conspiracy thinking; religiosity was a weaker but significant predictor.
The final regression model explained 20% of the variance for science support. The only significant predictor was faith in science. In contrast to previous work among US participants (Rutjens et al., 2018b), religiosity was not associated with science support. Interestingly, before entering faith in science and scientific literacy in the final model, spirituality was a reasonably strong negative predictor of science support, β = -.20, p < .01, 95% CI = (-.32, -.07). Adding faith in science strongly reduced the effect of spirituality, suggesting mediation. A bootstrapping analysis of 5000 samples (Preacher and Hayes, 2004; Process Macro Model 4) confirmed that the negative effect of spirituality on the willingness to support science was fully mediated by (low) faith in science, with an indirect effect of -.17 (SE = .03), 95% CI = (-.24, -.11). Thus, although we had not predicted this mediation effect, the results support our main prediction that spirituality plays a larger role than religiosity in predicting general faith in science and willingness to support science.
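The percentile-bootstrap logic behind this kind of mediation test can be sketched directly: resample respondents, re-estimate the a path (X to M) and the b path (M to Y, controlling for X), and take percentiles of the resulting a*b products. This is a simplified stand-in for the Process macro, without its covariate handling and standard-error refinements:

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
    """Percentile-bootstrap CI for the indirect effect a*b in X -> M -> Y."""
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample respondents
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]               # a path: slope of M ~ X
        design = np.column_stack([np.ones(n), mb, xb])   # Y ~ 1 + M + X
        b = np.linalg.lstsq(design, yb, rcond=None)[0][1]  # b path: slope of M
        effects[i] = a * b
    return effects.mean(), np.percentile(effects, [2.5, 97.5])
```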
Study 2
See Table 3 for main results.
Climate change. The final regression model explained 29% of the variance. In contrast to the results of Study 1 and our second prediction, political conservatism was not a significant predictor of skepticism (β = .12, p = .074). Age, low faith in science, and low trust in traditional media were significant predictors. There was also an unexpected small negative effect of perceived corporate corruption of science. Interestingly, unlike in Study 1, spirituality was a negative predictor, so that more spiritual respondents were less skeptical about anthropogenic climate change. We return to this finding in the discussion.
Vaccination. The final model explained 40% of the variance. As can be seen in Table 3, there were some differences compared with Study 1. First, low faith in science was a significant predictor, while the coefficient of spirituality in the final model was not significant. However, upon closer inspection, we found that spirituality was a significant predictor before faith in science and scientific literacy were added to the model, β = .25, p < .01, 95% CI = (.06, .26). A bootstrapping analysis of 5000 samples (Preacher and Hayes, 2004; Process Macro Model 4) confirmed that the initial effect of spirituality was fully mediated by (low) faith in science, with an indirect effect of .09 (SE = .03), 95% CI = (.05, .15). There was no effect of religious orthodoxy on vaccine skepticism. Taken together, our main prediction was supported, but the indirect effect of spirituality via faith in science was not predicted and should therefore be interpreted with caution. Second, corroborating previous research conducted with US participants (Rutjens et al., 2018b) and supporting our third prediction, scientific literacy contributed significantly to vaccine skepticism. In addition, there were small negative effects of political conservatism and trust in traditional media, and a substantial effect of concerns about the societal impact of accepting vaccination (which was also observed in Study 1).
We next assessed the effects on negative MMR and HPV vaccine attitudes. As can be seen in Table 4, there were some differences with Study 1. Similar to the results above, the results for MMR skepticism showed a (small) effect of faith in science, and no effect of spirituality. However, we again observed that there was an initial effect of spirituality, β = .20, p < .01, 95% CI = (.04, .27), which became nonsignificant upon adding faith in science and scientific literacy to the model. A bootstrapping analysis of 5000 samples (Preacher and Hayes, 2004; Process Macro Model 4) confirmed that the effect of spirituality was fully mediated by (low) faith in science, with an indirect effect of .06 (SE = .02), 95% CI = (.02, .12). Scientific literacy was again a significant predictor and there was an additional effect of religious orthodoxy. As in Study 1, societal impact concerns explained additional variance. For HPV skepticism, the effect of spirituality was significant, while faith in science was not. Scientific literacy was a significant predictor, and there was a negative effect of trust in traditional media.
Genetic modification. The final model explained 32% of the variance. Contrary to our third prediction and the results of Study 1, scientific literacy was not a significant predictor of GM skepticism. Also contrasting the results of Study 1, spirituality was not a significant predictor (β = .12, p = .11). We did observe an effect of low faith in science, as well as an additional small effect of moral purity concerns (which is in line with previous work; e.g. Scott et al., 2016) and of gender (male participants were more skeptical). Finally, concerns about the societal impact of accepting GM contributed substantially to the explained variance.
Evolution. As in Study 1, and supporting our fourth prediction, evolution skepticism was primarily predicted by religiosity. Since we did not have a measure of religious identity in this study, the strongest predictor in this analysis was religious orthodoxy, which together with low faith in science accounted for 19% of the explained variance. None of the additional predictors were significant.
Faith in science and science support. As in Study 1 and confirming our main prediction, spirituality was the strongest negative predictor of general faith in science. There was also a small effect of gender (female participants indicated a stronger faith in science). In addition, conspiracy thinking was a negative predictor and trust in traditional media was a positive predictor. There was also an unexpected small positive effect of perceived corporate corruption of science. The final model explained 24% of the variance.
As in Study 1, faith in science was the strongest predictor of the willingness to support science. However, contrasting our main prediction and the results of Study 1, there was no initial effect of spirituality on science support. In addition, there was a small negative effect of political conservatism and a small positive effect of religious orthodoxy. None of the additional predictors further contributed to the 12% explained variance in the final model.
Discussion
The primary goal of the current research was to investigate the heterogeneity of science skepticism beyond the US cultural context. Results of two studies partially mirror results obtained previously among US participants (Rutjens et al., 2018b), while at the same time highlighting various important, and largely predicted, cultural differences, most notably regarding the role of religiosity versus spirituality. Overall, the current studies again confirm the heterogeneous nature of science skepticism, as observed previously (Rutjens et al., 2018b). The most important similarities and differences with previous work are discussed below.
Hypothesized effects
Our main prediction was that spirituality would replace religiosity as a key contributor to skepticism about vaccination, low general faith in science, and unwillingness to support science. Results largely supported this prediction. Vaccination skepticism as measured with the "vaccination causes autism" item was predicted by spirituality but not religiosity in Studies 1 (direct effect of spirituality) and 2 (indirect effect of spirituality via faith in science). The same pattern of results was found for MMR vaccine skepticism (although in Study 2, but not in Study 1, religious orthodoxy was a significant predictor as well) and HPV vaccine skepticism. Low general faith in science was best predicted by spirituality in both studies, although there was a small additional effect of religiosity in Study 1 as well. Unwillingness to support science was predicted by spirituality, via low faith in science, but not religiosity in Study 1. However, this was not the case in Study 2, where besides low faith in science, there was a small effect of religious orthodoxy but no effect of spirituality. In sum, results for vaccine skepticism and general faith in science provide robust evidence for our main prediction, but the evidence provided by the results for science support is inconclusive.
In Study 2, we also found that spirituality was negatively related to climate change skepticism. Although this relation was not predicted and was not observed in Study 1, it is an interesting effect that fits previous work on how spirituality is related to pro-environmental attitudes (Garfield et al., 2014). Moreover, this finding further confirms the importance of acknowledging the heterogeneity of science skepticism (i.e. spiritual individuals are not simply skeptical about science across domains; in some instances, they might even be less skeptical).
The second prediction was that political conservatism predicts climate change skepticism but not other manifestations of science skepticism. Although results of a recent cross-national survey indicate that political conservatism as a main antecedent of climate science skepticism (Hornsey et al., 2018a) might be an exclusively North American phenomenon, we hypothesized that political conservatism would be the main ideological antecedent of climate science skepticism. Results provided partial support for our prediction; there was a small but significant effect in Study 1, but the effect of political conservatism in Study 2 was not significant. Thus, the association between political conservatism and climate change skepticism was weaker than that observed in previous research among US participants (Rutjens et al., 2018b; also see Hornsey et al., 2018a).
The third prediction was that low scientific literacy is the main driver of GM skepticism and contributes to vaccine skepticism. GM has been observed to be one area of science in which skepticism is primarily associated with a lack of scientific literacy, as opposed to individual differences in ideology or beliefs (McPhetres et al., 2019; Rutjens et al., 2018b). The current research provides partial support for this hypothesis: scientific literacy was a predictor of GM skepticism in Study 1 but not in Study 2. A stronger and more consistent predictor of GM skepticism was societal impact concerns, which we return to shortly. Spirituality also contributed to GM skepticism in Study 1, which suggests that there might yet be a belief component to GM skepticism. Future work should further investigate this possibility, ideally across various cultures. We had also predicted that scientific literacy contributes to vaccine skepticism, but support for this prediction was only found in Study 2.
Finally, the current work provides robust evidence for the fourth prediction that religiosity is the prime driver of evolution skepticism.
Additional effects
Recent research has identified conspiracy thinking as an important precursor of vaccine skepticism in particular (e.g. Hornsey et al., 2018b; Jolley and Douglas, 2014). In the current work, we measured conspiracy thinking alongside a number of additional potential predictors of science skepticism. Although conspiracy thinking was a consistent additional predictor of low general faith in science, which is interesting and warrants future work on this relationship, its unique explanatory power in predicting domain-specific science skepticism was modest and inconsistent across the two studies. It is of course possible that a domain-specific measure of conspiracy belief (e.g. exposure to conspiracy content specifically targeting vaccines; Jolley and Douglas, 2017), rather than the current measure, which tapped into a general tendency to engage in conspiracy thinking, would have been a more potent predictor of domain-specific skepticism.
A more robust predictor of vaccine and GM skepticism was the extent to which participants were concerned about the societal impact of accepting the mainstream scientific conclusion that these are safe technologies (Sutton et al., 2018). In other words, perceptions of how dangerous acceptance would be contributed uniquely to skepticism. This is a promising observation, given that such perceptions are likely malleable (unlike ideology and belief; see Rutjens et al., 2018a) and as such might be used to inform possible interventions to reduce skepticism beyond merely addressing information deficits or increasing literacy (which, with the exception of GM, is often not a successful strategy; e.g. Brossard and Lewenstein, 2010; Drummond and Fischhoff, 2017; McPhetres and Zuckerman, 2018; Rutjens et al., 2018b).
Spiritual and/or heterogeneous skepticism
Previous work conducted in the United States has highlighted religiosity as a consistent predictor of vaccine skepticism as well as of general trust in science and attitudes to science (Rutjens et al., 2018b; also see McPhetres and Zuckerman, 2018). The current studies find that religiosity does not play a major role, except in predicting skepticism about evolution. Instead, spirituality was the most consistent predictor of vaccine skepticism and general faith in science, and there was some evidence for an association of spirituality with GM skepticism and willingness to support science (Study 1). Importantly, this does not imply that all science skepticism as investigated in the current work is related to spirituality. First, the findings for evolution skepticism show that it is possible to meaningfully distinguish spirituality from religiosity (spirituality did not contribute to evolution skepticism). Second, spirituality did not contribute to climate change skepticism. As mentioned earlier, in Study 2, we even observed a negative relation between spirituality and climate change skepticism. An additional point about spirituality is that it could mean different things to different people (e.g. Lindeman et al., 2019); future research should look more closely at the various ways in which individuals define their spirituality and how these relate to attitudes toward science.
Thus, corroborating previous work among American samples and confirming the heterogeneity of science skepticism, climate change skepticism is found to be somewhat political, while evolution skepticism is primarily fueled by religiosity. In addition, the added explanatory power of the other predictors that were included, such as scientific literacy, societal impact concerns, and demographics such as age, varies considerably per domain. Moreover, the fact that various factors contributed, to various degrees, to vaccine skepticism, GM skepticism, and general faith in science further speaks to this heterogeneity and points to the complexity and multi-faceted nature of these beliefs. These observations notwithstanding, only one variable was found to contribute most consistently and substantially to general faith in science and skepticism about vaccines; that variable is spirituality.
Limitations and considerations
One important limitation of the current work is its correlational nature. We therefore need to be careful in inferring any causal relations. However, it is worth considering the likely direction of causality that underlies the observed associations; it is unlikely that relatively stable individual differences in political conservatism, religiosity, or spiritual beliefs will change because of fluctuations in skepticism about science. In other words, we cautiously interpret the current results as showing that relatively stable differences in ideologies and beliefs underlie various manifestations of science skepticism. Another limitation concerns the use of convenience samples that are not necessarily representative of the entire Dutch population, although it should be noted that the current samples are demographically more diverse than student samples (see Table 1).
The current research demonstrates that the notion of heterogeneous science skepticism extends beyond the US cultural context and thus speaks to the external validity and generalizability of previous work (Rutjens et al., 2018b). Given the well-documented problems with generalizability in the social sciences (Henrich et al., 2010), the current corroboration and extension of previous work on science skepticism among Dutch individuals is an important first step. Here, it needs to be considered that the Dutch score relatively high on various indices of spirituality as compared with inhabitants of other secularized countries (De Hart, 2014; Van Mulukom et al., in press). Future studies should therefore extend this work by systematically testing the antecedents of science skepticism in samples drawn from a variety of other cultural contexts.
Conclusion
We show that in two community samples drawn from a secular Western European population, the Netherlands, climate science skepticism is modestly related to political conservatism, evolution skepticism is grounded in religiosity, and skepticism about vaccines, as well as low general faith in science, is predominantly grounded in spirituality. Thus, in the secularized cultural context of the Netherlands, contemporary spirituality is a key contributor to science skepticism. Science skepticism is on the rise in secularized countries in particular; the recent decline in vaccine uptake in various non-US Western countries is a prominent example of its consequences (e.g. Hornsey et al., 2018b; Pierik, 2017), and only 59% of the public in Western Europe believe that vaccines are safe (Wellcome Global Monitor, 2018). It is therefore important to further scrutinize the relation between spirituality and science skepticism.
Table 2. Multiple linear regression analyses of science skepticism, faith in science, and science support (Study 1). Standardized beta coefficients and 95% confidence intervals are reported. Impact concerns reflect the topic at hand and were measured with regard to the topics of climate change, vaccination, and GM. *p < .05; **p < .01.
Table 3. Multiple linear regression analyses of science skepticism, faith in science, and science support (Study 2). Standardized beta coefficients and 95% confidence intervals are reported. All dependent variables were measured on 7-point scales, except science support, which was measured on a 10-point scale. Impact concerns reflect the topic at hand and were measured with regard to the topics of climate change, vaccination, and GM.
Table 4. Multiple linear regression analyses of vaccine skepticism (Studies 1 and 2). Tables 2 to 4 display the results of a regression model which includes age and gender, moral purity values, political ideology, religiosity, orthodoxy, spirituality, faith in science, scientific literacy, conspiracy thinking, perceived corruption of science, societal impact concerns and, in Study 2, CRT scores and media trust. We tested for and found no evidence of multicollinearity in any analysis (all variance inflation factors < 1.3 in Study 1 and < 1.8 in Study 2). CRT: cognitive reflection test. Standardized beta coefficients and 95% confidence intervals are reported. All dependent variables were measured on 7-point scales, except science support, which was measured on a 10-point scale. Religiosity (dichotomous) was only measured in Study 1. CRT and media trust were only included in Study 2. *p < .05; **p < .01.
"year": 2020,
"sha1": "4949e041aaab75e7e9f04378b32a3ab23e3bf3fe",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/0963662520908534",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "73c75ce515ff7c58ff85ebf2051e11d2696c7ea7",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
The Prognostic Importance of TAPSE in Early and in Stable Cardiovascular Diseases
The identification of predictors of major cardiovascular events (MACES) represents a big challenge, especially in early and stable cardiovascular diseases. This prospective study comparatively evaluated the prognostic importance of left ventricular (LV) and right ventricular (RV) systolic and diastolic function, pulmonary artery pressure (PAP) and pulmonary vascular resistance (PVR) in a cohort of stable patients with cardiovascular risk factors. The LV ejection fraction, mitral annular plane systolic excursion (MAPSE), tricuspid annular plane systolic excursion (TAPSE), functional mitral regurgitation (FMR), Doppler tissue imaging of the mitral and tricuspid annulus with estimation of systolic and diastolic peaks, tricuspid regurgitation velocity (TRV), pulmonary velocity outflow time integral (PVTI), mean pulmonary artery pressure (MPAP) and PVR were estimated at enrollment. During the follow-up, MACES and all-cause mortality were recorded. 369 subjects with or without previous MACES were enrolled. Bivariate analysis revealed that LVEF, TAPSE, MPAP, TRV, PVR, LV diastolic function, and FMR were associated with the endpoints. When computing the influence of covariates on the primary endpoint (all-cause mortality and MACES) through Cox analysis, only LV diastolic function and TAPSE entered the final model; for the secondary endpoint (MACES) only TAPSE entered. TAPSE was able to predict MACES and all-cause mortality in early and stable cardiovascular diseases. The use of TAPSE should be implemented.
Introduction
Many clinical variables and ultrasound parameters play a prognostic role in advanced or unstable cardiovascular diseases [1]. The identification of predictors of major cardiovascular events (MACES) in early and stable cardiovascular diseases is even more interesting [2]. Only a few commonly used echocardiographic parameters, expressions of left ventricular (LV) systolic and diastolic function, functional mitral regurgitation (FMR), and right ventricular (RV) function, play a prognostic role in the overall population and in stable patients with previous MACES [3][4][5][6]. Evidence is accumulating in this field, and new echocardiographic parameters, such as global longitudinal strain, seem to have additional value [7].
To our knowledge, no comparative study has yet been conducted in stable patients with cardiovascular risk factors. This prospective, observational cohort study was designed to ascertain the prognostic value of old and new ultrasound parameters of LV and RV systolic and diastolic function (principally derived from mitral and tricuspid annular motion), FMR, pulmonary artery pressure (PAP) and pulmonary vascular resistance (PVR) in stable patients with cardiovascular risk factors, with or without previous MACES.
Methods
This study was performed in accordance with the Ethical Standards of the 1975 Helsinki Declaration revised in 2013. The study was approved by the Ethics Committee of Modena (protocol number 238/2010, date of approval 13 April 2011) and written informed consent was obtained from participants before the enrollment.
Patients referred to our Echolab from October 2011 through August 2014 were eligible. Data on the patients' medical history and cardiovascular risk factors were collected. Subjects with at least one of the major cardiovascular risk factors, or with previous MACES (myocardial infarction and unstable angina, overt heart failure decompensation and stroke) who had been stable during the last 12 months, were eligible. Patients with MACES during the last 12 months were excluded. Patients suffering from severe valvular diseases, chronic obstructive pulmonary disease, severe pulmonary hypertension, congenital heart disease, atrial fibrillation, with cardiac stimulators or with bad acoustic windows were also excluded.
According to the recommendations of the American Society of Echocardiography/European Association of Cardiovascular Imaging (ASE/EACVI), a complete echocardiographic study was performed at enrollment, including M-mode, two-dimensional, pulsed and continuous Doppler spectral recording, as well as Doppler tissue imaging (DTI) evaluation of the mitral and tricuspid lateral annulus [8,9]. Two-dimensional images were obtained in the parasternal long- and short-axis views, in the apical 4- and 2-chamber views and in the subcostal view. Chamber sizes and wall thicknesses were measured [10,11].
Two echocardiographic systems (Sequoia 512, Acuson Siemens, Mountain View, USA; Vivid E9, GE Healthcare, Norwalk, USA) were used. For each patient the following parameters were estimated: LV ejection fraction (EF), mitral and tricuspid annular plane systolic excursion (MAPSE, TAPSE), FMR, pulmonary valve outflow time-velocity integral (PVTI), tricuspid regurgitation velocity (TRV), mean pulmonary artery pressure (MPAP) and PVR [12]. Mitral and tricuspid pulsed Doppler flows and DTI of the mitral and tricuspid annulus were evaluated. LV and RV systolic and diastolic peaks were estimated, and a multiparameter evaluation of LV and RV diastolic function was then performed.
LV EF was measured through the biplane method of discs (modified Simpson's rule), obtained from the apical 4- and 2-chamber views by determining the end-diastolic and the end-systolic volumes. Reduced LV EF was defined as <55% [13].
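As an illustration, the method of discs treats the ventricle as a stack of elliptical discs whose orthogonal diameters come from the traced 4- and 2-chamber borders. The disc count and input values below are illustrative assumptions:

```python
import math

def simpson_biplane_volume(diam_4ch_cm, diam_2ch_cm, long_axis_cm):
    """Sum the volumes of elliptical discs stacked along the LV long axis (result in ml)."""
    n = len(diam_4ch_cm)                 # typically 20 discs
    disc_height = long_axis_cm / n
    return sum(math.pi * a * b / 4.0 * disc_height
               for a, b in zip(diam_4ch_cm, diam_2ch_cm))

def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Illustrative volumes (ml): a normal-range example.
print(ejection_fraction(edv_ml=120.0, esv_ml=48.0))  # 60.0, above the 55% cut-off
```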
MAPSE represents the systolic movement of the base of the LV free wall; it was measured in the apical 4-chambers view, determining with the M-mode guide the maximal excursion of the LV free wall at the junction with the mitral valve plane from the lowest position to the systolic peak. Reduced MAPSE was defined as <1.5 cm [14].
TAPSE represents the longitudinal function of the RV by determining, with the M-mode guide, the maximal systolic excursion of the RV free wall at the junction with the tricuspid valve plane in the apical 4-chambers view. As with other regional methods, it assumes that the displacement of the basal segment is representative of the entire RV function. Reduced TAPSE was defined as <1.7 cm [15].
FMR is a dynamic condition whose severity varies depending on loading conditions. FMR was assessed at rest through vena contracta width in the parasternal long-axis view, effective regurgitant orifice area, left atrial size and regurgitant volume. According to all these parameters, FMR was then classified into four classes: absent, mild, moderate and severe [16].
PVTI was measured placing a pulsed wave sample volume in the right ventricular outflow tract at the level of the pulmonic valve in the parasternal short-axis view.
MPAP was obtained, in the absence of pulmonary outflow tract obstruction and/or pulmonary valve stenosis, when pulmonary regurgitation was present, by applying the simplified Bernoulli equation to the early diastolic peak of pulmonary valve regurgitation [17], or was derived from pulmonary artery systolic pressure [18]. Increased MPAP was defined as >25 mmHg. PVR was calculated from the peak of the tricuspid regurgitation velocity and PVTI, placing a pulsed wave sample volume in the right ventricular outflow tract at the level of the pulmonic valve in the parasternal short-axis view [12]. Increased PVR was defined as >3 Wood Units.
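These Doppler-derived pressures reduce to a few arithmetic steps. The sketch below combines the simplified Bernoulli equation with commonly published conversion formulas (a Chemla-type MPAP estimate and an Abbas-type PVR formula); that these match the exact equations of references [12,17,18] is our assumption:

```python
def bernoulli_gradient_mmhg(velocity_m_s):
    """Simplified Bernoulli equation: pressure gradient (mmHg) = 4 * v^2."""
    return 4.0 * velocity_m_s ** 2

def pasp_mmhg(trv_m_s, rap_mmhg=5.0):
    """Pulmonary artery systolic pressure from TRV plus an assumed right atrial pressure."""
    return bernoulli_gradient_mmhg(trv_m_s) + rap_mmhg

def mpap_from_pasp(pasp):
    """Chemla-type approximation: MPAP = 0.61 * PASP + 2."""
    return 0.61 * pasp + 2.0

def pvr_wood_units(trv_m_s, rvot_vti_cm):
    """Abbas-type Doppler estimate: PVR = TRV / VTI(RVOT) * 10 + 0.16."""
    return trv_m_s / rvot_vti_cm * 10.0 + 0.16

# Illustrative values: TRV 2.8 m/s, RVOT VTI 18 cm, assumed RAP 5 mmHg.
systolic = pasp_mmhg(2.8)                  # 4*2.8^2 + 5 = 36.4 mmHg
print(round(mpap_from_pasp(systolic), 1))  # ~24.2 mmHg, below the 25 mmHg cut-off
print(round(pvr_wood_units(2.8, 18.0), 2)) # ~1.72 Wood units, below the 3 WU cut-off
```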
LV systolic peak (LVSyP) velocity was measured by determining the DTI systolic peak with the sample volume placed at the junction of the LV free wall with the mitral valve plane. Reduced LVSyP was defined as <9 cm s−1 [19].
RV systolic peak (RVSyP) and RV presystolic peak (RVPrP) velocities were measured by determining the DTI presystolic and systolic peaks with the sample volume placed at the junction of the RV free wall with the tricuspid valve plane. Reduced RVSyP was defined as <10 cm s−1 [20]; RVPrP is usually greater than RVSyP, but a reference value has not been defined yet [21].
LV diastolic function was assessed by studying the mitral inflow and DTI of the mitral lateral annulus. E and A peaks, their ratio, the E deceleration time, e' and a' peaks, their ratio and the E/e' ratio were calculated. LV diastolic filling patterns were classified by the combined quantitative analysis of these parameters into four classes: normal, impaired relaxation, pseudonormal and restrictive [9]. Impaired LV relaxation produced a low E and e' velocity, a high A and a' velocity, a decreased E/A ratio and an increased E deceleration time. The pseudonormal filling pattern cannot be recognized with the evaluation of the mitral inflow pattern alone but needs additional assessment, such as the Valsalva manoeuvre, e' evaluation and E/e' ratio estimation. The restrictive pattern showed a higher E wave and greater E/A and E/e' ratios [22].
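Conceptually, this multiparameter classification amounts to a decision rule over the measured indices. The cut-offs in the sketch below are simplified, illustrative values, not the algorithm actually applied in the study:

```python
def lv_filling_pattern(e_a_ratio, decel_time_ms, e_over_e_prime):
    """Illustrative rule-of-thumb classification of LV filling patterns."""
    if e_a_ratio < 0.8 and decel_time_ms > 200:
        return "impaired relaxation"   # low E, high A, prolonged deceleration
    if e_a_ratio > 2.0 and decel_time_ms < 160:
        return "restrictive"           # tall E wave, short deceleration time
    if e_over_e_prime > 14:
        return "pseudonormal"          # normal-looking inflow, raised filling pressure
    return "normal"

print(lv_filling_pattern(0.7, 240, 8))   # impaired relaxation
print(lv_filling_pattern(1.2, 180, 16))  # pseudonormal
```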
RV diastolic function was assessed studying the tricuspid inflow and DTI of the tricuspid lateral annulus. From the apical 4-chamber view, the Doppler beam should be aligned parallel to the RV inflow; alignment could be facilitated by displacing the transducer medially toward the lower parasternal region. The parameters used to assess RV diastolic function were the same as those used to assess the LV diastolic function. E and A peaks, their ratio, e' and a' peaks, their ratio and the E/e' ratio were estimated. RV diastolic function was then classified according to these parameters into three classes: normal, impaired relaxation and restrictive [11].
All the echocardiographic parameters were measured at end-expiration during quiet breathing, and three measurements on consecutive heart cycles were averaged. Special care was given to obtain an ultrasound beam parallel to the direction of the annular motion and the transvalvular flows, and also to optimize the focus, gain, and compression settings (to obtain the most accurate endocardium visualization). If necessary, echocardiographic parameters were calculated from multiple views. Intra- and inter-observer variability was calculated.
The enrolled patients were followed up every 6 months through clinical and electrocardiographic evaluation or through consultation of hospital databases. During the follow-up period, MACES (myocardial infarction and unstable angina, overt heart failure decompensation and stroke) and all-cause mortality were recorded.
Statistical Analysis
Continuous variables are displayed as means ± standard deviation, while categorical data are displayed as frequencies. A two-tailed p value ≤ 0.05 was considered statistically significant, with 95% confidence interval.
Bivariate analysis was used to find which ultrasound parameters were associated with all-cause mortality and MACES (primary composite endpoint) and with MACES (secondary endpoint). An independent-sample t-test was used for continuous variables and a Chi-squared test for categorical variables. Cox regression analysis was used to find a model predictive for the endpoints; only the meaningful parameters found at bivariate analysis were entered into the Cox model, stratifying according to the presence of previous MACES. The sample size was calculated. SPSS/PC release 2013 (IBM Corp., Armonk, NY, USA) was used.
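For illustration, a stratified Cox model of this kind can be fitted with the Python lifelines package; the data below are simulated and the variable names are ours:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 200
df = pd.DataFrame({
    "tapse_cm":           rng.normal(2.2, 0.4, n).round(2),
    "lv_diastolic_grade": rng.integers(0, 3, n),
    "previous_maces":     rng.integers(0, 2, n),   # stratification variable
})
# Hypothetical follow-up: lower TAPSE shortens event-free time.
df["time_days"] = (rng.exponential(900, n) * df["tapse_cm"] / 2.2).round() + 1
df["event"] = rng.integers(0, 2, n)                # primary composite endpoint

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="event", strata=["previous_maces"])
cph.print_summary()  # hazard ratios with 95% confidence intervals
```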
Results
A total of 1667 consecutive patients referred to our Echolab were assessed for study eligibility; of these, 369 were enrolled (mean follow-up 1178 ± 391 days). Table 1 shows the clinical features, the prevalence of cardiovascular risk factors, drug treatments and exclusion criteria. Table 2 shows the mean values and frequencies of the estimated ultrasound parameters in the entire cohort and in patients with and without previous MACES. During the follow-up period, 55 MACES were recorded (20 myocardial infarctions-unstable anginas, 27 heart failure decompensations and 8 strokes) and 29 patients died (all-cause mortality). Bivariate analysis revealed the following parameters were related to all-cause mortality and MACES: LVEF, TAPSE, MPAP, TRV, PVR, LV diastolic function, and FMR (Table 3). When computing the influence of covariates on the primary composite endpoint (all-cause mortality and MACES) through Cox analysis, LV diastolic function and TAPSE entered the final model (Table 4). When computing the influence of covariates on the secondary endpoint (MACES) through Cox analysis, only TAPSE entered the final model. Both the primary and the secondary endpoint were more frequent in patients with previous MACES (Figure 1a,b). The sample size turned out to be appropriate.
Discussion
Cardiovascular risk factors have an early influence on the heart, vessels and lungs [23]. Previous studies demonstrated the prognostic importance of the left heart also in early cardiovascular diseases [3] but we do not know so much about the right heart, for a long time considered a useless bystander.
Concerning the LV, RV has peculiar characteristics such as a larger volume, a greater longitudinal contraction and a smaller mass. RV has a greater dependence on preload (determined by RV stroke volume, tricuspid and pulmonary regurgitation) and on afterload (determined by the forces that oppose RV output and a reflection of PAP and PVR) [24]. PAP is determined by cardiac output, properties of the vasculature (resistance, capacitance and impedance) and atrial filling pressure. The assessment of RV afterload highlights the fascinating role played by PVR, closely influenced by pressure changes [25]. Moreover, LV and RV function are influenced by ventricular interdependence [26]. These interrelations are not well known, but we can hypothesize a clinical role for RV also in the early stages of cardiovascular diseases.
This study simultaneously analyzed old and new echocardiographic indexes of LV and RV function derived from annular motion, PAP and PVR and revealed the lack of importance of most of the considered parameters. In early and stable cardiovascular diseases echocardiography did not provide a powerful prognostic role except for LV diastolic function, for the primary endpoint, and TAPSE, for both the primary and the secondary endpoints. Most of the studied parameters and TAPSE were within the normal range but TAPSE turned out to be a more powerful predictor of outcome than LV function, FMR and PAP, known predictors of MACES and mortality in advanced cardiovascular diseases [4,5].
The clinical importance of RV function in the early stages has not been completely identified. We previously reported in a small cohort of stable outpatients with a poorer echocardiographic evaluation that TAPSE-within the normal range-was able to predict MACES [27]. Moreover, PAP, related to RV function, demonstrated to be a powerful predictor of mortality in the general population of the Olmsted Country [28].
In the overall population of the Copenhagen City Heart Study with cardiovascular risk factors, Modin and colleagues have recently shown that TAPSE was an independent predictor of cardiovascular death as an expression of LV diastolic dysfunction [29]. This study, in a smaller cohort with a more complete echocardiographic evaluation, confirms Modin et al.'s observation about TAPSE and LV diastolic function. They are probably linked to each other, and both are early expressions of pressure and volume overload.
TAPSE is easy to measure, reproducible and has unique characteristics which derive from forces that contribute to RV preload and afterload [15]. Similarly to other regional methods, it assumes that the displacement of the free wall basal segment represents the entire RV function, an assumption that is less valid when there are regional wall motion abnormalities. However, TAPSE has many validations and many studies support its utility [11].
TAPSE represents the great longitudinal contraction of the RV and seems to perceive early increases in vascular stiffness, preload and afterload. RV function indexes derived from DTI are not useful in the early stages since they are less load-dependent, while the left heart is a powerful structure whose function is usually preserved, especially if measured with standard techniques [7].
Limits
The cohort was heterogeneous: patients in primary prevention with cardiovascular risk factors and stable patients in secondary prevention were enrolled. Moreover, cardiovascular risk factors impair cardiac, vascular and lung function through different mechanisms. The echocardiographic evaluation was made only with standard techniques, especially those derived from annular motion.
Conclusions
This study confirms that TAPSE has a pivotal role at the interface between LV and RV function. Larger studies are required, but the evidence is growing: the simple use of TAPSE in early and stable cardiovascular diseases should be implemented and, for this purpose, TAPSE reference limits should probably be reconsidered.
"year": 2020,
"sha1": "3ac839678a455952117a0088ae59d45dadc394bb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2308-3425/7/1/4/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2e8d669c31cf25f4881b88aba98b1c29aa612760",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Lack of agreement between tonometric and gastric juice partial carbon dioxide tension.
STATEMENT OF FINDINGS: Our goal was to compare measurement of tonometered saline and gastric juice partial carbon dioxide tension (PCO2). In this prospective observational study, 112 pairs of measurements were simultaneously obtained under various hemodynamic conditions in 15 critical care patients. Linear regression analysis showed a significant correlation between the two methods of measuring PCO2 (r2 = 0.43; P < 0.0001). However, gastric juice PCO2 was systematically higher (mean difference 51 mmHg). The 95% limits of agreement were 315 mmHg and the dispersion increased as the values of PCO2 increased. Tonometric and gastric juice PCO2 cannot be used interchangeably. Gastric juice PCO2 measurement should be interpreted with caution.
Introduction:
In recent years there has been growing interest in tonometric estimation of gastric intramucosal pH (pHi). More recently, attention has focused on the gradient between intraluminal and arterial PCO 2 . pHi appears to be a useful diagnostic and prognostic tool in critically ill patients, and may also be used as a therapeutic guide. However, intraluminal PCO 2 is the parameter measured to calculate pHi, and it is assumed to be equivalent to the PCO 2 of the upper layers of the gastric mucosa. Direct measurement of PCO 2 in gastric juice might offer advantages over tonometry. Tonometer costs could be saved, and equilibration time would no longer be necessary. Additionally, preanalytic factors that account for poor reproducibility, such as inadequate volume of saline in the tonometer, errors in the dwell time of the sample or in the technique used to aspirate saline, mixing of the sample with tonometer dead space and delay in analysis, could be prevented. Nevertheless, to our knowledge few experimental or clinical studies have examined PCO 2 in gastric juice. Moreover, no comparison with simultaneous tonometric samples has been performed. Our goal was to compare simultaneous measurement of PCO 2 in gastric juice and in saline samples from a tonometer. Data from the present study show that gastric juice PCO 2 is systematically higher. Furthermore, differences widen at high PCO 2 values, and data dispersion becomes even more striking. Therefore, tonometric PCO 2 and gastric juice PCO 2 are not interchangeable.

Patients and methods:

The present study was approved by the local ethics committee, and informed consent was obtained from the next of kin of each patient. We studied 15 consecutive mechanically ventilated patients from a medical/surgical intensive care unit, in whom tonometric monitoring was indicated by attending physicians. All patients were receiving 50 mg intravenous ranitidine every 8 h. Gastric tonometers were filled with saline, which was extracted after 90 min of equilibration time. At the same time, gastric juice was anaerobically extracted from the aspiration port of the tonometer. The initial 20 ml was discarded. PCO 2 in both samples was measured using a blood gas analyzer (AVL 945; AVL List GMBH, Gratz, Austria). These measurements were taken at various time points in each patient, and under various haemodynamic and oxygen transport conditions. All measurements were performed with the patient fasted. Correlation between the two measurements was examined using the Bland-Altman technique. We also performed an in vitro study to quantify the precision and bias of the AVL 945. For this purpose, a stable PCO 2 in saline solution was achieved by bubbling 5% carbon dioxide calibration gas.

Results:

We performed 112 pairs of measurements in 15 patients. Table 1 shows clinical data and the first values of arterial, tonometered and gastric juice PCO 2 for each patient. Regression analysis demonstrated a significant correlation between both methods of measuring PCO 2 (r 2 = 0.43; gastric juice PCO 2 = -28.79 + [2.55 × tonometric PCO 2 ]; P < 0.0001; Fig. 1). However, the bias calculated as the mean difference of gastric juice and tonometric PCO 2 was 51 mmHg. The 95% limits of agreement were 315 mmHg (Fig. 2). For mean PCO 2 values less than 100 mmHg, the bias and the 95% limits of agreement were 19 and 102 mmHg, respectively. As mean PCO 2 increased, the scattering of differences widened (r 2 = 0.71; P < 0.0001).
In an effort to prevent the bias related to multiple measurements per patient, we performed Bland-Altman analysis with the first measurement of each patient. After this the results remained similar (bias 55 mmHg, 95% limits of agreement 216 mmHg). The AVL 945 blood gas analyzer showed a negative bias of 0.97 mmHg and a precision of 2.13 mmHg. This bias was considered negligible, so no further correction was made to saline tonometric values.
Discussion:
The results of the present study show that tonometric PCO 2 and gastric juice PCO 2 are not interchangeable. Gastric juice PCO 2 is systematically higher. At high PCO 2 values the differences widen, and data dispersion becomes even more marked. There is no clear cause for these observations. A possible explanation might be that tonometric PCO 2 is generated over a time interval, whereas gastric juice PCO 2 might reflect rapid changes in mucosal metabolism. Different equilibrium time could also account for data dispersion, but not for the positive bias for gastric juice; rapid changes should occur in both directions. Another potential confounding factor is the ability of blood gas analyzers to measure PCO 2 in gastric juice. Measurement of PCO 2 in 0.9% saline is an important source of error in the estimation of pHi. Variation in PCO 2 values may occur with different PCO 2 equilibration solutions. For example, bias is -66.5% when the Nova Stat Profile 7 blood gas analyzer (Nova Biomedical, Waltham, MA, USA) measures a concentration of 1.95% CO 2 equilibrated in normal saline. However, bias changes to +45.4% when 1.95% CO 2 is equilibrated in human albumin solution 4.5%. It would not be surprising if gastric juice components such as proteins, mucopolysaccharides and others interfere with CO 2 solubility and its subsequent measurement by blood gas analyzers. In this way, intersubject and intrasubject variation in gastric juice composition could also account for data dispersion. Fiddian-Green et al [1] measured PCO 2 in the gastric contents of anaesthetized dogs. They isolated the stomach from the oesophagus and the duodenum with ligatures, and washed it through a catheter with saline. Then, they instilled 250 ml 0.9% saline and took samples to measure PCO 2 and to estimate pHi. Simultaneously, mucosal pH was recorded with a microglass probe. They found a statistically significant correlation between both methods. However, data dispersion in the graph was considerable.

Table 1. Clinical characteristics and first value of arterial, tonometer and gastric juice PCO 2 . ARDS: acute respiratory distress syndrome.

Figure 1. Correlation between gastric juice and tonometric PCO 2 . We performed 112 pairs of measurements of gastric juice and tonometric PCO 2 in 15 critical care patients under different haemodynamic and oxygen transport conditions. The linear regression coefficient is significant. However, the slope value indicates systematic overestimation of gastric juice PCO 2 in relation to saline PCO 2 .

Figure 2. Bland-Altman analysis of the differences between gastric juice and tonometric PCO 2 . The bias calculated as the mean difference of gastric juice and tonometric PCO 2 was 51 mmHg. The 95% limits of agreement were 315 mmHg. The bias and the scattering of differences widened as PCO 2 increased.
We were able to exclude analyzer underestimation of PCO 2 in saline as the cause for the present results. In vitro performance of the AVL 945 in blood was good. It showed a negative bias less than 1 mmHg and a precision of about 2 mmHg. We cannot infer from the present data the technique that should be the gold standard for measuring PCO 2 in gastric mucosa. However, the studies that have established the normal values for pHi, prognostic changes and its uses as a therapeutic index have been performed with tonometry. Hence, more data are needed for the routine measurement of PCO 2 in gastric juice.
Full article

Introduction
In recent years there has been growing interest in tonometric estimation of gastric pHi. More recently attention has focused on the gradient between intraluminal and arterial PCO 2 . pHi appears to be a useful diagnostic and prognostic tool in critically ill patients, and may also be used as a therapeutic guide [2,3]. However, intraluminal PCO 2 is the parameter measured to calculate pHi, and it is assumed as equivalent to the PCO 2 of the upper layers of the gastric mucosa [4].
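For context, pHi is conventionally derived from the tonometric PCO 2 and arterial bicarbonate via the Henderson-Hasselbalch equation, under the assumption that mucosal and arterial bicarbonate are equal. A minimal sketch (omitting the equilibration-time correction factor that some systems apply):

```python
import math

def gastric_phi(arterial_hco3_mmol_l, intraluminal_pco2_mmhg):
    """Henderson-Hasselbalch: pHi = 6.1 + log10(HCO3- / (0.03 * PCO2)).

    Assumes arterial bicarbonate equals mucosal bicarbonate (the standard
    tonometric assumption); 0.03 mmol/L/mmHg is the CO2 solubility coefficient.
    """
    return 6.1 + math.log10(arterial_hco3_mmol_l / (0.03 * intraluminal_pco2_mmhg))

print(round(gastric_phi(24.0, 45.0), 2))  # ~7.35 with normal values
```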
Direct measurement of PCO 2 in gastric juice might offer additional advantages over tonometry. First, tonometer costs could be saved. Second, equilibration time would no longer be necessary. Finally, preanalytic factors that account for poor tonometric reproducibility, such as inadequate volume of saline in the tonometer, errors in timing the dwell time of the sample or in the technique used to aspirate the tonometered saline sample, mixing the sample with air from the tonometer dead space, and delays in specimen analysis, might be prevented [5].
Nevertheless, to our knowledge, very few studies, either experimental or clinical, have examined PCO 2 in gastric juice [1,6,7]. Moreover, no comparison with tonometric samples obtained at the same time has been done. Our goal was to compare PCO 2 obtained simultaneously in gastric juice and in saline samples from tonometers. The results of the present study show that gastric juice PCO 2 is systematically higher, and that for high PCO 2 values this difference widens and data dispersion is even more marked. Therefore, tonometric PCO 2 and gastric juice PCO 2 are not interchangeable.
Patients and methods
The present study was approved by the local ethics committee and informed consent was obtained from the next of kin of each patient.
We consecutively studied 15 mechanically ventilated patients from a medical/surgical intensive care unit, in whom tonometric monitoring was indicated by attending physicians. All patients were receiving 50 mg intravenous ranitidine every 8 h. Gastric tonometers were filled with saline and, after 90 min for equilibration, saline samples were collected, as has previously been described [2]. At the same time, gastric juice was anaerobically extracted from the aspiration port of the tonometer. The initial 20 ml were discarded. A blood gas analyzer (AVL 945) was used to measure PCO 2 in both samples. These measurements were taken at various time points in each patient, and under various haemodynamic and oxygen transport conditions. All measurements were performed with the patient fasted. Agreement between the two measurements was examined using the Bland-Altman technique [8].
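The Bland-Altman computation itself is simple: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 standard deviations of those differences. A minimal sketch with hypothetical paired values:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    return bias, limits, (a + b) / 2.0   # pair means go on the x axis of the plot

# Hypothetical gastric juice vs tonometric PCO2 pairs (mmHg).
gj = [60.0, 95.0, 150.0, 48.0, 210.0]
ton = [45.0, 52.0, 70.0, 44.0, 88.0]
print(bland_altman(gj, ton)[:2])
```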
We also performed an in vitro study to quantify the precision and bias of the AVL 945. For this purpose, a stable PCO 2 in saline solution was achieved by bubbling 5% carbon dioxide calibration gas. Bias (mean difference from expected PCO 2 ) and precision (standard deviation of the bias) of PCO 2 measured in saline were determined by comparison of measured PCO 2 with expected PCO 2 . The latter was calculated from the carbon dioxide content of the calibration gas and from barometric pressure, according to gas laws. Measurements were repeated six times.
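A sketch of the in vitro calculation: Dalton's law gives the expected PCO 2 of the equilibrated saline, and bias and precision follow from the replicate measurements. The water vapor correction and the replicate values are our assumptions, since the paper states only that gas laws were applied:

```python
import numpy as np

def expected_pco2(fco2, p_barometric_mmhg, p_h2o_mmhg=47.0):
    """Dalton's law: expected PCO2 = gas fraction x (barometric - water vapor pressure).

    The 47 mmHg water vapor term assumes full saturation at 37 degrees C; this
    correction is our assumption, not stated explicitly in the paper.
    """
    return fco2 * (p_barometric_mmhg - p_h2o_mmhg)

# Hypothetical replicate measurements of the tonometered 5% CO2 saline (mmHg).
measured = np.array([34.2, 33.8, 34.5, 33.9, 34.1, 33.7])
expected = expected_pco2(0.05, 760.0)          # 35.65 mmHg
bias = (measured - expected).mean()            # mean difference from expected
precision = (measured - expected).std(ddof=1)  # SD of the bias
print(round(expected, 2), round(bias, 2), round(precision, 2))
```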
Results
We performed 112 pairs of measurements in 15 patients. Table 1 shows the clinical data and the first values of arterial, tonometric and gastric juice PCO2 obtained in each patient. Regression analysis demonstrated a significant correlation between the two methods of measuring PCO2 (r² = 0.43; gastric juice PCO2 = -28.79 + 2.55 × tonometric PCO2; P < 0.0001; Fig. 1). However, the bias, calculated as the mean difference between gastric juice and tonometric PCO2, was 51 mmHg, and the 95% limits of agreement spanned 315 mmHg (Fig. 2). When mean PCO2 values were less than 100 mmHg, the bias and 95% limits of agreement were 19 and 102 mmHg, respectively. The scattering of the differences widened as PCO2 increased (r² = 0.71; P < 0.0001).
The two types of measurements are clearly correlated. However, they cannot be considered interchangeable, because gastric juice PCO2 was always higher and the 95% limits of agreement were clinically significant.
In an effort to prevent bias related to multiple measurements per patient, we repeated the analysis taking only the initial measurement of each patient into account. Despite this, the results remained similar (bias 55 mmHg, 95% limits of agreement 216 mmHg).
The AVL 945 blood gas analyzer showed a negative bias of 0.97 mmHg and a precision of 2.13 mmHg. This was considered negligible, so no correction was applied to the tonometric values.
Discussion
The present data show that tonometric PCO2 and gastric juice PCO2 should not be considered interchangeable, because gastric juice PCO2 was systematically higher. With high PCO2 values, this difference widened and the dispersion of the data became even more marked.
There is no clear cause for these observations. It can be argued that tonometric PCO2 is a value measured over a predetermined period, whereas gastric juice PCO2 may reflect minute-to-minute changes in mucosal metabolism. This difference in equilibration time could account for the data dispersion. However, the positive bias for gastric juice is harder to interpret, because such rapid changes should appear in both directions.
Another potential confounding factor is the ability of blood gas analyzers to measure PCO2 in gastric juice. Measurement of PCO2 in saline is an important source of error in the estimation of pHi. The error depends both on the type of blood gas analyzer used and on the PCO2 value itself [9,10]. Different solutions can also modify the PCO2 measurement. For example, the bias is -66.5% when the Nova Stat Profile 7 blood gas analyzer measures 1.95% CO2 equilibrated in normal saline, but changes to +45.4% when 1.95% CO2 is equilibrated in 4.5% human albumin solution [9].
It would not be surprising if gastric juice buffers, such as proteins, mucopolysaccharides and others, interfered with CO2 solubility and its subsequent measurement. In this way, intersubject and intrasubject variation in gastric juice composition could also account for the data dispersion. Therefore, analytic issues related to the various constituents of gastric juice could have added to the observed differences.
Fiddian-Green et al [1] measured PCO2 in the gastric content of anaesthetised dogs. They isolated the stomach from the oesophagus and the duodenum with ligatures and washed it through a catheter with saline. They then instilled 250 ml of saline and intermittently took samples to measure PCO2 and estimate pHi, which was compared with simultaneous direct mucosal pHi measurement performed using a microglass probe. They found a statistically significant correlation between the two methods, but there was considerable data dispersion in the graph. In that study, however, as in others in which PCO2 was measured without a tonometer [6,7], some kind of saline lavage was used. In contrast, we measured PCO2 directly in gastric juice, which could produce marked differences in the results.
Differences between tonometric PCO2 and gastric juice PCO2 could also arise because blood gas analyzers, which are calibrated for blood, may systematically underestimate PCO2 in saline [9,10]. Nevertheless, this explanation is not supported by the present in vitro results. The performance of the AVL 945 was fairly good, with a minor negative bias of less than 1 mmHg and a precision of approximately 2 mmHg. Thus, it appears to be a suitable device for saline tonometry.
We ascribe the high PCO2 values obtained in the present study to shock and subsequent gut hypoperfusion, as can be deduced from the clinical characteristics of our patients. In addition, in the presence of hydrochloric acid, duodenal or gastric bicarbonate buffering could produce substantial amounts of carbon dioxide [11]. Although we cannot exclude spurious generation of carbon dioxide, measurements were performed in the fasted state, the position of the tonometer was checked by X-ray, and ranitidine was administered to each patient. If PCO2 in gastric juice and in the tonometer were interchangeable, their values would be similar regardless of the CO2 source or absolute value. However, gastric juice PCO2 and tonometric PCO2 were quite different. We believe this is the novel point of this study.
Gastric juice PCO2 has been used in some clinical studies [6,7,12]. Mohsenifar et al [12] advocated its advantages over tonometry. However, the present results suggest that the two techniques are not interchangeable. From the present data we cannot infer which should be the gold standard technique for measuring gastric mucosal PCO2. Nevertheless, knowledge regarding pHi has evolved from gastric tonometry; the studies establishing normal values, prognostic changes and its use as a therapeutic guide were performed using tonometry. Hence, until more and clearer data are reported, PCO2 measurement in gastric juice should be interpreted with caution.
"year": 2000,
"sha1": "871daef137ee02a10f1c58f1b079e1354071eac8",
"oa_license": "CCBY",
"oa_url": "https://ccforum.biomedcentral.com/track/pdf/10.1186/cc701",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "871daef137ee02a10f1c58f1b079e1354071eac8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Heat Shock Proteins: Classification, Functions and Expressions in Plants during Environmental Stresses
Heat shock proteins (HSPs) assist in protein folding, a basic cellular process underlying crucial functions including protein assembly, transport and folding under normal conditions, as well as the handling of proteins denatured under stress. Abiotic factors such as increased temperature, drought and salinity negatively affect the reproduction and survival of plants. As chaperones, plant HSPs play a crucial part in conferring tolerance to biotic and abiotic stress. Plants react to critical changes through growth, biochemical and physiological mechanisms, including the expression of stress-responsive proteins, which are regulated by interconnected signaling cascades of transcription factors, including heat stress transcription factors (HSFs).
INTRODUCTION
HSPs are found in almost every cell, from prokaryotes to eukaryotes. They have been comprehensively studied in animals and humans, and more recently their role in plants has been examined in detail. HSPs were first described under heat shock conditions, but are now known to be activated by various stresses such as ultraviolet light, cold, wounding, drought, salinity and pathogenic infection (Lindquist et al., 1988). The term "heat shock protein" is therefore somewhat of a misnomer, because HSPs are expressed not only under high temperature but also under other stresses. HSPs are essential in maintaining cellular homeostasis under both optimal and damaging growth conditions in almost all living cells (Wang et al., 2004). Many HSPs function as chaperone proteins that assist in the folding or refolding of three-dimensional proteins denatured by stress within the cell; accordingly, many folding-assisting proteins are considered HSPs (Wang et al., 2004). HSPs also contribute to the stability of cellular proteins and play a role in protein refolding under diverse environmental conditions (Huttner et al., 2012). Stress-responsive HSPs are located mainly in the cytoplasm. Because of their ubiquitous nature in living cells, HSPs are thought to have a dynamic and diversified role in protein homeostasis. HSPs are generally found in fungi, plants and animals, and HSP transcripts are upregulated at extreme temperatures (Lindquist et al., 1986). Under normal physiological conditions HSPs are localized in the cytoplasm but translocate to the nucleus under stress.
Structure and Classification of Plant HSFs and HSPs
The expression level of HSPs is regulated by specific transcription factors called heat shock factors (HSFs). HSFs usually occur as inactive proteins and share a conserved structure. The basic structural domain of an HSF is the N-terminal DNA-binding domain (DBD), characterized by a central helical motif that binds to heat shock elements (HSEs) in the target promoter and thereby initiates gene transcription under stress conditions (Scharf et al., 2012).
In comparison with vertebrates, plants possess a large number of HSF members, originating from a complex plant-specific superfamily and found across a wide range of species. Plant HSFs are divided into three classes: HsfA, HsfB and HsfC (Kotak et al., 2004). The HR-A/B regions of class B Hsfs are compact and similar to those of all non-plant Hsfs, whereas class A and class C Hsfs have an extended HR-A/B region, owing to insertions of 21 (HsfA) and 7 (HsfC) amino acid residues between the HR-A and HR-B elements, respectively (Scharf et al., 2012). The C-terminal activation domains of HSFs are characterized by short peptide motifs (AHA motifs) that several studies have shown to be essential for activator function (Döring et al., 2000).
Various types of HSPs have been studied in almost all living cells (Bharti et al., 2002). All HSPs share some distinct characteristics, including the presence of a heat-shock domain at the C-terminus. Plant HSPs are classified into five families on the basis of approximate molecular weight. In general, plant HSPs consist of N- and C-terminal ends containing nuclear-binding domain regions I and II, with a middle region lying between the amino and carboxyl ends.
HSP90
HSP90 is one of the most highly expressed proteins in plant cells, accounting for 1-2% of total cytoplasmic protein (Prasinos et al., 2005). HSP90s are cytosolic proteins in eukaryotes with conserved amino acid sequences. The basic eukaryotic HSP90 molecule assists the proper folding and maturation of other protein substrates, many of which are key activators of biological circuits. Because HSP90s occupy a central position in numerous growth-regulating pathways, they are a subject of study in various fields (García-Cardeña et al., 1998).
HSP90 consists of three domains: an N-terminal domain with an ATP-binding motif, a middle domain, and a C-terminal domain (CTD). The CTD is responsible for HSP90 dimer formation, and HSP90 functions as a chaperone in ATP-regulated dimers (Wayne et al., 2011). HSP90 cooperates with HSP70 and acts by interacting with co-chaperone units, including Hip (Hsp70-interacting protein) and Hop (Hsp70/Hsp90-organizing protein). Hsp90s act as mediators in plant abiotic stress signaling pathways (Liu et al., 2006), but the mechanisms are still unclear.
HSP70s
HSP70 is the most widely studied member, found in all plants and animals (Boorstein et al., 1994). It is named for its molecular weight of 70 kDa and acts as a molecular chaperone. HSP70 performs essential functions in various life forms, and its homologs are located in cellular compartments such as the chloroplast, endoplasmic reticulum, mitochondria and cytosol. As a basic chaperone, HSP70 assists in the folding of regulatory proteins and prevents the aggregation of denatured proteins (Tompa et al., 2010).
Exposure to elevated temperature triggers a cellular response that is evident in almost all kingdoms of life, causing the enhanced expression of multigene families that encode molecular chaperones (Hartl et al., 2009). HSP70 responds by binding to hydrophobic regions on the surface of semi-folded proteins, thereby keeping the concentration of aggregation-prone protein low (Mayer et al., 2005). While HSP70s are assembled during heat shock, their constitutive cognates (HSC70) are expressed because of their involvement in maintaining proteostasis.
Small Heat Shock Proteins (sHSPs)
The sHSPs are unique and evolutionarily conserved, with molecular weights ranging from 12 to 42 kDa; HSP20 is so named because its members fall in the range of 15-22 kDa. Members of this family have a distinct 80-100-amino-acid sequence not found in other HSP families. Their function does not depend on ATP, and they bind to client protein substrates in a cooperative manner. These proteins have several characteristics, including the targeting for degradation of proteins that are unable to refold (Reddy et al., 2015). The sHSPs cannot refold client proteins themselves, but they can bind to semi-folded or damaged substrates and thereby prevent the accumulation of irreversibly unfolded protein (Sun et al., 2002). On the basis of cellular location, function and sequence similarity, the sHSPs form a more diverse family than the other HSPs, and studies have shown sHSPs to be localized in the nucleus and other cellular organelles (Waters et al., 2013). The protective role of sHSPs against a wide variety of stresses has been studied in different crop species, including Oryza sativa, Solanum lycopersicum, Glycine max (LopesCaitar et al., 2013), Triticum aestivum (Muthusamy et al., 2017), and Zea mays (Hu et al., 2010).
Regulation of HSPs
In the plant genome, approximately 7% of the coding regions comprise TFs, mostly belonging to large gene families compared with animals and yeasts, such as the Hsfs (Udvardi et al., 2007). Under critical conditions such as high temperature, heat shock factors are activated in the cytosol and undergo trimer formation. These HSFs are phosphorylated in signaling cascades and transferred to the nucleus, where they bind to corresponding cis-elements in the upstream region of HSP genes. The mRNA is then translated into protein, increasing the expression of HSPs in the cytoplasm (Fig. 1) (Young et al., 2010). The heat shock response (HSR) in plants and mammals is coordinated by a set of highly conserved proteins, the HSPs, whose expression is controlled by HSFs (Snyman et al., 2008). The increased number of HSFs in plants helps them to tolerate various external changes during their lifespan. There are about fifteen known HSFs in Arabidopsis thaliana and more than 21 in tomato (Sly), all of which are thought to perform significant functions in stress responses. In Sly, two Hsfs, A2 and B1, are heat inducible, but the expression of HSFA2 and HSFB1 is controlled by HSFA1, which is regarded as the key activator of the HSR (Mishra et al., 2002). HSFA2 is called the "work horse" of the stress response and is the major HSF when a plant is exposed to increased temperature (Mishra et al., 2002).
Abiotic Stress: Signaling Pathways
Plants are subjected to various biotic and abiotic factors that adversely affect their survival and growth (Mittler et al., 2006). To cope with these conditions, plants have evolved various defense strategies that are anatomical (Banon et al., 2004), physical (Wahid et al., 2007) and physiological (Morales et al., 2003). Plants also respond to stress by controlling gene expression at the molecular level, which enables them to ensure their existence under critical conditions. At the molecular and cellular levels, the abiotic stress response is directed through signaling pathways consisting of primary and secondary messengers. Upon stress exposure, signals are initiated that sequentially activate signaling pathways and phosphorylate transcription factors (Vij et al., 2007). When a cell is exposed to stress, receptors such as kinases, G-protein-coupled receptors and regulatory molecules receive the signals, which then activate secondary messengers such as calcium, leading to activation of signaling pathways, e.g., MAP kinases, CDPKs, SOS3/protein kinases, TFs and stress-responsive genes. Components of MAP kinase cascades are converging molecules in abiotic stress signaling pathways. Several plants have been genetically engineered with stress-tolerance traits to gain proper insight into these signaling pathways (Akpinar et al., 2012). The succeeding sections briefly describe the salient components of plant signaling cascades, such as ROS, Ca2+ and MAP kinases (Gong et al., 2013).
Abiotic Stress
Abiotic stresses are critical environmental changes, such as extreme temperatures, drought and ionic imbalance due to salinity, that hinder plant growth, survival and quality. Current environmental change has further worsened the situation. The world population is expected to reach ~9 billion by 2050 (Reguera et al., 2012), so stress-resistant plants should be cultivated using transgenic and omics methods. A number of studies have revealed the role of HSPs under different stresses in several plant species, such as Arabidopsis, Oryza sativa, Glycine max, Populus and Vitis vinifera, and the exact functional mechanisms of HSPs in the plant stress response have recently become a subject of study (Yer et al., 2016).
Drought is the most widely studied adverse environmental factor, producing drastic effects on the growth of many crops under changing climatic conditions. It negatively influences plant morphology, physiology and molecular mechanisms, causing a decline in photosynthetic rate and mineral nutrient transport, which leads to starvation and to ion and hormone imbalance. Water deficiency affects the quantity and quality of protein synthesis, and HSPs are therefore also influenced. The expression pattern of HSPs was studied at the genome level in seedlings of upland rice; expression of HSP70 was elevated under drought (Reddy et al., 2014). HSP genes were upregulated in transgenic Arabidopsis and in Saccharum officinarum under drought (Yer et al., 2016). The effects of combined stress conditions were studied in irrigated and non-irrigated cotton, showing that HSP transcripts accumulated in the non-irrigated plants. HSP genes are expressed at higher rates in drought-tolerant plants than in sensitive ones, as in Cicer arietinum; the same pattern was observed in Populus. HSP17.7 conferred drought tolerance in genetically engineered rice (Agrawal et al., 2016).
Nearly half of the world's irrigated land is affected by salinity. Studies have shown that many HSPs are induced and upregulated under salinity, such as HSP70 in rice seedlings (Ngara et al., 2014) and HSP70 subtypes -9, -12 and -33 in Populus (Manaa et al., 2011). Furthermore, HSP40 was upregulated in Oryza sativa (Wang et al., 2018), and in poplar almost all types of HSPs were upregulated in response to salinity (Manaa et al., 2011). Different HSPs, such as HSP90 in Arabidopsis (Xu, J et al., 2013) and HSP100 and small HSPs in Oryza sativa (Muthusamy et al., 2016), have shown tolerance against salinity. These studies indicate that the HSP response to salinity stress is also genotype- and species-dependent.
High environmental temperature affects almost all organisms, especially plants, because they are sessile and more exposed to changing climatic conditions (Bita et al., 2013). Temperature stress is the most widely studied stress affecting plants. Increased temperature disturbs protein folding and arrangement and causes denaturation. High temperature also produces free radicals, which constitute a secondary stress (Fragkostefanakis et al., 2015). Free radicals, or reactive oxygen species (ROS), act as stress signals in plants and induce the HSR and other stress-responsive proteins. The interactions between HSPs and ROS have been widely studied in many plant species (Driedonks et al., 2015).
A number of studies have identified the activation of heat stress genes under high-temperature stress, and HSPs of different molecular weights show diverse expression patterns under heat stress. HSP90.1 has been reported in Oryza sativa and Arabidopsis (Prasad et al., 2010), and all its classes (A, B, and C) in Glycine max (Xu et al., 2013). Under normal conditions, the regulation levels of all HSPs are monitored and they negatively regulate HSFs (Yamada et al., 2007). Under heat stress, the most frequently reported types are the HSP70 and HSP60 chaperonin families, which maintain proteins in properly folded states using ATP as an energy source (Hartl et al., 2011). Cytosolic HSP70 was involved in heat tolerance in Arabidopsis (Jungkunz et al., 2011).
High expression of HSP70s under high temperature has been reported in a number of crops, such as cotton (Song et al., 2018); vegetables such as tomato and potato; ornamentals (Huang et al., 2019; Liu et al., 2018); and grains such as wheat (Wu et al., 2018). Under normal conditions as well as under combined high-temperature and drought conditions, different studies have demonstrated the expression of chloroplast-localized HSP60, which has roles in Rubisco assembly, chloroplast development and protection. sHSPs and HSP40 were upregulated under high temperature in many plant species (Huang et al., 2019). Different types of HSPs respond to abiotic stresses in different ways, and their expression patterns have been studied in different species using biomolecular techniques, as illustrated in Table 1. The activation of plant HSP20 during stress is directly related to cellular Ca2+ concentration: an elevated Ca2+ level activates calmodulins and/or MAP kinases, following the basic signaling pathway. This activation may help to engage certain regulatory elements in the promoter region, inducing HSP20 expression (Swindell et al., 2007).
Biotic Stress
Recently, sHSPs have also been studied in plant biotic stress. HSP20s are known as stress-responsive proteins that generally function as chaperones. Their participation has been reported against crop pathogens, most importantly viruses, bacteria and nematodes. Genetic evidence indicates that these folding proteins play a crucial role in the plant defense system. One hypothesis is that these chaperones are involved in the accumulation of various resistance proteins and thereby in the coordination of the defense signaling cascade. However, the HSP20 cascade mechanism and its function in plant biotic stress still require better explanation (Lopes-Caitar et al., 2016). Biotic factors, i.e., pathogens such as viruses, bacteria, fungi and other microorganisms, also affect plant quality, growth and development. These microorganisms attack their hosts and deprive them of nutrition, in severe cases causing plant death, and they cause huge pre- and post-harvest losses of crop plants.
In comparison with plants, animals have an adaptive defense mechanism that helps them to cope with critical challenges such as foreign particles and to recognize past infections, whereas plants possess only innate immunity. Plants nevertheless adopt advanced strategies against these stresses (Singla et al., 2016). These defense strategies are encoded in the plant genome, which contains a large number of resistance genes; one such adaptive mechanism is the regulation of HSPs.
Plant productivity is affected by pathogenic bacteria, one of the important biotic stress factors. Bacteria harm their hosts by entering the xylem vessels and blocking the supply of water and other nutrients, resulting in wilting and browning of plants. Plants react to pathogenic bacteria using their first line of defense, pathogen-associated resistance. Plants have also evolved a form of innate immunity known as effector-triggered immunity: when pathogenic bacteria attack, this immunity triggers R proteins, translated from resistance genes, which can recognize pathogen effectors and thereby initiate an effective defense mechanism against the invaders, especially bacteria, as explained in Fig. 2 (Dodds et al., 2010).
HSPs play roles against virulent bacterial strains, as studied in different plant species upon bacterial infection. Upon Ralstonia solanacearum infection of Nicotiana tabacum, the sHSP-class protein HSP17 was induced and accumulated in response to the virulent strain (Maimbo et al., 2007). Pathogenesis-related (PR) proteins were highly expressed after infection, even with the avirulent strain. HSP20 shows downregulation in the presence of PR proteins, even in the virulent state of non-pathogenic bacteria. In contrast, HSP90 has a positive interaction with the above-mentioned pathogen in tobacco. In Arabidopsis, the small HSP classes were studied and their expression was downregulated (Bricchi et al., 2012).
Later studies revealed that the downregulated expression in Arabidopsis was due to the hormone salicylic acid, which is also affected by pathogens during infection (Pavlova et al., 2009). Viruses need the living machinery of plant cells or neighboring cells to spread their infection (Rybicki et al., 2015).
The expression of HSPs depends on the pathogen strain as well as on the time after inoculation.
In some cases, viruses target the expression of HSPs and their subcellular targeting within the cell to establish infection in plants. Rice stripe virus (RSV) infects rice plants and causes disease marked by rigidity of various plant organs, such as the stem and leaves, along with stripes along the veins. Various investigations have revealed a relation between HSP20 and RSV, through which the virus causes host infection at the cellular level. Some heat shock proteins, including HSP70, interact positively with RSV; that is, silencing HSP70 lowers the viral infection (Bolhassani et al., 2019). An earlier study reported that another viral pathogen, Tomato yellow leaf curl virus, regulates programmed cell death by deactivating SlyHSF2. As a result of this inactivation, the HSP90 gene is silenced, which reduces programmed cell death so that the plant remains in a healthy condition for a long time after viral replication and infection.
Furthermore, the "Root knot nematode" coat protein localization from the cytoplasm to the nucleus was linked with HSP70 of S. lycopersicum. The expression of HSP70 get silenced in the Solanum lycopersicum to control virus attack. Contrarily to this HSP90 downregulation promoted viral infection (Gorovits et al., 2017). Latest study on Potato virus Y in potato heat-tolerant and sensitive plants exhibited that HSPs expression get induce in both extreme temperature conditions. Similarly, the expression of PR (pathogen related) proteins was also highly suppressed the viral infection (Makarova et al., 2018). Resistance was developed against viral infection in Cytosin peptidemycin for controlling Rice Black Streaked Dwarf Virus". various inhibitory enzymes, defense genes and HSP also upregulate to control the virus infection (Yu L et al., 2018). Nematodes species such as the root-knot nematode cyst nematodes and root-lesion nematode cause more harm to agriculture. HSPs are involved in resistance to phytonematodes (Li et al., 2015). Studies analyzed the sequenced data of Gossypium after pathogen Rotylenchulus reniformis attack at interval of 3 to almost 12 days. About 23 HSPs and 41 HSPs were induced in susceptible genotypes and resistant genotypes respectively. Pathogens attack the plants causing root necrosis, nutrients and water deficiency. In response to biotic stress position of cis-acting elements in the upstream area of Hsp gene got activated but it depends on the distance from the site of transcription. HSP17.7, in exogenously produced tobacco showed that HSE were activated within 83 base pairs (bp) but the expression upregulates beyond 83 bp (Escobar et al., 2003). This was further confirmed by study in HSP17.6 and HSP18.6, where they expressed within 108 and 49 bps respectively (Barcala et al., 2008). The research shows that the HSE and other regulatory motif regions that also interact with TFs and influenced the expression of sHSP in biotic stresses. HSP90 down regulated in tomatoes showed tolerance to nematodes infection (Bhattarai et al., 2007). On the other hand, HSP90, stimulate the infection caused by nematodes.
CONCLUSION
HSPs are ubiquitous and play a role in maintaining protein homeostasis and cell stability. Different types of HSPs occur in plants and other species, performing different functions. With climate change, cells are exposed to biotic and abiotic stresses that impair plant growth and development or eventually lead to death, and plant HSPs respond to these stresses in different ways. Heat stress initially acts on the quaternary structure of proteins, disturbing their folding and denaturing them. Functional genomic studies have identified various components involved under different stresses. Plants respond to stress through the transcriptional regulation of HSPs at the molecular level: different signaling pathways are activated in response to stress, phosphorylating transcription factors, including heat shock factors, which in turn drive the expression of HSPs. Because multiple genes are affected under stress conditions, no single marker is functional against stress. HSPs have a crucial role in the stress response and can be used in the development of transgenic plants. HSP studies have mostly been performed on model plants under controlled laboratory conditions. HSPs comprise widely distributed types, each with significant roles across different interconnected pathways, and the HSP response is genotype-specific, particularly at the tissue level. Expression analyses show that HSP types are both up- and downregulated under various stresses. There is crosstalk between various hormonal pathways, but its exact nature during simultaneous biotic and abiotic stress still needs to be identified. No definite set of markers has yet been identified that predicts the tolerance mechanism against stress; researchers should identify exact markers with a definite degree of confirmation. There is also a need to understand exactly how HSPs participate in stress-signal sensing, transduction, and the transcriptional regulation of stress genes. Although much work has been conducted on plant abiotic stress and its relation to signaling pathways, further efforts with modern molecular proteomic and transcriptomic tools are needed to gain deeper insight into the molecular mechanisms of the underlying signaling pathways.
"year": 2021,
"sha1": "5245119207717534489198ad750f1b7dbf3c8529",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.35691/jbm.1202.0183",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2ebf26cb70bab5b7022669b1c4b96b76b089b00c",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
On Kernelized Multi-Armed Bandits with Constraints
We study a stochastic bandit problem with a general unknown reward function and a general unknown constraint function. Both functions can be non-linear (even non-convex) and are assumed to lie in a reproducing kernel Hilbert space (RKHS) with a bounded norm. This kernelized bandit setup strictly generalizes standard multi-armed bandits and linear bandits. In contrast to the safety-type hard constraints studied in prior works, we consider soft constraints that may be violated in any round as long as the cumulative violations are small, which is motivated by various practical applications. Our ultimate goal is to study how to utilize the nature of soft constraints to attain a finer complexity-regret-constraint trade-off in the kernelized bandit setting. To this end, leveraging primal-dual optimization, we propose a general framework for both algorithm design and performance analysis. This framework builds upon a novel sufficient condition, which not only is satisfied under general exploration strategies, including upper confidence bound (UCB), Thompson sampling (TS), and new ones based on random exploration, but also enables a unified analysis for showing both sublinear regret and sublinear or even zero constraint violation. We demonstrate the superior performance of our proposed algorithms via numerical experiments based on both synthetic and real-world datasets. Along the way, we also make the first detailed comparison between two popular methods for analyzing constrained bandits and Markov decision processes (MDPs) by discussing the key differences and some subtleties in the analysis, which could be of independent interest to the communities.
Introduction
Stochastic bandit optimization of an unknown function f over a domain X has recently gained increasing popularity due to its widespread real-life applications such as recommendations (Li et al., 2010), cloud resource configurations (Thananjeyan et al., 2021), and wireless power control (Chiang et al., 2008). At each time t, an action x_t is selected and then a (noisy) bandit reward feedback y_t is observed. The goal is to maximize the cumulative reward, or equivalently to minimize the total regret due to not choosing the optimal action in hindsight. A classic model of this problem is the multi-armed bandit (MAB), where X consists of finitely many independent actions. To handle a large action space and correlation among actions, MAB was later generalized to linear bandits, where X is a subset of R^d and f is a linear function with respect to a d-dimensional feature vector associated with each action. However, in many of the aforementioned applications, the unknown function f cannot be well parameterized by a linear function. To this end, researchers have turned to nonparametric models of f via Gaussian processes or reproducing kernel Hilbert spaces (RKHS), which are able to uniformly approximate an arbitrary continuous function over a compact set (Micchelli et al., 2006). In this paper, as in (Chowdhury and Gopalan, 2017; Srinivas et al., 2009), we consider the agnostic setting (i.e., frequentist-type), where f is assumed to be a fixed function in an RKHS with a bounded norm (i.e., a measure of smoothness). We call this setting frequentist-type kernelized bandits (KB).
In addition to a non-linear or even non-convex function f, another common feature of the above practical applications is that there often exist additional constraints in the decision-making process, such as hard constraints like safety or soft constraints like cost. To this end, there have been exciting recent advances in the theoretical analysis of constrained kernelized bandits. In particular, (Sui et al., 2015; Berkenkamp et al., 2016; Sui et al., 2018) propose algorithms with convergence guarantees, while (Amani et al., 2020) is, to the best of our knowledge, the first work that establishes regret bounds for its developed algorithm, although under the Bayesian-type¹ setting. These algorithms mainly focus on KB with a hard constraint such as safety, i.e., the selected action in each round needs to satisfy the constraint with high probability. As a result, compared to the unconstrained case, additional computation is often required to construct a safe action set in each round, which not only incurs an additional complexity burden, but also often leads to conservative performance.
Motivations. In practice, there are also many applications that involve soft constraints that may be violated in any round. The goal is to maximize the total reward while minimizing the total constraint violations. To give a concrete example, let us consider resource configuration in cloud computing platforms where the objective is to minimize the cost while guaranteeing that the latency is below a threshold, e.g., 95% percentile latency. In this case, the latency of a job could be above the threshold as long as the fraction of violations is small, e.g., less than 5%. Another example is throughput maximization under energy constraints in wireless communications where energy consumption constraint is often a soft cumulative one. In both examples, one fundamental question is whether the nature of soft constraints can be utilized to design constrained KB (CKB) algorithms with the same complexity as the unconstrained case while attaining a better reward performance compared to the hard constraints. Furthermore, existing provably efficient algorithms (Sui et al., 2015;Berkenkamp et al., 2016;Sui et al., 2018;Amani et al., 2020) largely build on upper confidence bound (UCB) exploration, which often has inferior empirical performance compared to Thompson sampling (TS) exploration. Hence, another key question is whether one can design provably efficient CKB algorithms with general explorations. In summary, the following fundamental theoretical question remains open:
Can a finer complexity-regret-constraint trade-off be attained in CKB under general explorations?
Contributions. In this paper, we take a systematic approach to affirmatively answer the above fundamental question. In particular, we tackle the complexity-regret-constraint trade-off by formulating KB under soft constraints as a stochastic bandit problem where the objective is to maximize the cumulative reward while minimizing the cumulative constraint violations and maintaining the same computation complexity as in the unconstrained case. Our detailed contributions can be summarized as follows.
• We develop a unified framework for CKB based on primal-dual optimization, which can guarantee both sublinear reward regret and sublinear total constraint violation under a class of general exploration strategies, including UCB, TS, and new effective ones (e.g., random exploration) under the same complexity as the unconstrained case. We also show that by introducing slackness in the dual update, one can trade regret to achieve bounded or even zero constraint violation. This framework builds upon a novel sufficient condition, which not only facilitates the design of new CKB algorithms but provides a unified view in the performance analysis.
• We demonstrate the superior performance of our proposed algorithms via numerical experiments based on both synthetic and real-world data. In addition, we discuss the benefits of our algorithms in terms of various practical considerations such as low complexity, scalability, robustness, and flexibility.
• Finally, we provide the first detailed comparison between two popular methods for analyzing constrained bandits and MDPs in general. Specifically, the first one is based on convex optimization tools, as in (Efroni et al., 2020; Ding et al., 2021), which is also the inspiration for our paper. The other one is based on the Lyapunov-drift argument, as in (Liu et al., 2021a; Liu et al., 2021b). We discuss the key differences in the regret and constraint violation analyses of these two methods and highlight the subtlety in applying a standard queueing technique (i.e., the Hajek lemma (Hajek, 1982)) to bound the constraint violation in the second method. We believe this provides a clear picture of the methodology, which is of independent interest to the communities.
Related Work
In the special cases of KB, such as multi-armed bandits (MAB) and linear bandits (e.g., KB with a linear kernel), there is a large body of work on bandits with different types of constraints, including knapsack bandits (Agrawal and Devanur, 2016; Badanidiyuru et al., 2013; Wu et al., 2015), conservative bandits (Wu et al., 2016; Kazerouni et al., 2016; Garcelon et al., 2020), bandits with fairness constraints (Chen et al., 2020; Li et al., 2019), bandits with hard safety constraints (Pacchiano et al., 2021; Moradipari et al., 2019), and bandits with cumulative soft constraints (Liu et al., 2020, 2021a). Among them, the bandit setting with cumulative soft constraints is the closest to ours in that the goal is also to minimize the cumulative constraint violation. In particular, (Liu et al., 2021a) considers linear bandits under UCB exploration and a zero constraint violation is attained via the Lyapunov-drift method. However, it is unclear how to generalize it to handle general exploration strategies; see a further discussion in Section 5. Broadly speaking, our work is also related to reinforcement learning (RL) with soft constraints, i.e., constrained MDPs. In particular, our analysis is inspired by those on constrained MDPs (Efroni et al., 2020; Ding et al., 2021) (which represent another popular method to handle constrained bandits and MDPs via convex optimization tools), but has significant differences. First, in those works, the constraint violation is O(√T). In contrast, ours can attain bounded and even zero constraint violations by introducing the slackness in the dual update. Second, they only consider UCB-type exploration, but our algorithms can be equipped with various exploration strategies (including UCB), thanks to our general sufficient condition. Third, they focus on either tabular or linear function approximation settings. In contrast, both the objective and constraint functions we consider can be nonlinear. There are also recent works on constrained MDPs that claim to achieve bounded or zero constraint violation (Liu et al., 2021b) based on the Lyapunov-drift method. However, as in the bandit case, it is unclear how to generalize it to handle general explorations beyond UCB.
Finally, we remark that our work is mainly a theory-guided study. In a more practical area of KB, i.e., Bayesian optimization (BO), there have been many BO algorithms developed for the constrained setting; see (Eriksson and Poloczek, 2021;Gelbart et al., 2014;Hernández-Lobato et al., 2016) and the references therein. Although these algorithms have demonstrated good performance in various practical settings, their theoretical performance guarantees are still unclear.
Problem Formulation and Preliminaries
We consider a stochastic bandit optimization problem with soft constraints, i.e., max_{x∈X} f(x) subject to g(x) ≤ 0, where X ⊂ R^d and both f : X → R and g : X → R are unknown functions². In particular, in each round t ∈ {1, 2, …, T}, a learning agent chooses an action x_t ∈ X and receives a bandit reward feedback r_t = f(x_t) + η_t, where η_t is zero-mean noise. The learning agent also observes a bandit constraint feedback c_t = g(x_t) + ξ_t, where ξ_t is zero-mean noise. To capture the feature of soft constraints, the goal here is to maximize the cumulative reward (i.e., Σ_{t=1}^T f(x_t)) while minimizing the cumulative constraint violation (i.e., Σ_{t=1}^T g(x_t)) throughout the learning process.

Learning Problem. Define the cumulative regret and constraint violation as
$$R(T) := \sum_{t=1}^{T}\big(f(x^*) - f(x_t)\big), \qquad V(T) := \Big[\sum_{t=1}^{T} g(x_t)\Big]_+,$$
where x* := argmax_{x∈X: g(x)≤0} f(x) and [·]_+ := max{·, 0}. The goal is to achieve both sublinear regret and sublinear constraint violation. In fact, we will establish bounds on the following stronger version of regret. Specifically, let π be a probability distribution over the set of actions X, and let E_π[f(x)] := ∫_X f(x)π(x) dx and E_π[g(x)] := ∫_X g(x)π(x) dx. We compare our achieved reward with the following optimization problem:
$$\max_{\pi}\ \mathbb{E}_\pi[f(x)] \quad \text{s.t.} \quad \mathbb{E}_\pi[g(x)] \le 0,$$
where both f and g are known, and π* is its optimal solution. The stronger regret is then defined as
$$R^+(T) := T\,\mathbb{E}_{\pi^*}[f(x)] - \sum_{t=1}^{T} f(x_t). \tag{1}$$
Clearly, we have R(T) ≤ R^+(T), since the point mass at x* is feasible for this relaxed problem. Throughout the paper, we assume the following commonly used condition in constrained optimization; see also (Liu et al., 2021a; Yu et al., 2017; Efroni et al., 2020).

Assumption 1 (Slater's condition). There exist δ > 0 and a probability distribution π̄ over X such that E_π̄[g(x)] ≤ -δ.
This is a quite mild assumption, since it only requires that one can find a probability distribution over the set of actions under which the expected cost is strictly negative. This is in sharp contrast to existing KB algorithms for hard constraints, which typically require the existence of an initial safe action (Sui et al., 2018; Amani et al., 2020).
In this paper, we consider the frequentist-type regularity assumption typically used in unconstrained KB works (e.g., (Chowdhury and Gopalan, 2017; Srinivas et al., 2009)). Specifically, we assume that f is a fixed function in an RKHS with a bounded norm. In particular, the RKHS for f is denoted by H_k, which is completely determined by the corresponding kernel function k : X × X → R. Any function h ∈ H_k satisfies the reproducing property: h(x) = ⟨h, k(·, x)⟩_{H_k}, where ⟨·, ·⟩_{H_k} is the inner product defined on H_k. Similarly, for the unknown constraint function g, we assume that g is a fixed function in the RKHS defined by a kernel function k̄, denoted by H_k̄. We assume that the following boundedness property holds throughout the paper.
Assumption 2 (Boundedness). We assume that ||f||_{H_k} ≤ B and k(x, x) ≤ 1 for any x ∈ X, and that the noise η_t is i.i.d. R-sub-Gaussian. Similarly, we assume that ||g||_{H_k̄} ≤ G and k̄(x, x) ≤ 1 for any x ∈ X, and that the noise ξ_t is i.i.d. R-sub-Gaussian.
2 Our main results can be readily generalized to the multi-constraint case with a properly chosen norm.
Algorithm 1: CKB Algorithm
Input: V, ρ, φ₁ = 0, µ₀(x) = µ̄₀(x) = 0, σ₀(x) = σ̄₀(x) = 1 for all x, exploration strategies A_f and A_g
1: for t = 1, 2, …, T do
2:   Based on the posterior models, generate f_t and g_t using A_f and A_g, respectively
3:   Truncate the reward estimate: f̄_t(x) = Proj_{[-B,B]}(f_t(x)) for all x
4:   Truncate the constraint estimate: ḡ_t(x) = Proj_{[-G,G]}(g_t(x)) for all x
5:   Compute the composite score z_{φ_t}(x) := f̄_t(x) - φ_t ḡ_t(x)
6:   Choose primal action x_t = argmax_{x∈X} z_{φ_t}(x); observe r_t and c_t
7:   Update dual variable: φ_{t+1} = Proj_{[0,ρ]}(φ_t + ḡ_t(x_t)/V)
8:   Posterior model: update (µ_t, σ_t) and (µ̄_t, σ̄_t) via GP regression using the new data (x_t, r_t, c_t)

Gaussian Process Surrogate Model. We use a Gaussian process (GP), denoted by GP(0, k(·, ·)), as a prior for the unknown function f, and a Gaussian likelihood model for the noise variables η_t, which are drawn from N(0, λ) and are independent across t. Note that this GP surrogate model is used for algorithm design only; it does not change the fact that f is a fixed function in H_k and that the noise η_t can be sub-Gaussian (i.e., an agnostic setting (Srinivas et al., 2009)). By standard GP regression (Rasmussen, 2003), the posterior distribution for f is GP(µ_t(·), k_t(·, ·)), where
$$\mu_t(x) = k_t(x)^{\top}(K_t + \lambda I_t)^{-1} R_t, \tag{2}$$
$$k_t(x, x') = k(x, x') - k_t(x)^{\top}(K_t + \lambda I_t)^{-1} k_t(x'), \tag{3}$$
with k_t(x) := [k(x_1, x), …, k(x_t, x)]^⊤, K_t := [k(x_u, x_v)]_{u,v∈[t]}, and R_t := [r_1, r_2, …, r_t]^⊤ the (noisy) reward vector. In particular, we also define σ_t²(x) := k_t(x, x). Let K_A := [k(x, x')]_{x,x'∈A} for A ⊆ X. We define the maximum information gain as γ_t(k, X) := max_{A⊆X:|A|=t} (1/2) ln|I_t + λ⁻¹K_A|, where I_t is the t × t identity matrix. The maximum information gain plays a key role in the regret bounds of GP-based algorithms. While γ_t(k, X) depends on the kernel k and the domain X, we simply write γ_t whenever the context is clear. For instance, if X is compact and convex with dimension d, then γ_t = O((ln t)^{d+1}) for the squared exponential kernel, γ_t = O(t^{d(d+1)/(2ν+d(d+1))} ln t) (where ν is a hyperparameter) for the Matérn kernel k_Matérn, and γ_t = O(d ln t) for the linear kernel (Srinivas et al., 2009). Similarly, the learning agent uses a GP surrogate model for g, i.e., a GP prior GP(0, k̄(·, ·)) and Gaussian noise N(0, λ). Conditioned on a set of observations H̄_t = {(x_s, c_s), s ∈ [t]}, the posterior distribution for g is GP(µ̄_t(·), k̄_t(·, ·)), where µ̄_t and k̄_t are computed in the same way as µ_t(·) and k_t(·, ·).
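For concreteness, here is a minimal numpy sketch of the standard GP-regression update in (2)-(3); it is an illustration, not the authors' code, and the RBF kernel with λ = 1 is an example choice.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel k(x, x') = exp(-||x - x'||^2 / (2 l^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_query, lam=1.0):
    """Posterior mean mu_t(x) and variance sigma_t^2(x), as in eqs. (2)-(3)."""
    K_t = rbf_kernel(X_train, X_train)            # t x t Gram matrix K_t
    k_q = rbf_kernel(X_train, X_query)            # t x m cross-covariances k_t(x)
    A = np.linalg.solve(K_t + lam * np.eye(len(X_train)), k_q)  # (K_t + lam I)^{-1} k_t(x)
    mu = A.T @ y_train                            # eq. (2)
    var = rbf_kernel(X_query, X_query).diagonal() - (k_q * A).sum(0)  # eq. (3), diagonal
    return mu, np.maximum(var, 0.0)

# Toy usage: observe noisy rewards at 3 points, query 2 new points
X = np.array([[0.0], [0.5], [1.0]]); y = np.array([0.1, 0.4, 0.2])
Xq = np.array([[0.25], [0.75]])
mu, var = gp_posterior(X, y, Xq)
print(mu, np.sqrt(var))
```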
A Unified Framework for Constrained Kernelized Bandits
In this section, leveraging primal-dual optimization, we propose a unified framework for both algorithm design and performance analysis. In particular, we first propose a "master" algorithm called CKB (constrained KB), which can be equipped with very general exploration strategies. Then, we develop a novel sufficient condition, which not only provides a unified analysis of regret and constraint violation, but also facilitates the design of new exploration strategies (and hence new CKB algorithms) with rigorous performance guarantees.
Algorithm. We first explain our "master" algorithm CKB in Algorithm 1, which is based on primal-dual optimization. Let the Lagrangian of the baseline problem be L(π, φ) := E_π[f(x)] - φ E_π[g(x)]; the associated dual function is D(φ) := max_π L(π, φ), with the optimal dual variable φ* := argmin_{φ≥0} D(φ). Since both f and g are unknown, the agent first generates estimates of them (f_t and g_t, respectively) via exploration strategies A_f and A_g, which capture the trade-off between exploration and exploitation (line 2). Both estimates are then truncated according to the ranges of f and g, respectively (lines 3-4), where Proj is the projection operator. The truncation is necessary for our analysis, but it does not impact the regret bound since it causes no loss of useful information. Lines 5-6 correspond to the primal optimization step that approximates D(φ_t) (i.e., approximating L by L̂ with f and g replaced by f̄_t and ḡ_t); line 6 follows because one of the optimal solutions of max_π L̂(π, φ_t) is simply argmax_x (f̄_t(x) - φ_t ḡ_t(x)). Line 7 is the dual update that works toward minimizing D(φ) by taking a projected gradient step with step size 1/V. The parameter ρ is chosen to be larger than the optimal dual variable φ*, so the projection interval [0, ρ] includes it; this is possible because the optimal dual variable is bounded under Slater's condition (Beck, 2017, Theorem 8.42). Finally, line 8 is the posterior update via standard GP regression for both f and g, as computed in (2) and (3).

Remark 3.1 (Computational complexity). CKB enjoys the same computational complexity as the unconstrained case (e.g., (Chowdhury and Gopalan, 2017)), since the additional dual update is a simple projection and the primal optimization keeps the same flavor as the unconstrained case, i.e., without constructing a specific safe set as in existing constrained KB algorithms designed for hard constraints.
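The following sketch shows how the truncation, primal argmax, and projected dual step of Algorithm 1 fit together over a finite candidate action set; it is a schematic with hypothetical helper names (env is assumed to return the noisy feedback pair), assuming the exploration values f_t and g_t have already been produced from the GP posteriors (e.g., via the sketch above), not a faithful reimplementation.

```python
import numpy as np

def ckb_round(f_t, g_t, phi, actions, B, G, V, rho, env):
    """One round of Algorithm 1: truncate, primal argmax, projected dual step.

    f_t, g_t: arrays of exploration values over the candidate actions
    (e.g., produced by GP-UCB or GP-TS); env(x) returns noisy (r_t, c_t).
    """
    f_bar = np.clip(f_t, -B, B)                      # lines 3-4: truncation
    g_bar = np.clip(g_t, -G, G)
    z = f_bar - phi * g_bar                          # line 5: composite score
    i = int(np.argmax(z))                            # line 6: primal action
    x_t = actions[i]
    r_t, c_t = env(x_t)                              # observe reward and cost feedback
    phi_next = np.clip(phi + g_bar[i] / V, 0.0, rho) # line 7: projected dual step
    return x_t, r_t, c_t, phi_next
```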
We call CKB a "master" algorithm as it allows us to employ different exploration strategies (or called acquisition functions) (i.e., A f and A g ). Therefore, one fundamental question is: How to design efficient exploration strategies such that favorable performance can be guaranteed? In the following, we take a two-step procedure to address this question. We first combine UCB-type exploration with CKB to gain useful insights. This, in turn, will facilitate the development of a novel sufficient condition, which not only is satisfied under very general exploration strategies, but also enables a unified analytical framework for showing both sublinear regret and sublinear constraint violation.
Before that, we first introduce standard UCB and TS explorations under GP as in (Chowdhury and Gopalan, 2017).
Definition 3.2 (GP-UCB and GP-TS Explorations). Suppose the posterior distribution for a black-box function h in round t is given by GP(µ̃_{t-1}(·), k̃_{t-1}(·, ·)), and let β_t be a time-varying sequence. GP-UCB exploration generates h_t(x) = µ̃_{t-1}(x) + β_t σ̃_{t-1}(x) for all x ∈ X. GP-TS exploration generates h_t(x) ∼ N(µ̃_{t-1}(x), β_t² σ̃_{t-1}²(x)) for each x, i.e., a sample from the β_t-scaled posterior given the history.
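As an illustration of Definition 3.2 over a finite candidate set (a sketch with hypothetical variable names; the β_t value below is a placeholder, not a calibrated schedule), the two strategies differ only in whether the confidence width is deterministic or sampled:

```python
import numpy as np

rng = np.random.default_rng(0)

def gp_ucb(mu, sigma, beta_t):
    """GP-UCB: deterministic optimistic estimate h_t(x) = mu + beta_t * sigma."""
    return mu + beta_t * sigma

def gp_ts(mu, sigma, beta_t):
    """GP-TS: h_t(x) ~ N(mu, beta_t^2 sigma^2), sampled per action.

    Note: sampling independently per action is a simplification; drawing a
    joint sample path from the posterior GP is the more faithful variant.
    """
    return mu + beta_t * sigma * rng.standard_normal(mu.shape)

mu = np.array([0.2, 0.5, 0.1]); sigma = np.array([0.3, 0.1, 0.4])
print(gp_ucb(mu, sigma, beta_t=2.0), gp_ts(mu, sigma, beta_t=2.0))
```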
Warm Up: CKB with GP-UCB Exploration
In this section, as a warm-up, we instantiate CKB with GP-UCB exploration; the resulting algorithm is called CKB-UCB. In particular, in CKB-UCB, A_f is a GP-UCB exploration (see Definition 3.2) with a positive β_t sequence (i.e., optimistic with respect to the reward), and A_g is a GP-UCB exploration with a negative β̄_t sequence (i.e., optimistic with respect to the cost). This instantiation enjoys the performance guarantee given in Theorem 3.3.
Remark 3.4. The (reward) regret here is the stronger version, i.e., R^+(T). Compared to the unconstrained case, the regret bound has an additional term ρG√T, which roughly captures the impact of the constraint. As in the unconstrained case, one can plug in different γ_T and γ̄_T to see that both the regret and constraint violation are sublinear for commonly used kernels. For example, for a linear kernel, both γ_T and γ̄_T are of order d ln(T). Finally, the standard "doubling trick" can be used to design an anytime algorithm (i.e., one that does not require knowledge of T) with regret and constraint violation bounds of the same order.
Proof Sketch of Theorem 3.3. We first obtain a key decomposition that holds for any φ ∈ [0, ρ]: by utilizing the dual variable update and some necessary algebra, the quantity T E_{π*}[f(x)] - Σ_{t=1}^T f(x_t) + φ Σ_{t=1}^T g(x_t) can be upper bounded by two terms T_1 + T_2 (defined in (4)-(5)) plus a term ρG√T arising from the dual update. This bound is the cornerstone of the analysis of both regret and constraint violation. Note that T_1 + T_2 is similar to the standard regret decomposition, with an incorporation of the constraint function weighted by φ_t (or φ). Suppose we already have a high-probability bound on it, i.e., T_1 + T_2 ≤ χ(T, φ), where χ(T, φ) is an increasing function of φ. This leads to the following inequality (with V = G√T/ρ) for all φ ∈ [0, ρ]:
$$R^+(T) + \phi\sum_{t=1}^{T} g(x_t) \le \chi(T, \phi) + \rho G\sqrt{T}. \tag{6}$$
The regret bound is then obtained by choosing φ = 0 in (6), and hence R^+(T) ≤ χ(T, 0) + ρG√T. Inspired by (Efroni et al., 2020), we resort to tools from constrained convex optimization to obtain the bound on V(T). First, we have Σ_{t=1}^T g(x_t) = T E_{π′}[g(x)] for some probability measure π′ (the empirical distribution of the chosen actions), by the convexity of the space of probability measures. Then, we have R^+(T) + ρ[Σ_{t=1}^T g(x_t)]_+ ≤ χ(T, ρ) + ρG√T, where ρ[Σ_{t=1}^T g(x_t)]_+ = max_{φ∈{0,ρ}} φ Σ_{t=1}^T g(x_t) (the equality holds by choosing φ = ρ if Σ_{t=1}^T g(x_t) ≥ 0 and φ = 0 otherwise), and the inequality holds by bounding via the RHS of (6) with φ = ρ, since (6) holds for all φ ∈ [0, ρ] and χ(T, φ) is increasing in φ. Based on the above result, we can apply the tool from constrained convex optimization (cf. (Beck, 2017, Theorem 3.60)) to obtain V(T) ≤ (1/ρ)χ(T, ρ) + G√T. The reason we can apply this result is that E_π[h(x)] for any fixed h is a linear function with respect to π (and is thus convex). Finally, it remains to find χ(T, φ) that bounds T_1 + T_2. This can be achieved using results from the unconstrained GP-UCB analysis (cf. (Chowdhury and Gopalan, 2017)): in particular, T_1 ≤ 0, and T_2 is bounded by a weighted sum of posterior standard deviations, which scales as O(√(Tγ_T) + φ√(Tγ̄_T)) up to confidence-width factors. Plugging χ(T, 0) and χ(T, ρ) into R^+(T) and V(T) yields the bounds on regret and on constraint violation, respectively, which completes the proof.
A Sufficient Condition for Provably Efficient Explorations
The above analysis reveals that the key step in obtaining sublinear performance guarantees of Algorithm 1 is to find a sublinear bound on χ(T, φ) that bounds T 1 + T 2 . Motivated by this, in this section, we will establish a sufficient condition on the exploration strategies (i.e., A f and A g ), which guarantees a sublinear χ(T, φ) and hence sublinear regret and sublinear constraint violation. In particular, we show that existing strategies such as GP-UCB and GP-TS both satisfy the sufficient condition. More importantly, this sufficient condition also leads to the development of new exploration strategies (such as random exploration).
We first present the intuition behind the key components of the sufficient condition. Inspired by (Kveton et al., 2019), we mainly focus on three nice events, E^est, E^conc_t, and E^anti_t, to bound T_1 + T_2 in (4)-(5): E^est states that the GP posterior means concentrate around the true functions; E^conc_t states that the generated functions f_t and g_t concentrate around the posterior means; and E^anti_t states that f_t and g_t are optimistic at the optimal points (anti-concentration). Suppose that events E^est and E^conc_t hold with high probability. Then the estimates are close to the true functions, and one can derive a bound on T_2 in (5). Now suppose that events E^est and E^anti_t hold with some positive probability. Then the estimates are optimistic compared to the true functions when evaluated at the optimal points; this probabilistic optimism is the key to bounding T_1 in (4). Note that GP-UCB exploration is optimistic with probability one by definition (see Definition 3.2), and hence T_1 ≤ 0 always holds.
Define the filtration F t as all the history up to the end of round t.
We are now ready to present the sufficient condition for exploration strategies.
With the above sufficient condition, we have the following general performance bounds.
Theorem 3.6. Assume that CKB is equipped with exploration strategies that satisfy the sufficient condition in Assumption 3. Then, under Slater's condition (Assumption 1) and the regularity assumptions (Assumption 2), CKB achieves, with probability at least 1 - α - p_1 for any α ∈ (0, 1), bounds on regret and constraint violation that are sublinear in T. In the following, we show that Theorem 3.6 provides a unified view of the performance of various CKB algorithms.
It is expected that GP-UCB exploration satisfies the sufficient condition, and hence our CKB-UCB also has the performance guarantees given by Theorem 3.6. In particular, Theorem 3.6 yields the same order of constraint violation as Theorem 3.3. The regret bound also has the same order as in Theorem 3.3, with an additional term due to the unified analysis (i.e., ρ c_g(T)√(T γ̄_T)).
Corollary 3.7. GP-UCB with parameter β_t for A_f and parameter -β̄_t for A_g satisfies the sufficient condition.
Proof. By Definition 3.2, f_t(x) = µ_{t-1}(x) + β_t σ_{t-1}(x) and g_t(x) = µ̄_{t-1}(x) - β̄_t σ̄_{t-1}(x). From this, we directly obtain that E^conc_t and E^anti_t hold with probability one. Moreover, the boundedness condition naturally holds.
We can also show that the standard GP-TS exploration in Definition 3.2 satisfies the sufficient condition. Here, we mainly consider the case when π * concentrates on a single point, which allows us to apply the standard anti-concentration results 3 . Thus, we can instantiate CKB with GP-TS explorations, called CKB-TS, that also enjoys the guarantees in Theorem 3.6.
Corollary 3.8. GP-TS with β t = β t and β t = − β t for A f and A g , respectively, satisfies the sufficient condition when π * concentrates on a single point.
Proof. By Definition 3.2, we have that given the history up to the end of round t−1, f t (x) ∼ N (µ t−1 (x), β 2 t σ 2 t−1 (x)) and g t (x) ∼ N ( µ t−1 (x), β 2 t σ 2 t−1 (x)). Thus, for any fixed x ∈ X , by concentration of Gaussian distribution, we have and hence, using the union bound over all x, we obtain ∀x, g,t = 2 β t ln(|X |t). Hence, by union bound, we have P t (E conc t ) ≥ 1 − p 2,t with p 2,t = 2/t 2 . Moreover, when π * concentrates on a single point, by standard anti-concentration result of Gaussian distribution (e.g., Lemma 8 in (Chowdhury and Gopalan, 2017)), we Similarly, we also have P t (E anti t,g ) ≥ p. By independent sampling of f t and g t , we have P t E anti t ≥ p 3 with p 3 = p 2 . The boundedness condition holds due to T t=1 p 2,t ≤ 2 T t=1 1/t 2 ≤ π 2 /3 := C ′ and p 4 = O(p 2 ).
The sufficient condition also enables us to design CKB algorithms with new exploration strategies. In the following, inspired by (Vaswani et al., 2019), we propose a new GP-based exploration strategy, which aims to strike a balance between GP-UCB and GP-TS explorations.
In contrast to GP-UCB, RandGP-UCB replaces the deterministic confidence bound by a randomized one. Compared to GP-TS, RandGP-UCB uses "coupled" noise in the sense that all the actions share the same noise Z t rather than "decoupled" and correlated noise in GP-TS. This subtle difference will not only help to eliminate the additional factor ln(|X |) in GP-TS due to the use of union bound, but also allow us to deal with a general π * . One possible disadvantage of RandGP-UCB (compared to GP-TS) is that GP-TS could be offline oracle-optimization efficient for the step in line 6 of Algorithm 1 while RandGP-UCB (GP-UCB also) is not, which shares the standard pattern as in linear bandits.
. By concentration of Gaussian, we have thanks to the "coupled" noise. Hence, we have P t E conc g,t = 2 β t √ ln t. Thus, by the union bound, we have P t (E conc t ) ≥ 1 − p 2,t with p 2,t = 2/t 2 . By the anti-concentration of Gaussian, we have P Since the noise Z t and Z t are independent, we have P t E anti t ≥ p 3 with p 3 = p 2 . Then, the boundedness condition holds due to C ′ = π 2 /3 and p 4 = O(p 2 ).
Thus, one can instantiate CKB with RandGP-UCB exploration to obtain a new algorithm called CKB-Rand with performance guarantees given by Theorem 3.6. Note that RandGP-UCB with other distributions D can also satisfy the sufficient condition (as discussed in Appendix A).
Further Improvement on Constraint Violations
In the previous section, we have shown that our proposed CKB algorithm (Algorithm 1) is able to attain a sublinear regret (nearly the same order as the unconstrained case) and a sublinear constraint violation when employed with various exploration strategies. One natural question to ask is whether we can further improve the constraint violation bound. In the following, we will show that with a minor modification in Algorithm 1, one can achieve a bounded and even zero constraint violation by trading the regret slightly (but still the same order as before). The modification is to introduce a slackness given by ε in the dual update in Algorithm 1, i.e., φ t+1 = Proj [0,ρ] Intuitively speaking, this can be viewed as if one is working on a new pessimistic constraint function. After obtaining the constraint violation under this new hypothetic constraint, one can subtract εT to find the true constraint violation under the true function g. The catch here is that one needs also change the baseline problem to the following one: max π {E π [f (x)] : E π [g(x)] + ε ≤ 0} so that it matches the new pessimistic constraint function. Let π * ε be the optimal solution to this problem and the obtained regret is only with respect to π * ε rather than π * . Thus, we need to further bound the following difference T E π * [f (x)] − T E π * ε [f (x)] to obtain the true regret bound. To this end, we have the following result.
To show this, we let π ε (x) := (1 − ε δ )π * (x) + ε δ π 0 (x), where π * is the optimal solution to the original baseline problem and π 0 is the Slater's policy satisfying Slater's condition. First, we note that π ε is a feasible solution to the new baseline problem introduced above. To see this, we note that π ε (x) ≥ 0 and Since π * ε is the optimal solution while π ε is a feasible one, we have where in the last step, we use the boundedness of f . Therefore, one can properly choose ε such that the subtraction of εT in the constraint violation can cancel the leading term O( √ T ) (hence bounded or even zero constraint violation) while only incurring an additional additive term of the same order in the regret.
Algorithm 2: Algorithm in the Lyapunov-drift method
Input: V , ε, Q(1) = 0 1 for t = 1, 2, . . . , T do 2 Generate estimate f t (x), g t (x) and truncate them tof t ,ḡ t 3 Pseudo-acquisition function: Choose action x t = argmax x∈X z t (x); observe reward r t , and cost c t 5 Update virtual queue: Posterior model update: using observations to update model
Discussion on Alternative Method
To the best of our knowledge, there exist two popular methods for analyzing constrained bandits or MDPs. They are both based on primal-dual optimization and only differ in the analysis techniques. The first one is based on convex optimization tools as in (Efroni et al., 2020;Ding et al., 2021) and our paper. The other one is based on Lyapunov-drift arguments as in (Liu et al., 2021a;Liu et al., 2021b). For simplicity, we call the first method convex-opt method and the second one as Lyapunov-drift method. Before we provide further discussion, one thing to note is that all existing works only deal with UCB-type exploration for tabular or linear functions, while our paper is the first one that studies general functions with general exploration strategies. Now we first briefly explain the main idea behind the Lyapunov-drift method when applied to our setting (for the UCB exploration only). It basically has the same algorithm as the convex-opt method. One minor change is that in Lyapunov-drift method, the dual variable is not truncated by ρ and is denoted by Q(t), since this dual update is similar to a typical queue length update in queueing theory, i.e., truncated at zero; see Algorithm 2. To bound the regret, Lyapunov-drift method decomposes it as the following one, where ε ≤ δ/2 is the slackness as in the last section.
From this, one can see that Term 1, Term 2, and Term 4 can be easily bounded under UCB-type exploration. In particular, by optimism and well-concentration off t , one has Term 2 ≤ 0 and Term 4 = O( √ T ) (we ignore γ T term in this section for simplicity). Moreover, Term 1 enjoys the bound as in Claim 1. Thus, the only challenge is to bound Term 3, which cannot be naturally bounded by greedy selection as in the standard way, since in the constrained case, the greedy selection is with respect to the combined function. To handle this, one needs the following result, which not only helps to bound Term 3, but also is the key in bounding the constraint violation.
Lemma 5.1. Let ∆(t) := L(Q(t + 1)) − L(Q(t)) = 1 2 (Q(t + 1)) 2 − 1 2 (Q(t)) 2 . For any π, we have Thus, one can see that the first term on the RHS of (8) exists in Term 3 if one chooses π = π * ε . By the optimismḡ t (x) ≤ g(x) and the definition of π * ε , with a telescope summation, one can easily bound Term 3, hence the regret bound. Comparison in regret analysis. Compared to the regret decomposition in our paper, i.e., (4) and (5), (7) in the Lyapunov-drift method is more tailored to UCB-type exploration in the sense that the Term 3 is upper bounded separately using the optimism. As a result, it is unclear to us how to generalize it to handle general exploration strategies where one often need to bound Term 2 + Term 3 + Term 4 together and optimism does not hold in general. In contrast, our decomposition (4) and (5) basically keep the same fashion as in the unconstrained case, which enables us to utilize this structure to handle general exploration strategies.
We now turn to the constraint violation. By the virtual queue length update in Algorithm 2, the key step behind the constraint violation bound is to bound Q(T + 1). To see this, by the virtual queue length update in Algorithm 2, we have where in the last step we uses the well-concentration ofḡ t . To bound the remaining term Q(T + 1), the Lyapunov-drift method resorts to a classic tool in queueing theory, i.e., Hajek lemma (Hajek, 1982), to bound the virtual queue length at time T + 1. The idea behind it is simple: if the queue length drift ∆(t) is negative whenever the queue length is large, then Q(T + 1) is bounded. To establish the negative drift, one resorts to (8) again by choosing π = π 0 . By the definition of π 0 (Slater's policy), the optimismḡ t (x) ≤ g(x) and boundedness off t , one can easily establish a negative drift, and hence the constraint violation. Comparison in constraint violation analysis. Instead of using Hajek lemma, we directly utilize the convex optimization tool to obtain the constraint violation as in (Efroni et al., 2020;Ding et al., 2021), which is conceptually simpler. Moreover, the current constraint violation analysis in the Lyapunov-drift method also relies on the optimism ofḡ t , which does not hold in general explorations beyond UCB. Finally, when applying Hajek lemma to bound the virtual queue length, there exists a subtlety that makes the standard expected version of Hajek lemma fail due to the correlation of virtual queue length Q(t) andḡ t . We give more details on this subtlety in Appendix D.
Simulation Results
In this section, we conduct simulations to compare the performance of our algorithms (i.e., CKB-UCB, CKB-TS, and CKB-Rand, that is, CKB with GP-UCB, GP-TS, and RandGP-UCB explorations, respectively) with existing safe KB algorithms based on both synthetic and real-world datasets. In particular, we
Synthetic Data and Light-Tailed Real-World Data
Synthetic Data. The domain X is generated by discretizing [0, 1] uniformly into 100 points. The objective function f (·) = p i=1 a i k(·, x i ) is generated by uniformly sampling a i ∈ [−1, 1] and support points x i ∈ X with p = 100. With the same manner, we generate the constraint function g. The kernel is k se with parameter l = 0.2. Other parameters include B, R and γ t are set similar as in the unconstrained case (e.g., (Chowdhury and Gopalan, 2017)). Light-Tailed Real-World Data. We use the light sensor data collected in the CMU Intelligent Workplace in Nov 2005, which is available online as Matlab structure 4 and contains locations of 41 sensors, 601 train samples and 192 test samples. We use it in the context of finding the maximum average reading of the sensors. In particular, f is set as empirical average of the test samples, with B set as its maximum, and k is set as the empirical covariance of the normalized train samples. The constraint is given by g(·) = −f (·) + h with h = B/2.
We perform 50 trials (each with T = 10, 000) and plot the mean of the cumulative regret along with the error bars, as shown in Fig. 1. Regret. Our three CKB algorithms achieve a better (or similar) regret performance compared to the existing safe BO algorithms (see Figures 1(a) and 1(b)). Among the three CKB algorithms, CKB-Rand appears to have reasonably good performance at all times. Constraint violation. Since we have V(T ) = 0 under all the algorithms, we study the total number of rounds where the constraint is violated, denoted by N . In the synthetic data setting, our proposed CKB algorithms have N ≤ 5 over T = 10, 000 rounds; in the real-world data setting, CKB-UCB enjoys N = 0 and CKB-Rand has an average N = 38 over a horizon T = 1, 000. Furthermore, we plot the stronger cumulative constraint violations given by T t=1 [g(x t )] + as shown in Figure 1(c), from which we can see that all CKB algorithms achieve sublinear performance even with respect to this stronger metric. Practical considerations. Our proposed CKB algorithms have the same computational complexity as the unconstrained case. In particular, they scale linearly with the number of actions in the discrete-domain case 5 . On the other hand, StageOpt scales quadratically due to the construction of the safe set, and SGP-UCB requires the additional random initialization stage, which leads to linear regret at the beginning of the learning process. Moreover, standard methods for improving the scalability of unconstrained KB can be naturally applied to our CKB algorithms. Finally, both StageOpt and SGP-UCB require the knowledge of a safe action (i.e., one that satisfies the constraint) in advance, and moreover, StageOpt requires f to be Lipschitz and needs to estimate the Lipschitz constant, which impacts the robustness. In contrast, CKB algorithms only require a mild Slater's condition as in Assumption 1, which does not necessarily require the existence of a safe action.
Heavy-Tailed Real-World Data
We further compare different constrained KB algorithms in a new real-world dataset, which demonstrates a heavy-tailed noise. Note that sub-Gaussian noise is required in all the existing theoretical works (including our work). We use this dataset to test the robustness of various constrained KB algorithms. The experimental results tend to show that our three CKB algorithms are more robust in terms of heavy-tailed noise, which is common in practical applications. The detail of this real-world dataset is deferred to Appendix B. Regret. We plot both cumulative regret and time-average regret in this setting (see Figures 2 (a) and (b)). We can observe that in the presence of heavy-tailed noise, our three CKB algorithms have significant performance gain over existing safe KB algorithms. Constraint violation. We focus on the strong metric, i.e., the number of rounds where the constraint is violated, denoted by N . We have that CKB-Rand enjoys an average N = 21 and CKB-UCB has an average N = 47 within the horizon of T = 10, 000. We also plot stronger cumulative constraint violations given by T t=1 [g(x t )] + as shown in Figure 2(c), from which we can see that all CKB algorithms achieve sublinear performance even with respect to this stronger metric.
Proof of Theorem 3.3
Before we present the proof, we first obtain the following lemma on the dual variable.
Lemma 7.1. Under the update rule of φ t in Algorithm 1, we have for any φ ∈ [0, ρ], sampling (cheap) and the "L-BFGS-B'" optimization method.). In fact, to attain the same order of regret bound, the solution to the acquisition maximization problem need not be exact. Instead, it only needs to maximize the acquisition function within C/ √ t accuracy for some constant C at each step. This will translate into an additional C √ T term in the regret bound.
Proof. By the dual variable update rule in Algorithm 1 and the non-expansiveness of projection to [0, ρ], we have Summing over T steps and multiplying both sides by V 2 , we have Hence, which completes the proof. Now, we are ready to present the proof of Theorem 3.3.
Proof of Theorem 3.3. Under Slater condition in Assumption 1, we have the boundedness of the optimal dual solution by standard convex optimization analysis (cf. (Beck, 2017, Theorem 8.42 where the last inequality holds by the boundedness of f (x). Note that the reason why we can use convex analysis is that E π [h(x)] for any fixed h is a linear function with respect to π (and is thus convex). Now, we turn to establish a bound over R(T ) + φ T t=1 g(x t ). First, note that We can further bound (10) by using Lemma 7.1. In particular, we have where (a) holds since φ t ≥ 0 and E π * [g(x)] ≤ 0; (b) holds by adding and subtracting terms; (c) follow from Lemma 7.1 to bound the last term; (d) holds by the fact φ 1 = 0, the boundedness ofḡ t and the definitions of T 1 and T 2 , i.e., Plugging (11) into (10), yields for any φ ∈ [0, ρ], First, assume that we already have a bound on T 1 + T 2 , i.e., T 1 + T 2 ≤ χ(T, φ) with high probability, and χ(T, φ) is an increasing function in φ. This directly leads to the following inequality (with V = G √ T /ρ) for any φ ∈ [0, ρ]: Based on this key inequality, we can analyze both regret and constraint violation. Regret. We can simply choose φ = 0 in (15), and obtain that with high probability Constraint violation. To obtain the bound on V(T ), inspired by (Efroni et al., 2020), we will resort to tools from constrained convex optimization. First, we have 1 g(x)] for some probability measure π ′ by the convexity of probability measure. As a result, we have where [a] + := max{0, a}, and the first equality holds by choosing φ = ρ if T t=1 g(x t ) ≥ 0, and otherwise φ = 0, and the second inequality holds by upper bounding RHS of (6) with φ = ρ since (15) holds for all φ ∈ [0, ρ] and χ(T, φ) is increasing in φ.
Then, we will apply the following useful lemma, which is adapted from Theorem 3.60 in (Beck, 2017).
Thus, since (17) satisfies (18) and E π [h(x)] for any fixed h is a linear function with respect to π, by Lemma 7.2, we have We are only left to bound T 1 +T 2 by χ(T, φ). To this end, we will resort to standard concentration results for GP bandits. First, by (Chowdhury and Gopalan, 2017, Theorem 2), we have the following lemma.
Lemma 7.3. Fix α ∈ (0, 1], with probability at least 1 − α, the followings hold simultaneously for all t ∈ [T ] and all x ∈ X Thus, based on this lemma and the definition of GP-UCB exploration, we have with high probability, f t (x) ≥ f (x) and g t (x) ≤ g(x) for all t ∈ [T ] and x ∈ X . This directly implies thatf t (x) ≥ f (x) andḡ t (x) ≤ g(x) for all t ∈ [T ] and x ∈ X (i.e., optimistic estimates), which holds by |f (x)| ≤ B and |g(x)| ≤ G and the way of truncation in Algorithm 1. Now, to bound T 1 in (12), we have where (a) holds by the fact that estimates are optimistic, i.e.,f t (x) ≥ f (x) andḡ t (x) ≤ g(x) for all t ∈ [T ] and x ∈ X ; (b) holds by the greedy selection of Algorithm 1. Now, we turn to bound T 2 . In particular, we have where (a) holds by Lemma 7.3 and the definition of GP-UCB exploration, i.e., f t (x) = µ t−1 (x) + β t σ t−1 (x) and g t (x) = µ t−1 (x) − β t σ t−1 (x). Note that truncation also does not affect this step; (b) holds by Cauchy-Schwartz inequality and the bound of sum of predictive variance (cf. (Chowdhury and Gopalan, 2017, Lemma 4)). Note that we have also used the fact that β t and β t is increasing in t.
8 Proof of Theorem 3.6 Before we present the proof, we introduce a new notation to make the presentation easier. In particular, we let h(π) := E π [h(x)] for any function h and π t is a dirac delta function at the point x t .
Proof of Theorem 3.6. As shown in the proof of Theorem 3.3, all we need to do is to find a high probability bound over T 1 + T 2 under the sufficient condition in Assumption 3. Under our newly introduced notation, we have where z φt (·) := f (·) − φ t g(·) and z φt (·) :=f t (·) − φ tḡt (·), and similar definitions for z φ and z φ . Let ∆ φt (π) := z φt (π * ) − z φt (π) = (f (π * ) − φ t g(π * )) − (f (π) − φ t g(π)). Then, we define the 'undersampled' set as g,t ) (similarly α φ (π) := c f,t σ t−1 (π) + φc g,t σ t−1 (π)). Let u t = argmin π∈St α φt (π). Thus, conditioned on E est and E conc t , we have where (a) holds since under event holds by the greedy selection in Algorithm 1; (c) follows from u t ∈S t . Thus, conditioned on E est , we have where (a) holds by definition of p 2,t , the fact that φ, φ t ≤ ρ and the boundedness of functions; (b) follows from Eq. (22) and the fact that given F t−1 , α φt (u t ) is deterministic; (c) holds by the following argument: E t [α φt (π t )] ≥ E t α φt (π t )|π t ∈S t P t π t ∈S t ≥ α φt (u t )P t π t ∈S t , which holds by the definition of u t and the fact that α φt (u t ) and S t are both F t−1 -measurable; (d) holds by definition α ρ (π t ) := c f,t σ t−1 (π t ) + ρc g,t σ t−1 (π t ) and the fact that both φ, φ t are bounded by ρ. Hence, the key is to find a lower bound on the probability P t π t ∈S t . In particular, conditioned on E est , we have where (a) holds by the greedy selection in Algorithm 1 and π * ∈S t since ∆ φt (π * ) = 0. Note that S t is the complement of the 'undersampled' setS t ; (b) holds given E est ∩ E conc t , for all π j ∈ S t z φt (π j ) ≤ z φt (π j ) + α φt (π j ) ≤ z φt (π j ) + ∆ φt (π j ) = z φt (π * ); (c) holds since |g(x)| ≤ G for all x and |f ( Putting everything together, we have now arrived at that conditioned on E est , where the last inequality follows from the boundedness condition in the sufficient condition. In order to obtain a high probability bound, inspired by (Chowdhury and Gopalan, 2017), we will resort to martingale techniques. Let us define the following terms Definition 8.1. Define Y 0 = 0, and for all t = 1, . . . , T , where I{·} is the indicator function. Now, we can show that {Y t } t is a super-martingale with respect to filtration F t . To this end, we need to show that for any t and any possible p 4 E t [α ρ (x t )] + (4B + 4ρG)p 2,t . For F t−1 such that E est holds, we already obtained the required inequality as in Eq. (23). For primal-dual optimization. Armed with our developed sufficient condition, this framework not only allows us to design provably efficient (i.e., sublinear reward regret and sublinear total constraint violation) CKB algorithms with both UCB and TS explorations, but presents a unified method to design new effective ones. By introducing slackness, our algorithm can also attain a bounded or even zero constraint violation while still achieving a sublinear regret. We further perform simulations on both synthetic data and real-world data that corroborate our theoretical results. Along the way, we also present the first detailed discussion on two existing methods for analyzing constrained bandits and MDPs by highlighting interesting insights. One interesting future work is to generalize our results to kernelized MDPs (Yang et al., 2020).
A Flexible Implementations of RandGP-UCB
In this section, we will give more insights on the choices of D, i.e., sampling distribution for Z t . In particular, we consider the unconstrained case for useful insights with black-box function being f . By the definition of RandGP-UCB, for each t, the estimate under RandGP-UCB is given by where Z t ∼ D. First, by Lemma 7.3, we have with high probability f (x) ≤ µ t−1 + β t σ t−1 (x), which directly implies that in order to guarantee E anti t happens with a positive probability, one needs to make sure that P(Z t ≥ β t ) ≥ p 3 > 0. Thus, one simple choice of D is a uniform discrete distribution between [0, 2β t ] with N points. Then, it can be easily checked that P t E anti t ≥ p 3 > 0 and also P t (E conc t ) = 1 with c (2) f,t = 2β t . In addition to uniform discrete distribution, one can also use discrete Gaussian distribution within a range [L, U ] as long as U , L are properly chosen. Of course, there are many other choices as long as the insight shown above is satisfied, and hence RandGP-UCB provides a lot of flexibility in the algorithm design.
B Details on Heavy-Tailed Real-World Data
This dataset is the adjusted closing price of 29 stocks from January 4th, 2016 to April 10th 2019. We use it in the context of identifying the most profitable stock in a given pool of stocks. As verified in Chowdhury and Gopalan (2019), the rewards follows from heavy-tailed distribution. We take the empirical mean of stock prices as our objective function f and empirical covariance of the normalized stock prices as our kernel function k. The noise is estimated by taking the difference between the raw prices and its empirical mean (i.e., f ), with R set as the maximum. The constraint is given by g(·) = −f (·) + h with h = 100 (i.e., h ≈ B/2). We perform 50 trials (each with T = 10, 000) and plot the mean along with the error bars.
C Proof of Lemma 5.1
Proof. Note that by the update rule of the virtual queue in Algorithm 2 and non-expansiveness of projection, we have ∆(t) ≤ Q(t)(ḡ t (x t ) + ε) + 1 2 (ḡ t (x t ) + ε) 2 . Now we will bound the RHS as follows.
where (a) holds by the boundedness ofḡ t ; (b) holds by the greedy selection in Algorithm 2. Reorganizing the term, yields the required result.
D Subtlety in Applying Hajek Lemma to Constraint Violation
As stated before, the key step behind the constraint violation is to establish a negative drift of the virtual queue and then by Hajek lemma, one can show that the virtual queue is bounded in expectation, which in turn can be used to establish a zero constraint violation with a proper choice of slackness variable (i.e., ε) in the virtual queue update. However, the negative drift condition in the standard Hajek lemma (cf. Lemma 11 in Liu et al. (2021a)) requires a conditional expectation, i.e., condition on all large enough Q, the expected drift is negative. Then, if one directly applies the standard Hajek lemma, she would proceed as follows. The goal is to show that E [∆(t) | Q(t) = Q] ≤ −cQ for all large Q and c is some positive constant. Recall the bound on ∆(t) in (8), by the boundedness and let π = π 0 , the key is to show that To illustrate the idea, we simply suppose that the Slater's condition is satisfied at a single point x 0 and ε = 0. To show the above inequality, she may choose the following direction.
For Term (ii), it is easily bounded by Term (ii) ≤ −δ via Slater's condition since g(·) is a fixed function. To bound Term (i), she may resort to the standard self-normalized inequality for linear bandits and the definition of UCB exploration (cf. Abbasi-Yadkori et al. (2011)). By these standard results, she can show that for any fixed α ∈ (0, 1], the following holds: P{∀x, ∀t,ḡ t (x) ≤ g(x)} ≥ 1 − α.
That is,ḡ t is optimistic with respect to g. Then, by setting α = 1/T and using the boundedness assumption of bothḡ t and g, she may conclude that Term (i) = O(1/T ). Unfortunately, the bound on Term (i) is ungrounded since it is obtained by treating the conditional expectation in Term (i) as an unconditional expectation. The subtlety here is that one cannot remove the condition on Q(t) in Term (i), sinceḡ t is not independent of Q(t) as both of them depend on the randomness before time t. Given a particular Q(t), it roughly means that we are taking expectation conditioned on a particular history (i.e., a sample-path). Under this particular history, (26) does not necessarily hold, and moreover, the concentration ofḡ t given Q(t) is hard to compute in this case. As a result, the conditional expectation for Term (i) is hard to compute in general.
One correct way. Instead of applying the standard expected version of Hajek lemma, one can consider removing the expectation in Hajek lemma by directly showing thatḡ t (x 0 ) ≤ −c almost surely under the "good event". This is exactly the approach used in (Liu et al., 2021b) (cf. Lemma 5.6). In this way, one can show that with a high probability (i.e., under good event), a negative drift exists and hence the constraint violation bound with high probability. | 2022-03-30T01:15:48.181Z | 2022-03-29T00:00:00.000 | {
"year": 2022,
"sha1": "76066a3a189b5c705ddd348033eb24b5c9faa2d3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "76066a3a189b5c705ddd348033eb24b5c9faa2d3",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
119150323 | pes2o/s2orc | v3-fos-license | First passage sets of the 2D continuum Gaussian free field
We introduce the first passage set (FPS) of constant level $-a$ of the two-dimensional continuum Gaussian free field (GFF) on finitely connected domains. Informally, it is the set of points in the domain that can be connected to the boundary by a path on which the GFF does not go below $-a$. It is, thus, the two-dimensional analogue of the first hitting time of $-a$ by a one-dimensional Brownian motion. We provide an axiomatic characterization of the FPS, a continuum construction using level lines, and study its properties: it is a fractal set of zero Lebesgue measure and Minkowski dimension 2 that is coupled with the GFF $\Phi$ as a local set $A$ so that $\Phi+a$ restricted to $A$ is a positive measure. One of the highlights of this paper is identifying this measure as a Minkowski content measure in the non-integer gauge $r \mapsto \vert\log(r)\vert^{1/2}r^{2}$, by using Gaussian multiplicative chaos theory.
Introduction
The continuum Gaussian free field (GFF) is a canonical model of a Gaussian field satisfying a spatial Markov property. It first appeared in Euclidean quantum field theory, where it is known as bosonic massless free field [Sim74,Gaw96]. In the probability community the study of the 2D continuum GFF has reflourished in the 2000's due to its various connections to Schramm's SLE processes [SS13,She05,Dub09b,MS16a,MS16b,MS16c,MS17], Liouville quantum gravity measures [DS11,BSS14] and Brownian loop-soups [LJ10,LJ11].
The seminal papers that connected SLE processes and the free field showed that SLE 4 can be seen as a level line of the GFF -more precisely, in [SS09] Schramm and Sheffield showed that the level lines of the discrete GFF converge in law to SLE 4 and in [SS13] they gave a purely continuum definition of the limiting coupling, giving rise to the study of level lines of the continuum GFF.
In [ASW17], it was further shown that level lines give a way to define other geometric subsets of the GFF. More precisely, in [ASW17] the authors introduce two-valued local sets A −a,b . Heuristically, the set A −a,b corresponds to the points in the domain that can be connected to the boundary by some path along which the height of the GFF remains in [−a, b]. The mathematical definition of these sets is based on thinking of the 2D GFF as a generalization of the Brownian motion, and it relies on the strong Markov property of the free field. In the case of the BM the same geometric heuristic defines the set [0, T −a,b ], where T −a,b is the first exit time from the interval [−a, b], and in [ASW17] it is proved that A −a,b satisfies many properties expected from the analogy with these exit times.
In the current article, we introduce a further geometric subset of the GFF: the first passage set (FPS) A −a . Heuristically, it corresponds to the points in the domain that can be connected to the boundary by paths along which the height of the GFF is greater or equal to −a, i.e., it is the analogue of [0, T −a ], where T −a is the one-sided first passage time of a BM. We provide an axiomatic characterization of the continuum FPS, a construction using iterations of level lines as in [ASW17] and study several of its properties.
There are two key aspects which make the FPS interesting to study. First of all, compared to most of the geometric subsets of the GFF studied so far, this set is large in the sense that the restriction of the GFF to this set is not zero. This not only requires new ways of working with this set, but also introduces interesting phenomena -as one of the key results we show that even though the GFF on this set is non-trivial, it is measurable with respect to the underlying set. Even more, we show that in fact this measure can be identified with the Minkowski content measure of the underlying set in the gauge r → | log(r)| 1/2 r 2 . Secondly, in the case of the FPS the geometric definition given above can be made precise in the following sense: in a follow-up article [ALS18c], we show that the first passage sets of the metric graph GFF introduced in [LW16] converge to the continuum FPS. This will, among other things, allow us to identify the FPS with the trace of a clusters of Brownian loops and excursions, and to prove convergence results for the level lines of the GFF. Moreover, in a subsequent work we will use the results of these two papers to prove an excursion decomposition of the 2D continuum GFF [ALS18a].
Finally, let us mention that the FPS have already proved useful in studying Gaussian multiplicative chaos (GMC) measures of the GFF: in [APS17], the authors confirm that a construction of [Aïd15] converges to the GMC measure. This was done using the fact that GFF can be approximated by its FPS of increasing levels. Moreover, in [APS18], and using the same FPS based construction, the authors prove that a "derivative" of subcritical GMC measures coincides with a multiple of the critical measure. This confirmed a conjecture of [DRSV14].
1.1. Overview of results. Let us give a more detailed overview of the results presented in the paper. To do this, first recall the local set coupling of a random set A with the Gaussian free field Φ in a domain D. It is a coupling (Φ, A) that induces a Markovian decomposition of Φ. That is to say Φ can be written as a sum between Φ A and Φ A , where Φ A is a random distribution that is a.s. harmonic on D\A and conditional on (A, Φ A ), Φ A is a zero boundary GFF on D\A. We denote by h A the harmonic function corresponding to Φ A outside of A. The local set condition implies that conditional on A and Φ A , the GFF Φ restricted to D\A is given by the sum of h A and Φ A .
The two valued local sets (TVS) A −a,b for a simply connected domain D, studied in [ASW17], can be then defined as the only thin local sets of the GFF such that h A ∈ {−a, b} . Here, thin means that at Φ A contains no extra mass on A, i.e. for any smooth function f , we have that (h A , f ) = (Φ A , f ).
Our first task is to generalize two-valued sets to multiply-connected domains and to more general boundary conditions u, and to show that the main properties of TVS proved in [ASW17] remain true also in this setup. The generalization to more general boundary conditions requires only slight modifications in the definitions, and slight extensions of the proofs. The case of multiply-connected domains, however, requires both new ideas and technical work. In particular, as technical result of independent interest we prove in Proposition 3.16 that level lines in multiply-connected domains are continuous up to and at hitting a continuation threshold.
We next introduce the first passage set (FPS). For a zero boundary GFF, the FPS of level −a, denoted A −a with a ≥ 0 is then defined as a local set of the Gaussian free field on D satisfying the following properties: • Conditional on A −a , the law of the restriction of the GFF on D to D\A −a is that of a GFF on D\A −a with boundary condition −a, or in other words h A −a = −a. • The GFF on A −a is larger than −a, in the sense that for any positive test function f we have that (Φ A −a + a, f ) ≥ 0 -that is to say Φ A −a + a is a positive measure. The full definition for general boundary conditions u is given in Definition 4.1. There, we also define the FPS in the other direction: A b will heuristically correspond to the local set A such that h A = b and the GFF on A is smaller than b. As proved in Theorem 4.3 in the setting of more general boundary conditions, the first passage set Figure 1. A simulation of four nested First passage sets. The first passage set A −λ with λ = π/8 is in dark blue. The difference between A −2λ and A λ is in lighter blue, difference between A −2λ and A −3λ in green and yellow depicts the missing part of A −4λ . Image done by B. Werness.
• is unique in the sense that any other local set with the above conditions is a.s. equal to the FPS, and thus, it is a measurable with respect to the GFF it is coupled with; • is monotone in the sense that for all a ≤ a almost surely A −a ⊂ A −a ; • as in the case of the Brownian motion can be constructed as a limit of two-valued local sets, A −a = lim b→∞ A −a,b .
In fact the relation to two-valued sets is even stronger: we will show that the intersection of two FPS A −a and A b is precisely A −a,b in the simply connected case. Proposition 4.14 generalizes this result form multiply connected domains and it is the key to show the uniqueness of TVS.
One can show that A −a has zero Lebesgue measure but, contrary to A −a,b , its Minkowski dimension is 2 and A −a is not a thin local set, i.e., Φ "charges" A −a . Also, quite surprisingly Φ A −a is measurable function of just the set A −a itself. Even more, this measure is in fact equal to one halves times its Minkowski content measure in the gauge r → | log(r)| 1/2 r 2 (Proposition 5.1). Notice that in fact both of these statements are non-trivial! Our proof uses the recent construction of Liouville quantum gravity measures via the local sets [APS17], the fact that GFF is a measurable function of of its Gaussian multiplicative chaos measures [BSS14], and a deterministic argument to link different measures on fractal sets, that could turn out to be useful in a more general setting. As a cute consequence we observe in Corollary 5.3 that the GFF can be seen as a limit of recentered Minkowski content measures of a sequence of growing random sets.
Finally, our techniques allow us also to compute explicitly the laws of several observables. In Propositions 4.8 and 4.11, we compute the extremal distance between FPS or TVS started from a given boundary to the rest of the boundary. This is the continuum analogue of some of the results obtained in the metric graph setting [LW16], where the extremal distance replaces the effective resistance. In an upcoming paper [ALS18b], we further use these techniques to calculate the law of the extremal distance between the CLE 4 loop surrounding zero and the boundary, and the joint laws between different nested loops.
1.2. Outline of the paper. The rest of the article is structured as follows: Section 2 contains the preliminaries: a summary of general potential theory objects, two-dimensional continuum GFF, local sets and basic results about Gaussian multiplicative chaos. The only novel parts are Propositions 2.7, 2.8 that heuristically allow us to parametrize local set processes using their distance to a a part of the boundary.
In Section 3, we extend the theory of two-valued local set to the finitely-connected case. This will require a detailed study of the generalized level lines in multiply-connected domains. After that, in Section 4, we define and characterize the continuum FPS and prove several of its properties. Finally, in Section 5 we show that the measure Φ A −a + a corresponds to a constant times the Minkowski content measure (in a certain gauge) of the underlying set.
Preliminaries
In this section, we describe the underlying objects and their key propertiesju. First, we go over the conformally invariant notion of distance in complex analysis -the extremal length; then we discuss the continuum two-dimensional GFF and its local sets. The only new contribution of this section is Proposition 2.7.
We denote by D ⊆ C an open planar bounded domain with a non-empty and non-polar boundary. By conformal invariance, we can always assume that D is a subset of the unit disk D. The most general case that we work with are domains D such that the complement of D has at most finitely many connected component and no complement being a singleton. Recall that by the Riemann mapping for multiply-connected domains [Koe22], such domains D are known to be conformally equivalent to a circle domain (i.e. to D\K, where K is a finite union of closed disjoint disks, disjoint also from ∂D).
2.1. Extremal distance. In multiply-connected domains, the natural way to measure distances between the components is the extremal length (it is a particular case of extremal distance) and its reciprocal conformal modulus. Both of the quantities are conformally invariant and extremal distance is the analogue of the effective resistance on electrical networks [Duf62]. We introduce it shortly here and refer to [Ahl10], Section 4 for more details.
If ρ(z)|dz| is a metric on D conformal equivalent to the Euclidean metric, we will denote by the ρ-length of a path γ, and by the ρ-area of D. The extremal distance between B 1 and B 2 is defined as The conformal modulus M(B 1 , B 2 ) is then defined as EL(B 1 , B 2 ) −1 . We state here also a theorem giving an explicit formula for the extremal distance using the Dirichlet energy .
This theorem gives in particular a relation between the extremal distance and the boundary Poisson kernel. To explain this, we define the Green's function G D of the Laplacian (with Dirichlet boundary conditions) in D. It is often useful to write where g D (z, ·) is the bounded harmonic function with boundary values given by (2π) −1 log(|z − x|) for x ∈ ∂D. It can be shown that the Green's function is conformally invariant. Additionally, note that in simply connected domains, g D (z, z) equals the log conformal radius: The Green's function can be used to define the Poisson kernel. In the case of domains with locally analytic boundary, the boundary Poisson kernel is defined as where ∂ nx respectively ∂ ny are the normal derivatives at x respectively y. If D and D are domains with locally analytic boundaries and f is a conformal transformation from D to D , then One can see the boundary Poisson kernel as a measure on ∂D × ∂D rather than a function, by setting H D (dx, dy) = H D (x, y)dxdy, where on the right-hand side dx and dy denote the length measure on ∂D. This measure is conformal invariant by (2.2). Also note that it has infinite total mass due to diagonal divergence. For a domain D with general boundary, we can define the boundary Poisson kernel H D (dx, dy) as the push-forward measure of the boundary Poisson kernel on a domain D with locally analytic boundary, under a conformal transformation taking D → D. This is true even in the case ∂D has locally infinite length, e.g. in the case where the boundary "looks like" an SLE 4 curve.
Finally, notice that from the definition using the Green's function and Theorem 2.1 we see that the extremal length introduced above can be expressed using the boundary Poisson kernel. Indeed, let B be a union of finitely many boundary components. Then 2.2. The continuum GFF. The (zero boundary) Gaussian Free Field (GFF) in a domain D can be viewed as a centered Gaussian process Φ indexed by the set of continuous functions with compact support in D, with covariance given by the Green's function: In this paper Φ always denotes the zero boundary GFF. We also consider GFF-s with non-zero Dirichlet boundary conditions -they are given by Φ + u where u is some bounded harmonic function that is piecewise constant 1 boundary dataon ∂D.
Because the covariance kernel of the GFF blows up on the diagonal, it is impossible to view Φ as a random function. However, it can be shown that the GFF has a version that lives in the Sobolev space H −1 (D) of generalized functions, justifying the notation (Φ, f ) for Φ acting on functions f (see for example [Dub09b]). In fact, one can explicitly calculate that if Φ is a GFF in D ⊆ D then: Moreover, let us remark that it is in fact possible and useful to define the random variable (Φ, µ) for any fixed Borel measure µ, provided the energy µ(dz)µ(dw)G D (z, w) is finite.
Finally, there is an important numerical constant λ, that depends on the normalization of the GFF and is in the current setting (where G D (z, w) ∼ −(2π) −1 log(|z − w|) as w → z) is given by where 2λ is the height gap of the GFF [SS09,SS13]. Sometimes, other normalizations are used in the literature: if G D (z, w) ∼ c log(1/|z − w|) as z → w, then λ should be taken to be (π/2) × √ c.
2.3. Local sets: definitions and basic properties. Let us now introduce more thoroughly the local sets of the GFF. We only discuss items that are directly used in the current paper. For a more general discussion of local sets and thin local sets (not necessarily of bounded type), we refer to [SS13,Wer16,Sep17]. Even though, it is not possible to make sense of (Φ, f ) when f = 1 A is the indicator function of an arbitrary random set A, local sets form a class of random sets where this is (in a sense) possible.
A is a random closed subset of D and Φ A a random distribution that can be viewed as a harmonic function when restricted to D\A. We say that A is a local set for Φ if conditionally on Throughout this paper, we use the notation h A : D → R for the function that is equal to Φ A on D\A and 0 on A.
Let us list a few properties of local sets (see for instance [SS13,Aru15,AS18a] for derivations and further properties): (1) Any local set can be coupled in a unique way with a given GFF: satisfy the conditions of this Definition 2.2. Then, a.s. Φ A = Φ A . Thus, being a local set is a property of the coupling (Φ, A), as Φ A is a measurable function of (Φ, A).
(2) If A and B are local sets coupled with the same GFF Φ, and (A, Φ A ) and (B, Φ B ) are conditionally independent given Φ, then A∪B is also a local set coupled with Φ. Additionally, is a local set coupling, the sets A n are increasing in n and there exists k ∈ N such that the cardinal of connected components of A n ∪ ∂D is bounded by k. Then, A n is also a local set and Φ An → Φ An in probability as n → ∞. (4) Let (Φ, A n ) be such that or all n ∈ N, (Φ, A n ) is a local set coupling and the sets A n are decreasing in n. Then, A n is also a local set and Φ An → Φ n An a.s. as n → ∞. The property (3) follows from the fact that under the conditions on A n Beurling estimate (Theorem 3.76 of [Law08]) ensures that G D\An → G D\A as n → ∞. Property (4) just follows from inverse martingale theorem.
Often one is interested in a growing sequence of local sets, which we call local set processes.
Definition 2.4 (Local set process). We say that a coupling (Φ, (η t ) t≥0 ) is a local set process if Φ is a GFF in D, η 0 ⊆ ∂D, and η t is an increasing continuous family of local sets such that for all stopping time τ of the filtration F t := σ(η s : s ≤ t), (Φ, η t ) is a local set.
Let us note that in our definition η t is actually a random set. In the rest of the paper, we are mostly interested in local set processes that are equal to the trace of a continuous curve. In those cases, we are going to denote by η(t) the tip of the curve at time t. In other words, in our notation Local processes can be naturally parametrized from the viewpoint of any interior point z: the expected height h ηt (z) then becomes a Brownian motion. More precisely, we have that: Proposition 2.5 (Proposition 6.5 of [MS16a]). For any z ∈ D if (η t ) t≥0 is parametrized such that (G D − G D\ηt )(z, z) = t, then (h ηt (z)) t≥0 has the law of a Brownian motion.
Remark 2.6. Notice that whereas G D diverges on the diagonal, the difference of Green's functions can be given a canonical sense, using (2.1). In fact when D, and D\η t are simply connected domains, it is a difference of logarithms of conformal radii: In fact, one can also parametrize local set processes η t using their distance to the boundary. As the boundary values of the GFF away from η t do not change, it is natural to look at normal derivatives. In order to obtain a conformally invariant quantity, notice that if B ⊆ ∂D and h is a harmonic function, then by Green's identities the quantity B ∂ n h can be given a conformally invariant meaning: B ∂ n h = D ∇h∇ū whereū is the harmonic extension of the function that takes the value 1 on B and 0 on ∂D\B.
We will first consider the case, where the local set process is parametrized by its extremal distance to a whole boundary component: has the law of a Brownian bridge from 0 to 0 with length EL(B, ∂D\B). Moreover, the same holds (with the appropriate definitions) for any D finitely connected domain with all boundary components larger than a point.
Proof. As the conformal modulus is the reciprocal of the extremal length, a simple calculation shows that it is enough to prove the first claim.
Using the conformal invariance both of the quantity W t , the Gaussian free field and the extremal length, it suffices to work in a circle domain and consider the case where B is equal to the union of circles. In fact, for simplicity, we will only prove the case where B is equal to a circle of radius 1. We can moreover assume that the circles of radius 1 + ε, denoted by B ε are contained in D for all ε > 0 small enough. In this case ε −1 Bε h ηt converges a.s. to ∂ n h ηt . Now write h ηt = Φ − Φ ηt , and note that for any ε > 0 we have that: Note that when ε → 0, both terms individually diverge. However, their difference converges to Moreover, this convergence is uniform in all sets η t that are at extremal distance of at least δ to B for some δ > 0, Now, by Theorem 2.1 and the fact that ∂ nx G D (x, y)dx is the Poisson kernel we have that (2.4) is equal to M(B, (∂D ∪ η t )\B) − M(B, ∂D\B). Further, note that a.s. ε −1 B h ηt (z) → B ∂ n h ηt (z)dz, and that for all λ ∈ R, by first conditioning on η t , We conclude by bounding σ 2 ε − σ 2 ε,t by M(B, ∂D\B) − M(B, (∂D ∪ η t )\B) ± δ and taking expected value. Now, let us see how to extend this proposition to the case where the local set process is parametrized by its "distance" to a part of the boundary. One of the obstacles here is that when the growing set that we want to parametrize is of 0 Euclidean distance from this part of the boundary, then the naive conformal modulus between the set and this boundary part diverges. However, similarly to the reduced extremal distance (see Chapter 4.14 of [Ahl66]), the difference between the moduli is still non-trivial.
Let us explain this a bit more precisely: assume that ∂D can be partitioned as B 1 ∪ B 2 , where B 2 is a connected subset of ∂D (that is not necessarily a whole boundary component). Let B ⊆ D a closed set that remains at positive distance from B 2 and that intersects B 1 . Define as u 0 to be the bounded harmonic function that takes value 0 on B 2 and 1 on B 1 , and u B to be the bounded harmonic function taking again value 0 on B 2 but is equal to 1 on B 1 ∪ B. Then u 0 − u B is a bounded harmonic function that takes values 0 in B 1 ∪ B 2 and u 0 − 1 in B. In particular, even when the conformal modulus between B 1 and B 2 is 0. Thus, we can define M (B 2 , B 1 ) − M (B 2 , B 1 ∪ B) as B 2 ∂ n (u 0 − u B ) even when both terms individually are infinite.
Using this observation, it is possible to use a proof similar to that of Proposition 2.7 to parametrize local sets using only a part of a boundary component: Proposition 2.8. Let D be a finitely connected circle domain, (Φ, η t ) a local set process with Φ a GFF in D. Now, let us partition ∂D in two sets: B 1 and B 2 a connected subset of ∂D\B 1 of positive length and suppose that η 0 ⊆ B 1 .
Then, if η t is parametrized by the difference of conformal moduli (as the individual conformal moduli may not exist) andū t is defined as the harmonic function taking the value 1 on B 2 and the value 0 on B 1 ∪ η[0, t]. Then, the process W t = B 2 ∂h η[0,t] has the law of a Brownian motion.
Moreover, the same holds (with the appropriate definitions) for any finitely connected domain with all boundary components larger than a point.
2.4. Gaussian multiplicative chaos. Finally, let us summarize the definition and some properties of the Gaussian multiplicative chaos (GMC) associated to the GFF. The GMC measures were first introduced in the realm of self-interacting Euclidean field theories [HK71], and named by Kahane in his seminal article [Kah85]. We refer to e.g. [Aru17] for more detailed proofs and properties for the GMC measures, and to [RV14] for an overview of the GMC measures and their applications.
To define the GMC measure, one usually passes through an approximation procedure. Denote by Φ ε (z) the circle-average process of the GFF: i.e. the GFF tested against the unit measure on the circle of radius ε around z. For ε > 0 and γ ∈ R, we then set The √ 2π factor comes from the fact that in the GMC literature the GFF is normalized differently (i.e. usually with covariance of that behaves like − log |z − w| near the diagonal).
For us an important property is that in the subcritical regime the GMC measures depend analytically on γ. In fact, the follwing theorem (which can be, for example, found in Section 1 of [Aru17]) suffices our needs: Proposition 2.9. For any γ ∈ (− √ 2, √ 2) and any continuous compactly supported function f on D, there exists a modification of ((µ ε γ , f )) |γ|< √ 2 , and a deterministic sequence ε k → 0 such that a.s. and in L 2 , ((µ ε k γ , f )) |γ|< √ 2 converges in the space where all the limits are a.s. and in L 2 .
Two-valued local sets
Next, we discuss a specific type of local sets introduced in [ASW17]: two-valued local sets. In [ASW17], these sets were defined and studied in the case of the zero boundary GFF; we will extend this definition to general boundary conditions and to n-connected domains. We will also calculate the size of the set seen from interior points and from boundary components that do not intersect the set.
First, it is convenient to review a larger setting, that of bounded type local sets (BTLS) introduced in [ASW17]. These sets are thin local set A, for which its associated harmonic function h A remains bounded. Here, by a thin local set (see [Wer16,Sep17]) we mean the following condition: • For any smooth test function f ∈ C ∞ 0 , the random variable (Φ, f ) is almost surely equal to This definition assumes that h A belongs to L 1 (D\A) which is the case in our paper. For the general definition see [Sep17]. In order to say that the union of two thin sets is thin, it is more convenient to use a stronger condition. Indeed, it is not hard to show that (see Proposition 4.3 of [Sep17] for a proof): • If h A is L 1 (D\A) and for any compact set K ⊆ D, the Minkowski dimension of A ∩ K is strictly smaller than 2 then A is thin. Now, we can define the bounded type local sets.
Definition 3.1 (BTLS). Consider a closed subset A of D and Φ a GFF in D defined on the same probability space. Let K > 0, we say that A is a K-BTLS for Φ if the following four conditions are satisfied: (1) A is a thin local set of Φ.
(3) Almost surely, each connected component of A that does not intersect ∂D has a neighborhood that does intersect no other connected component of A. If A is a K-BTLS for some K, we say that it is a BTLS.
Generalized level lines.
One of the simplest family of BTLS are the generalized level lines, first described in [SS13], that correspond to SLE 4 (ρ) processes.
Let D := H\ n k=1 C k , where (C k ) 1≤k≤n is a finite family of disjoint closed disks, be a circle domain in the upper half plane. Further, let u be a harmonic function in D. We say that η(·), a curve parametrized by half plane capacity, is the generalized level line for the GFF Φ + u in D up to a stopping time τ if for all t ≥ 0: ( * * ): The set η t := η[0, t ∧ τ ] is a BTLS of the GFF Φ, with harmonic function satisfying the following properties: h ηt + u is a harmonic function in D\η t with boundary values −λ on the left-hand side of η t , +λ on the right side of η t , and with the same boundary values as u on ∂D. The first example of level lines comes from [SS13]: let u 0 be the unique bounded harmonic function in H with boundary condition −λ in R− and λ in R + . Then it is shown in [SS13] that there exists a unique η satisfying ( * * ) for τ = ∞, and its law is that of an SLE 4 .
Several subsequent papers [SS13, MS16a, WW16, PW17b] have studied more general boundary data in simply-connected case and also level lines in a non-simply connected setting [ASW17]. The following lemma is a slight variant of the latter, stating existence of level lines until it either accumulate at another component, or hit the continuation threshold 2 on R. It is a consequence of Theorem 1.1.3 of [WW16] and Lemma 15 of [ASW17].
Lemma 3.2 (Existence of generalized level line targeted at ∞). Let u be a bounded harmonic function with piecewise constant boundary data such that u(0 − ) < λ and u(0 + ) > −λ. Then, there exists a unique law on random simple curves (η(t), t ≥ 0) coupled with the GFF such that ( * * ) holds for the function u and possibly infinite stopping time τ that is defined as the first time when η hits or accumulates at a point x ∈ ∂D\R or hits a point x ∈ R such that x ≥ 0 and u(x + ) ≤ −λ or x ≤ 0 and u(x − ) ≥ λ. We call η the generalized level line for the GFF Φ + u and it is measurable Remark 3.3. In Section 3.3 we will be able to show that η is continuous up to τ even if η(τ ) / ∈ R.
In simply-connected domains Theorem 1.1.3 of [WW16] and Lemma 15 of [Dub09a] also give us precise information on the subset of the boundary where the level line can hit: Proposition 3.4 (Hitting of level lines in simply-connected domains). Let D = H and let u be a bounded harmonic function with piecewise constant boundary data such that u(0 − ) < λ and u(0 + ) > −λ. Let η be a generalized level line in D starting from 0. If either u ≥ λ or u ≤ −λ on some open interval J ⊂ ∂D, then η t stays at a positive distance of any x ∈ J. Moreover, if u ≤ −λ on a neighborhood in R to the left of x, or u ≥ λ on a neighborhood in R to the right of x, then almost surely η t stays at a positive distance of x.
Remark 3.5. Observe that the boundary points described by this lemma correspond exactly to points, from where one cannot start a generalized level line of −Φ − u.
A simple, but important corollary of this result allows us to check whether a level line can enter a connected component of the complement of a bounded-type local set. This observation was key in [ASW17], where it was used for simply-connected domains and it followed just from the facts that 1) a generalized level line does not hit itself; 2) it has to exit such a component in finite time. We will prove the generalization of this lemma to finitely-connected setting below in Lemma 3.17; the same proof could be used in the simply-connected setting. In Section 3.3, we extend all these results to finitely-connected domains, in particular, we extend the definition of generalized level lines by showing that they remain continuous until the stopping time τ . That is to say, that level lines remain continuous up to its accumulation point, even if it is on other boundary component. To do this, we will however first have to gain a better understanding of certain type of BTLS in simply-connected domains, called two-valued local sets.
3.2. Two-valued local sets in simply connected domains. Another family of useful BTLS is that of two-valued local sets. In [ASW17], two-valued local sets of the zero boundary GFF were introduced in the simply connected case, which we assume to be D for convenience. Two-valued local sets are thin local sets A such that the harmonic function h A takes precisely two values. More precisely, take a, b > 0, and consider thin local sets It is somewhat more convenient to assume that the two-valued local sets and first passage sets introduced later also by convention contain the boundary.
[ASW17] dealt with the construction, measurability, uniqueness and monotonicity of two-valued local sets in the case of the zero boundary GFF in simply connected domains. Here we state a slight generalization of this main theorem for more general boundary values.
In this respect, let u be a bounded harmonic function with piecewise constant boundary values. Take a, b > 0 and define u −a,b to be the part of the boundary where the values of u are outside of [−a, b]. As long as u −a,b is empty, the harmonic function h A still takes only two values −a and b. Otherwise, we also allow for components where some of the boundary data for h A (corresponding to u −a,b ) is not equal to −a or b. More precisely, the complement of the two-valued set A u −a,b has two types of components O: (1) Those where ∂O ∩ ∂D is a totally disconnected set. In these components h A u −a,b + u takes the constant value: −a or b.
(2) Those where ∂O ∩ ∂D ⊂ u −a,b . In these components h A u −a,b + u takes boundary values u on the part ∂O ∩ ∂u −a,b and has either constant boundary value −a or b on the rest of ∂O, in such a way that h A u −a,b + u is a bounded harmonic function that is either greater or equal to b or smaller or equal to −a throughout the whole component. The next proposition basically says that all the properties of the zero-boundary case generalize to the general boundary: Proposition 3.7. Consider a bounded harmonic function u as above. If |a + b| ≥ 2λ and [min(u), max(u)] ∩ (−a, b) = ∅, then it is possible to construct A u −a,b = ∅ coupled with a GFF Φ . Moreover, the sets A u −a,b are • unique in the sense that if A is another BTLS coupled with the same Φ, such that a.s. it satisfies the conditions above, then A = A u −a,b almost surely; • measurable functions of the GFF Φ that they are coupled with; The proof is an extension of the proof of Proposition 2 and the arguments in Sections 6.1 and 6.2 of [ASW17]: Proof. Construction: We know from [ASW17] that the condition |a + b| ≥ 2λ is necessary. Also, if [min(u), max(u)] ∩ (−a, b) = ∅ does not hold, then the empty set satisfies our conditions. Thus, suppose that |a + b| ≥ 2λ and [min(u), max(u)] ∩ (−a, b) = ∅. Notice that as soon as we have constructed the basic sets with |b + a| = 2λ, the rest of the proof follows exactly as in the construction in Section 6.2 of [ASW17] -one just iterates inside the components. Moreover, in this basic case, one can only concentrate on A u −λ,λ as for any other (a, b) with b = −a + 2λ it is enough to construct A u+a−λ −λ,λ .
Let us now build A u −λ,λ . To do this, partition the boundary ∂D = n k=1 B k such that each B k is a finite segment, throughout each B k the function u is either larger or equal to λ, smaller or equal to −λ, or is contained in (−λ, λ), and n is as small as possible. Call n the boundary partition size. Notice that n is finite by our assumption. We will now show the existence by induction on n.
In fact, the heart of the proof is the case n = 2, so we will start from this. If u is, say, larger than λ on B 1 and smaller than −λ on B 2 , then by Lemma 3.2 we can draw a generalized level line from one point in ∂B 1 to the other one, by Proposition 3.4 it almost surely finishes at the other point of ∂B 1 and decomposes the domain into components satisfying (2).
So suppose u is larger than λ on B 1 but in (−λ, λ) on B 2 . Then, we can similarly start a generalized level line from one point in ∂B 1 targeted to the other one. Again, we know that it finishes there almost surely. It will decompose the domain into one piece that satisfies the condition (2) and possibly infinitely many simply connected pieces that have a boundary partition size equal to 2. We can iterate the level line in each of these components. Now for any z ∈ D∩Q 2 , denoting the local set process arising from the construction and continuing always in the connected component containing z by A z t , we have (say from Proposition 2.5) that h At (z) is a martingale. We claim that from this it follows that any z is in a component satisfying (1) or (2) above after drawing a finite number of generalized level lines. Indeed, fix some z ∈ D ∩ Q 2 ; then any level line iterated in a component containing z that stays on the same side of z than the previous level line will have a larger harmonic measure than the previous one; as the sign of the level line facing z changes, we see that h At (z) changes by a bounded amount. This can happen only a finite number of times and thus the claim. Hence we have shown the construction in the case n = 2. Now, if n = 1, then the only possible case is that u takes values in (−λ, λ). In this case the generalized level lines can be started and ended at all points of the boundary. In choosing any two different points on the boundary and drawing a level line, we will decompose D into simplyconnected components such that their boundary partition size equal to 2 (See Figure 3). For n ≥ 3, we must have at least two B k , say B 1 and B 2 (not necessarily adjacent) such that |u| ≥ λ on them. We then start our generalized level line from a possible starting point in ∂B 1 towards a possible target point in ∂B 2 . By Lemmas 3.2 and Proposition 3.4 it stops at either at its target point or at a point between two B i , B j such that on both |u| ≥ λ. One can verify that in each of these cases, in each component cut out the boundary partition size is strictly smaller than n.
Let us now make the following remarks: Proof. Uniqueness, measurability and monotonicity: follow exactly as in the zero-boundary case, i.e. one follows the proof of the Proposition 2 in [ASW17]. The only small difference is in the uniqueness part: The first step is to show that A u −a,b ⊂ A . To do this one uses the construction and Lemma 3.3 as in the zero boundary case. Now, to show the opposite inclusion, notice that by conditions (1) and (2) we have that in any connected component O of D\A u −a,b the boundary values of h A u −a,b + u are either larger or equal to b or smaller or equal to −a. In particular, we can just use Lemma 9 of [ASW17], where instead of k we use u + a/2 − b/2 (the proof is exactly the same in this case).
Remark 3.8. In fact in the monotonicity statement, one could also include the changes in the harmonic function: if u, u are two bounded harmonic function with piecewise constant boundary data, take The proof just follows from the construction and Lemma 3.3. In [AS18b], the authors studied further properties of the TVS of the GFF in a simply connected domain with constant boundary condition. Let us mention two results that are important for us in analyzing generalized level lines in finitely-connected domains. First, it was proved in [AS18b] that A u −λ,λ is a.s. equal to the union of all level lines of Φ + u (see Lemma 3.6 of [AS18b]). The exact same proof works in this context and implies that: Lemma 3.9. Let Z be any countably dense set of points on ∂D. Then A u −λ,λ is equal to the closure of the union of all the level lines of Φ + u going between two different points of Z.
Remark 3.10. Note that the level line of Φ + u going from x to y is equal to the level line of −Φ − u going from y to x (Theorem 1.1.6 of [WW16]).
Second, the TVS A −a,b are locally finite for a, b ≤ 2λ, again for zero boundary GFF (Proposition 3.15 of [AS18b]). A rather direct generalization is as follows: Lemma 3.11. In simply connected domains, A u −λ,λ is locally finite, i.e., a.s. for each ε > 0 there are only finitely many connected components of D\A u −λ,λ with diameter larger than ε. This result will be proved in more generality in Proposition 4.15 (relying on the results of the paper [ALS18c]), but we will sketch a direct argument here too: Proof sketch: First assume that the boundary condition changes twice and that one boundary value corresponds to either −λ or λ, and the other to a constant v ∈ (−λ, λ). Let us argue that in this case A u −λ,λ is locally finite, by reducing it to the case of A v −λ,λ (that has the same law as A −λ−v,λ−v ). Indeed, consider Φ a GFF in D and η a generalized level line of Φ + v joining two different boundary points. Then, Figure 4. To the left the set A u −λ,λ where u is equal to −2λ on an arc, and equal to 0 on the rest of the boundary. To the right the set A described in Lemma 3.12. Note that the boundary values of h A + u on A are λ, −λ, equal to u on the parts of the boundary where u / ∈ (−λ, λ) and can only change its sign at the points drawn in blue, i.e. at the points where A intersects ∂(1 − ε)D.
But by Proposition 3.15 of [AS18b], we know that A −λ−v,λ−v is locally finite, and moreover by following the proof of that proposition, we can deduce that if A u −a,b is locally finite inside some simply-connected set, it is locally finite in all simply-connected sets. Now, the proof follows from two observations: • By uniform continuity, a continuous curve in D parametrized by [0, 1] only separates finitely many components of diameter larger than ε for any ε ≥ 0; • From the construction of TVS with piecewise-constant boundary conditions given above, we see that after drawing a finite number of level lines (that are continuous up to their endpoint by Proposition 3.4) we can construct a local set A ⊂ A u −λ,λ such that the connected components of D\A either already correspond to those of D\A u λ,λ , or where the boundary condition is as above: it changes twice between a part that is ±λ, and a part where it takes a value in (−λ, λ).
These two lemmas allow us to explicitly describe a part of A −λ,λ in an ε− neighborhood of the boundary with a local set: Lemma 3.12. Fix ε > 0 and let Z be any countably dense set of points on ∂D. Define A as closed union of the level lines of Φ + u and −Φ − u starting in D and stopped the first time they reach distance ε from ∂D. Let A be equal to the union of the connected components of A u −λ,λ \((1 − ε)D) that are connected to ∂D. Then almost surely, A is equal to A. Furthermore, A is a local set such that A ∩ ∂((1 − ε)D) is a finite set of points.
If we define O to be the connected component of D\A containing 0, then the boundary values of h A + u are, in absolute value bigger than or equal to λ. Furthermore, they are piece-wise constant, change their value only finitely many times, and change their sign only at points situated on ∂((1 − ε)D).
Proof.
A is a local set as it can be written as the closed union of countable many local sets. Now, by Lemma 3.9, the set A u −λ,λ is equal to the union of all level lines with starting and endpoints in Z. But taking the intersection of this union of level lines with D\(1 − ε)D, and throwing away parts that are not connected to the boundary, gives exactly the union of level lines stopped at distance ε from ∂D.
Thus, to show that A ∩ ∂(1 − ε)D is a finite set of points, it suffices to prove this claim for A, the connected component of A u −λ,λ ∩ (D\((1 − ε)D)) that is connected to ∂D. Notice that this number of intersection points is bounded by twice the number of "excursions" of A u −λ,λ between two boundary points that intersect (1 − ε)D, where by an excursion we mean a connected set of A u −λ,λ ∩ D that intersects ∂D only at its two endpoints. However, by Lemma 3.11 A u −λ,λ is a.s. locally finite. This implies that there are a.s. also only finitely many excursions of A u −λ,λ intersecting (1 − ε)D, as one can associate to each such excursion a unique connected component of D\A u −λ,λ (for example the component that is separated from the point 1 by this excursion).
3.3.
Generalized level lines in finitely connected domains. In this section, we prove that the generalized level lines are continuous up to their stopping time τ , i.e. that any accumulation point described in Lemma 3.2 is in fact a hitting point. WLoG, let us work in the circle domain D = H\ n k=1 C k , where (C k ) 1≤k≤n is a finite family of disjoint closed disks. We first need to extend Proposition 3.4 to finitely-connected domains, showing that we cannot accumulate near the points that cannot be hit by a generalized level line: Moreover, if u ≥ λ on a neighborhood in ∂D clockwise of x, or u ≤ −λ on a neighborhood in ∂D counter-clockwise of x, then almost surely η t stays at a positive distance of x.
Proof. Let us start from the first claim. To do this, consider some compact interval J ⊂ J and parametrize η t using the conformal modulus t = M (J , η t ∪ ∂H) − M (J , ∂H) as in Proposition 2.8. Letū t be the harmonic function that equals to 1 on J and to 0 on η t ∪ ∂H.
WLoG assume that the boundary values on J satisfy u ≥ λ. Suppose for contradiction that η comes arbitrarily close to J . This means by Proposition 2.8 that W t = J ∂ n h ηt is a Brownian motion on [0, ∞). We argue that J ∂ n h ηt is bounded from below. Notice that this would give the desired contradiction, as then W t would be a Brownian motion that remains bounded from below for all times.
Let u be the harmonic function that is equal to λ in J and to u in ∂D\J. Because u ≤ u, the harmonic function h ηt + u has boundary values on η t that are smaller than or equal to λ. Leth t be the bounded harmonic function with boundary values equal to λ in η t ∪ J and to sup |u| + 1 > λ in the rest of the boundary. As the minimum ofh t − (h ηt + u ) is attained in J, we have that J ∂ n (h ηt + u ) ≥ J ∂ nht . To finish the first claim, we note that J ∂ n u < ∞, and ∂ nht is increasing in t.
For the second claim, assume WLoG that u ≥ λ on some neighborhood in ∂D clockwise of x. Let us denote this neighborhood by J L . If u ≥ λ also on a neighborhood counter-clockwise, then we are done by the first claim. Otherwise, let ε > 0 be very small, in particular smaller than the distance between any boundary components. Notice that we can start a generalized level lineη of Φ + u from x stopped atτ ε the first time the level line reaches distance ε from the connected component it started from, or has reached its continuation threshold on the same connected component of ∂D that contains x. Note thatη cannot hit J L before timeτ ε by absolute continuity w.r.t. to the simply-connected case (see Lemma 16 of [ASW17]). On the other hand, it a.s. hits the counterclockwise boundary neighborhood if u ∈ (−λ, λ) on this neighborhood, and a.s. does not hit it if u ≤ −λ.
In both cases, by the first claim of this proposition, we have that η cannot intersect nor accumulate in J L ∪η t before either accumulating or hittingη(τ ), or finishing by accumulating somewhere on the boundary ∂D at positive distance from x. We now argue that η cannot accumulate atη(τ ) without hitting. Indeed, because Lemma 3.2 allows us to further growη for a positive amount s after timeτ , the first claim of this proposition implies again that η cannot accumulate nearη(τ ), before hitting or accumulatingη(τ + s). So it remains to argue that η cannot hitη(τ ). This now follows, as hittingη(τ ) would in particular mean that η is continuous up to this hitting time τ 1 and thus stay in some open neighborhood ofη(τ ) after some time s < τ 1 . Thus, by the absolute continuity of the GFF in this neighborhood, i.e., the same proof as for Lemma 16 of [ASW17], one can show that η cannot hitη(τ ).
The following lemma tells us that when level lines approach the boundary they hit it in only one point and thus they are continuous until that time. This implies that, it is not possible for level lines to accumulate in the boundary.
Proposition 3.14. Let η be a generalized level line of GFF Φ + u in D starting from 0. Let T p be the set of boundary points on ∂D\R that η could potentially hit, i.e. set of points x for which u > −λ in a neighborhood in ∂D counter-clockwise of x and u < λ in some neighbourhood in ∂D clockwise of x. Then a.s. either η hits T p or stays at a positive distance from it.
It is useful to observe that the set T p is precisely the set of points for which one can start a level line of −Φ − u. The following lemma says that when we start simultaneously a generalized level line of Φ + u and a level line of −Φ − u from different boundary components, then these level lines either agree on a continuous curve joining two boundary conditions or stay at a positive distance from each other.
Lemma 3.15. Let η be a level line of Φ + u started at x ∈ R and let η be a level line of −Φ − u started at y ∈ D\R, stopped at times τ, τ respectively, that correspond to the first times they intersect a connected component of ∂D different to their starting point (or when they hit the continuation threshold on their own component). Then, either η ∩η = ∅ or η ∩η is a connected set that intersects R and the connected component of ∂D that contains y. In the latter case, the level lines η and η are continuous until the first time they intersect a connected component of the boundary that does not contain their respective starting point.
Proof. Let us note that for any fixed s ∈ R, η stopped at the first time it hits η s (or at τ ) is a level line of Φ + u + h η s . Thus, by Lemma 3.13 the only point it can hit or accumulate in η s is in its tip: η (s). To argue it cannot accumulate at η (s), we can use the same argument as in the proof of the second claim in Lemma 3.13: we can continue the level line η for another positive time u to see that η(t) cannot accumulate at η (s) before accumulating or hitting at η(s + u). Thus, η can either hit η (s), or stay at a positive distance of η s . Now, let us note that there is no rational s such that that η (s) / ∈ η, but η s ∩ η = ∅. Indeed, if there were such a time s, then η would hit a point in η s different from η (s). It is clear that the same holds when we switch the roles of η and η. Thus, we see that if η, η intersect at some time-points s, s respectively, then η ([s , τ ]) ⊂ η and η([s, τ ]) ⊂ η . But this means in fact that in this case η([σ, τ ]) = η ([σ , τ ]), where σ, σ are respectively the last times before τ, τ where the level lines η, η touch the component of the boundary containing their starting point. Morever, we see that if η and η intersect they hit the same points but in inverse orders, and are thus both continuous up to and including the hitting time of the other boundary.
We can now prove Proposition 3.14.
Proof of Proposition 3.14. Fix ε > 0 very small (say much smaller than the minimal distance between two connected components of C\D). By Lemma 3.12 and the absolute continuity of the GFF (Proposition 13 and Corollary 14 of [ASW17]), for all C k we can construct a local set A k in an ε-neighbourhood of C k such that its boundary values are either ≥ λ or ≤ −λ in some open interval around any boundary point of A k that is of distance smaller than ε to the boundary. By Lemma 3.13 the generalized level line η of Φ + u started at 0 will stay at a positive distance from all these points. Thus, it can only get infinitely close to one of C k by first hitting or accumulating at one of the points of A k that is exactly at distance ε from a boundary component C k . Notice that by Lemma 3.12 there are only finitely many of such points. Moreover, each such point belongs to either a level line of Φ + u or a level line of −Φ − u started from the boundary component C k . By Lemma 3.13, we know that the level line η will stay at a positive distance from the first type of points. Finally, by Lemma 3.15 we know that if it gets infinitely close to one of the other type of points, it will actually agree with this level line until hitting the boundary component containing its starting point.
For further reference, let us resume the existence and continuity of level lines in finitely-connected domains in a proposition -it follows directly by combining Lemmas 3.2 and 3.13 with Proposition 3.14.
Proposition 3.16 (Existence and continuity of generalized level lines). Let u be a bounded harmonic function with piecewise constant boundary data such that u(0 − ) < λ and u(0 + ) > −λ. Then, there exists a unique law on random simple curves (η(t), t ≥ 0) coupled with the GFF such that ( * * ) holds for the function u and possibly infinite stopping time τ that is defined as the first time when η hits a point x ∈ ∂D\R or hits a point x ∈ R such that x ≥ 0 and u(x + ) ≤ −λ or x ≤ 0 and u(x − ) ≥ λ. We call η the generalized level line for the GFF Φ + u, and it is measurable w.r.t Φ.
Moreover, η t is continuous on [0, τ ], and it can only hit points from which we can start a generalized level line of −Φ − u (on ∂D\R these are the points x such that u < λ on some interval of ∂D clockwise of x and u > −λ on some interval of ∂D counterclockwise of x).
As a consequence, we can prove the generalization of Lemma for finitely-connected domains, the only additional input being the continuity up to its stopping time, and the precise description of the hitting points stated in the proposition above. Notice that we allow for the situation where on some boundary components the value is ≥ λ and on some it is ≤ −λ. One of the key ingredients is Lemma 2.3 (2), that reads as follows: if A, B are conditionally independent local sets, then the boundary values on A ∪ B do not change at any point that is of positive distance of the boundary of ∂A ∩ ∂B.
Proof. Define as E z the event where on any connected component of ∂O(z) the boundary values of (h A + u) | O(z) are either everywhere ≥ λ or everywhere ≤ −λ. Suppose for contradiction that on the event E z , η([0, ∞]) ∩ (O(z)) = ∅ with positive probability.
Take ε > 0 and τ := τ (ε) the first time such that η(τ ) ∈ O(z) and is at distance ε > 0 of ∂O(z). Note that under our assumption for small enough ε, the event {τ < ∞} ∩ E z has non-zero probability. One can verify thatη(t) := η(t+τ ) is a generalized level line of Φ A∪ητ +(Φ A∪ητ +u). As the generalized level-line is a simple continuous curve, it stays at a positive distance of ∂O(z) ∩ η τ . Additionally, from Lemma 2.3 (2) it follows that the boundary values of (Φ A∪ητ +u) are ≥ λ or ≤ −λ around any point on ∂O(z) that is at positive distance from η τ . Thus, by Lemma 16 of [ASW17], η cannot exit O(z) through any point that is at positive distance of η τ . But, by Proposition 3.16, we have thatη(∞) ends at a point on ∂D that is different from any of its previously visited points, staying continuous up to and at the moment it hits the boundary, giving a contradiction.
A particularly useful corollary is the following. We will now prove the existence and measurability. To prove the uniqueness, we will first in fact prove the uniqueness of the FPS in the next section; it will be a consequence of Lemma 4.14. Finally, monotonicity follows from uniqueness as in the proof of Proposition 2 of [ASW17]. Until having proved uniqueness, we mean by A u −a,b always the set constructed just below. The proof of existence is in its spirit very similar to the proof of Proposition 3.7, that itself is modeled after Section 6.1 of [ASW17]. However, we do need extra arguments to treat the multiplyconnected setting.
Proof. Construction Again, we can assume that we are in the non-trivial case, in other words that [min(u), max(u)] ∩ (−a, b) = ∅. As in the proof of Proposition 3.7, it suffices to construct A u −λ,λ . This time we need a double induction. Let N be the number of boundary components and as in proof of Proposition 3.7. We take the minimal partition of any boundary component B i as B i = n i k=1 B k i , such that each B k i is a finite segment, throughout each B k i the function u is either larger or equal to λ, smaller or equal to −λ, or contained in (−λ, λ). Recall that we called n i boundary partition size of B i . We will now use induction on pairs (N, max i≤N n i ).
The case (1, n) is given by Proposition 3.7. The key case is (N, 2), so let us prove this by inducting on the number of boundary components N .
On any B i satisfying |u| ≥ λ, draw a generalized level line starting from one point of ∂B 1 i to the other. If it hits some other boundary component, we have reduced the number of boundary components in each of the domains cut out and we can use induction hypothesis. Otherwise, by Proposition 3.16, it ends at the other point of ∂B 1 i and reduces the boundary partition size of this boundary component to 1. Hence we can suppose that the only boundary components with boundary partition size equal to 2 have one part with u ∈ (−λ, λ). Now, pick any such component, say, B 1 and suppose u is larger than λ on B 1 1 . Then, we can start a generalized level line from one points on ∂B 1 1 towards the other one. If the generalized level line hits some other component or cuts the domain into subdomains with strictly less than m boundary components, we can use induction hypothesis. Otherwise, we have finished all components O such that ∂O ∩ u −a,b is non-zero. It now remains to see that all 'inner components' are also finished in finite time. This follows similarly to the proof of Proposition 3.7 by using the fact that h At (z) is a bounded martingale and converges almost surely.
Suppose now (N, n) satisfy N ≥ 2, n ≥ 3. Then we can similarly to the proof of Proposition 3.7 pick a generalized level line on some boundary component with boundary partition size bigger than 2 such that by drawing it we either reduce the boundary partition size to 2 for any subdomain with N ≥ 2, or reduce the number of boundary components in each subdomain. Using a finite number of such lines we have reduced to either (N, 2), or (N , 3) with N smaller than 3.
It remains to treat the case (N, 1), if all components satisfy |u| ≥ λ, we are done. Otherwise, in any component with u ∈ (−λ, λ) we can start a level line from any point for some short amount of time. This will either reduce the setting to (N, 3), (N, 2) or reduce the number of boundary components.
Examining closely the proof the following holds: Proof. Measurability of the sets A u −a,b with respect to the GFF just follows from the measurability of the level lines used in the construction and the measurability result of Proposition 3.7.
First passage sets of the 2D continuum GFF
The aim of this section is to define the first passage sets of the 2D continuum GFF, prove its characterization and properties. We first state an axiomatic definition of the continuum FPS inspired by its heuristic interpretation: i.e. the FPS stopped at value −a is given by all points in D that can be connected to the boundary via a path on which the values of the GFF do not decrease below −a. From this description, it is clear that it induces a Markovian decomposition of the GFF: the field outside of it is just a GFF with boundary condition equal to −a. In other words, the FPS is a local set, that we denote by A −a , and its harmonic function has to satisfy h A −a = −a as we stop at value −a. Finally, the question is how to translate the property for the values, as the GFF is not defined pointwise. The right way is to ask the distribution Φ A −a + a to be a positive measure.
The set-up is again as follows: D is a finitely-connected domain where no component is a single point and u is a bounded harmonic function with piecewise constant boundary conditions. Here is the definition for general boundary values: Definition 4.1. Let a ∈ R and Φ be a GFF in the multiple-connected domain D. We define the first passage set of Φ of level −a and boundary condition u as the local set of Φ such that ∂D ⊆ A u −a , with the following properties: (1) Inside each connected component O of D\A u −a , the harmonic function h A u −a + u is equal to −a on ∂A u −a \∂D and equal to u on ∂D\A u −a in such a way that (3) Additionally, for any connected component O of the complement of A u −a , for any ε > 0 and z ∈ ∂O and for all sufficiently small open ball U z around z, we have that a.s.
Notice that if u ≥ −a, then the conditions (1) and (2) correspond more precisely to the heuristic and are equivalent to (3) is not necessary. This condition roughly says that nothing odd can happen at boundary values that we have not determined: those on the intersection ∂A u −a and ∂D. This condition enters in the case u < −a: in [ALS18c] we want to take the limit of the FPS on metric graphs and it comes out that it is easier not to prescribe the value of the harmonic function at the intersection of ∂D and ∂A u −a . Notice that in contrast we did prescribe the values at intersection points for two-valued sets.
Moreover, in this case the technical condition
Remark 4.2. One could similarly define excursions sets in the other direction, i.e. stopping the sets from above. We denote these sets by A u b . In this case the definition goes the same way except that (2) should now be replaced by Φ Let us also remark that in [APS17] the sets A b are unluckily denoted by A b , and that they can be obtained as A −u −b of −Φ. Theorem 4.3. The FPS A u −a of level −a exists and is unique in the sense that if (Φ, A ) is an FPS of level −a for the GFF with boundary condition u, then A = A u −a a.s. We start from the existence of the FPS. Here, we provide a purely continuum construction using the two-valued sets A −a,b . Another approach would be to consider the scaling limit of the metric graph FPS when the mesh size goes to zero, as is done in [ALS18c].
Proposition 4.4. Denote by A u −a,n the two-valued local sets coupled with the GFF Φ in the domain D. Then for every a ≥ 0 the local set A u −a := ∪ n∈N A u −a,n is an FPS of level −a. Proof. Let A u −a be as in the statement. Then A u −a is the closed union of nested measurable local sets so it is a measurable local set: it is a local set by Lemma 2.3 and measurable as a limit of measurable functions.
We first prove the condition (1) of the Definition 4.1. Take a countable dense set in D, (z i ) i∈N , and note that almost surely for all i ∈ N, z i / ∈ A u −a . Consider n > sup u. It suffices to show that for any z i , there will be some finite n such that the component of the complement of A u −a,n containing z i does not take the value n. Indeed, when this happens, then by the definition and uniqueness of two-valued sets above, it would take a value as described in (1) for allñ ≥ n. Now, as a process in n, h A u −a,n (z i ) is a lower bounded martingale, and thus it converges almost surely. It can, however, only converge when for some n it belongs to the component of the complement of A u −a,n not taking the value n. Hence we deduce the condition (1).
The condition (3) just comes from the fact that the value at the intersection points is prescribed by the definition of two-valued sets and it satisfies the appropriate condition.
It remains to prove (2), i.e. that Φ A u −a − h A u −a ≥ 0. Note that for all positive f ∈ C ∞ 0 and all n ≥ 2, we have that (Φ A u −a,n − h A u −a , f ) ≥ 0. Thus, we conclude using Lemma 2.3 (iii).
Let us make the following observation about the construction above: (i) In the construction, we only need to use generalized level lines whose boundary values are in [−a, ∞), moreover these generalized level lines never hit themselves. (ii) For a fixed point z ∈ D, it will belong to a component of the complement of A u −a,n with value n only for a finite number of n. Thus, we need only a finite number of level lines to construct the loop of A u −a surrounding z. We now want to use these remarks and the techniques of [ASW17] to prove the uniqueness of the FPS: Proof. First let us prove that if A is a local set such that almost surely Φ A ≥ 0, then a.s. A is a polar set. Given this condition, we have that (Φ A , 1) ≥ 0, and due to the Markov property we know that E[(Φ A , 1)] = 0, thus a.s. (Φ A , 1) = 0. Additionally, we know that G D ≥ G D\A , using again the strong Markov property we get that Hence almost everywhere G D\A = G D , and thus A is a polar set. We now prove the uniqueness of the FPS. Assume first that A u −a ⊆ A . We claim that then A \A u −a is a polar set. Indeed, consider B := A \A u −a . From Lemma 2.3 (2), B is a local set of the zero boundary GFF Φ A u −a . Moreover, one can check that from our conditions on the FPS, it follows that h A u −a + u ≤ h A + u and hence (Φ A u −a ) B ≥ 0. Thus, by the previous argument B is polar. Now, it suffices to prove that a.s. A u −a ⊆ A . We prove the monotonicity using arguments similar to those of Section 6 in [ASW17]. Take A an FPS for u ≥ u and a ≥ a. Suppose by contradiction, A u −a is not contained in A . Then choosing a countable dense set in D, (z i ) i∈N , there must be some z i such that with positive probability during the construction of A u −a a generalized level line enters the component O(z i ) of the complement of A containing z i . Thus, there should be some finite n ∈ N such that with positive probability, η, the n th -level line pointed towards z i , is the first one to enter O(z i ) and η ∩ O(z i ) = ∅. This is, however, in contradiction with Lemma 3.17 and the remark just after the proof: indeed, the boundary values of h A inside O(z) are equal to −a − u ≤ −a − u and by the remarks above the boundary values of η are in [−a, ∞). Thus, the uniqueness follows and thus monotonicity just follows from the construction given in Lemma 4.4.
Remark 4.6. Let us remark that in fact Φ A u −a −h A u −a is almost surely a non-trivial positive measure, unless u ≤ −a on D. Indeed, suppose for example that we work in D and with a zero boundary GFF. In this case, it is known that the circle average (ρ 0 1−r , Φ) around 0 of radius 1 − r converges to 0 as r → 0. But the circle average w.r.t. to h A −a is constantly equal to −a, and the variance of (ρ 0 1−r , Φ A u −a ) also converges to 0. A different way of seeing this is the following. Since is a non-trivial positive measure with positive probability. Then, in order to sample A u −a , we can first explore A u −a/2 and then further explore A −a/2 conditional independently in each connected component of the complement of A u −a/2 . Using that, one can argue that the FPS is nontrivial on every dyadic square it intersects.
4.1. Distance to interior points and boundary for A u −a . We will now give an exact description of the law of the distance of A u −a to interior points and to boundary components. This can be seen as a continuum analogue of Corollary 1 of [LW16]. The proofs follow from Propositions 2.5 and 2.7, that give a way to parametrize the FPS using the distance to an interior point or a boundary component respectively.
Proposition 4.7. Let a > 0 and D a n-connected domain. Moreover, let u be a bounded harmonic function with piecewise constant boundary data and z ∈ D. Take W t to be a Brownian motion started from u(z) and with life-time g D (z, z). If u ≥ −a, then is distributed like the one-sided hitting time of the level −a by W t .
Proof. This follows exactly as the proof of Proposition 20 in [ASW17].
Similarly, we can calculate the extremal distance between boundary components, analogously to Proposition 5 in [LW16].
Proposition 4.8. Let a be a positive number, D a finitely-connected domain, and B a union of connected components of ∂D. Moreover, let u be a bounded harmonic function with piecewise constant boundary data changing finitely many times such that u on ∂D\B is a constant equal to u e ≤ −a. Let W t be a Brownian bridge with starting point: Proof. From the construction of first passage sets (see the observations after the proof of Proposition 4.4), we know that the first passage set can be constructed by using only level lines with boundary values in ≥ −a that do not touch ∂D\B. We can parametrize the part of the construction, that always continues in the connected component containing ∂D\B on its boundary, using its extremal length to ∂D\B. We denote the resulting local set process by (A t ) 0≤t≤τ . Here τ is the first time that this component stops growing. In other words τ := EL(B, ∂D\B) − EL(B ∪ (A u −a \∂D), ∂D\B)) and moreover τ is the first time that satisfies the following property: Restricted to the connected component of O of A τ \D such that ∂D\B ⊆ ∂O, h Aτ + u is the bounded harmonic function with boundary value −a in ∂O\B and u e ∈ B. Using Proposition 2.7 for the underlying GFF Φ, we deduce that is a Brownian bridge from u s to u e and of length EL(B, ∂D\B). From an explicit calculation using Green's identities and Theorem 3.1 we see that: Moreover, similarly we obtain that is equivalent to W τ = −a. Indeed, we can calculate using Theorem 2.1 that Hereū is the bounded harmonic function with values 0 in ∂D\B and 1 in B ∪ A τ . The same calculation yields that for all times smaller than τ , W τ > −a, from where we conclude.
Let us point out the following corollary Corollary 4.9. Let A u −a be an FPS with boundary condition u of Φ, where u is a bounded harmonic function with piecewise constant boundary data. Then A u −a ∩ D is at positive distance of any connected component of ∂D where u ≤ −a. Furthermore, for any ε > 0 and any allowed boundary condition u, there exists a strictly positive probability that the connected component of A u −a started from a given boundary component remains at distance ε from that boundary component.
Notice that if only a part of the boundary satisfies u ≤ −a, then we also know that the FPS stays at a positive distance of any point on this interval. Indeed, this follows from the level line construction: we know from the proof of Theorem 3.19 and the remarks following the proof that for A u −a,b any connected set of u −a,b is entirely part of the boundary of a component of D\A −a,b ; on the other hand we also know that any component with h A u −a,b + u ≤ −a in the complement of A u −a,b , will also be a component of the complement of A u −a . Putting this together we conclude: Corollary 4.10. Let A u −a be an FPS with boundary condition u of Φ, where u is a bounded harmonic function with piecewise constant boundary data. Then A u −a ∩ D is at positive distance of any connected J ⊆ ∂D where there is an open neighborhood J ε such that u(x) ≤ −a for all x ∈ J ε ∩ ∂D.
Finally, let us mention that one can, via the same proof, also prove an analogue of Proposition 4.7 for two-valued local sets:.
Proposition 4.11. Let a, b be positive with a + b ≥ 2λ , D a n-connected domain, and B a union of connected components of ∂D. Moreover, let u be a bounded harmonic function with piecewise constant boundary data changing finitely many times such that u on ∂D\B is a constant equal to u e / ∈ (−a, b). Let W t be a Brownian bridge with starting point:
4.2.
Level lines as boundaries of FPS. Now, let us see that level line can be identified with the boundary of certain FPS. Let D be finitely connected domain and ∂ ext D be the outermost connected component of ∂D, that is to say the one that separates D from infinity. We consider two boundary points x 0 = y 0 ∈ ∂ ext D that split ∂ ext D in two boundary arcs, B 1 and B 2 such that y 0 , B 2 , x 0 come in clockwise order. Assume that u is a bounded harmonic function with piecewise boundary values which are smaller than or equal to −λ on B 2 , inf B 1 u > −λ and inf ∂D\∂extD u ≥ λ. Note that thanks to Lemma 3.2 there is a generalized level line η of Φ + u starting at y 0 and targeted at x 0 . Proof. From the monotonicity of two-valued sets (Theorem 3.19) and the construction of the FPS (Lemma 4.4) we see that for any n ≥ a ∨ b, we have that A u −a,b ⊆ A u −a,n ⊆ A u −a and, furthermore, Moreover, as by construction A u −a,b is connected to the boundary, we deduce that A u −a,b is contained on the union of connected components of A u −a ∩ A u b that are connected to the boundary. We will now prove the opposite inclusion. To do this, it suffices to show that for every connected does not intersect ∂O. We are now ready to prove the uniqueness of two-valued sets A u −a,b for general boundary data in n-connected domains. See Theorem 3.19 for the setting and precise statement.
Uniqueness of two-valued sets. In the proof of Theorem 3.19, we showed the existence of a twovalued set. Denote this set by A u −a,b . Suppose A is another two-valued set coupled with the same GFF, i.e. it satisfies the condition ( ) given just before Theorem 3.19.
First, notice that A has to be connected to the boundary: indeed, suppose for contradiction that there is a component B of A that is not connected to the boundary. Consider the component O of D\B that has B on part of its boundary, and let B = ∂O ∩ B. WLOG suppose that the boundary conditions on B are equal to −a. Then, as in the last paragraph of proof of Lemma 4.14, we see that the FPS of height −a also contains B, and that moreover B is also not connected to the boundary as a subset of the FPS. However, from the construction (Proposition 4.4) and uniqueness of the FPS (Proposition 4.5) we know that the FPS is connected to the boundary. Now, inspecting the proof of Lemma 4.14, we conclude that exactly the same proof gives that almost surely the union of the components of A u −a ∩ A u b that are connected to boundary is equal to A . However, we know from Lemma 4.14 that it is equal to A u −a,b and the claim follows.
Finally, let us state here the proposition about A −a,b are locally finite. In fact this relies on a result from [ALS18c], that proves the local finiteness of first passage sets, using a proof that uses the relation of FPS to Brownian loop-soups. We should maybe stress that none of [ALS18c] depends on local finiteness of A −a,b .
Proposition 4.15. Consider a finitely-connected domain with any piece-wise constant boundary condition u. For any a, b ≥ 0, the two-valued set A u −a,b is almost surely locally finite. Proof. Proposition 5.7 of [ALS18c] gives us the local finiteness of A u −a for any choice of a ≥ 0, of a piece-wise constant boundary condition u and of a finitely-connected domain D. But now, note that if O is a simply connected component of D\A u −a , then it is also a connected component of either D\A u −a or D\ A u b -indeed, this follows from the uniqueness of the FPS, as when constructing say, A u −a from A u −a,b one keeps all the components that have boundary condition less than or equal to −a. As there are only finitely many non-simply connected components of D\A u −a , we conclude. Remark 4.16. Let us stress that if one restricts oneself to the simply-connected case, then one can prove local finiteness of A u −a without relying on the construction or properties of FPS and TVS in multiply connected domains. In fact, the proof in [ALS18c] just uses properties of the Brownian loop-soup. In particular, this provides an alternative proof of Proposition 3.11.
The Minkowski content measure of the FPS
The aim of this section to identify the measure as a Minkowski content measure in a certain gauge: Theorem 5.1. The measure ν A u −a is a measurable function of A u −a . Moreover, it is proportional to the Minkowski content measure in the gauge r → | log(r)| 1/2 r 2 . More precisely, almost surely for any continuous f compactly supported in D, Let us stress that there are two non-obvious statements in this theorem: (1) the fact that the measure ν A u −a can be obtained as a measurable function of A u −a ; (2) and the identification of this function as the Minkowski content measure of A u −a in a certain gauge. As a simply corollary, we can deduce an almost sure statement on the dimension of A u −a : Corollary 5.2. If the harmonic function u is not everywhere less or equal to −a (i.e. if A u −a \∂D is non-empty), then A u −a is a.s. of Minkowski dimension 2. Moreover, observe that the expected value of ν A −a is equal to a in the sense that for any continuous function f , the expectation of (f, ν A −a ) is equal to a times D f (z)dz. Thus, the fact that the measures Φ A −a converge to Φ as a → ∞, can be interpreted as saying that the GFF is a limit of recentered Minkowski content measures of certain growing random sets.
Corollary 5.3. For any n ≥ 1, let A n be random set with the law of the FPS A −n in D, and denote by ν An its Minkowski content measure in the gauge | log(r)| 1/2 r 2 . Then, the recentered measures ν An − E[ν An ] converge in law in H −1 (D) to a zero boundary GFF on D.
Proof. Note that Φ − ν An − n has the law of a GFF in Φ An . As A n D for the Haussdorf topology, Equation (2.3) and Dominated convergence theorem imply that E Φ An 2 H −1 (D) converges to 0, giving the result.
Let us also make two remarks.
Remark 5.4. In Theorem 5.1, we restricted to functions supported compactly away from ∂D, because by construction, ν A u −a (∂D) = 0, yet ∂D might be irregular enough to have a positive, or even infinite Minkowski content measure. Thus, ν A u −a is rather the Minkowski content measure of A u −a \∂D. Remark 5.5. In fact, the theorem holds in a more general setting (with basically the same proof ). Indeed, take A to be a local set of the GFF Φ such that there exists a deterministic K such that where h A is harmonic of D\A and ν A is a non-negative measure supported on A. Then, ν A is given by the Minkowski content measure of A\∂D as in Theorem 5.1.
The rest of this section is devoted to the proof of Theorem 5.1. We start by looking at the decomposition of the Gaussian multiplicative chaos (GMC) measure induced by the Markovian decomposition of the field w.r.t the FPS. Then, we use this decomposition to see how the geometry of the FPS encodes its height and to give a short, but somewhat unexpected proof of why Φ A u −a is a measurable function of A u −a . Next, we observe that a careful analysis of the proof gives in fact an explicit expression of Φ A u −a − h A u −a in terms of A u −a , and finally we identify this expression with the Minkowski content measure of A in the gauge r → | log(r)| 1/2 r 2 . 0 In several of the arguments it will be more convenient to work with A u b , but notice that by the symmetry of the GFF this is equivalent.
5.1. Decomposition of the GMC using the FPS. In [APS17], the decomposition of the GMC measures w.r.t. to the FPS and w.r.t. more general local sets was used to view the GMC measure of the GFF as a multiplicative cascade. In particular, it was observed that in the case of the FPS one recovers a construction of [Aïd15].
As in [APS17] the decomposition of the GMC measures was stated w.r.t. the sets A u b instead of A u −a (as this allows to consider positive values of γ), we will do the same here.
. Then from the Proposition 4.1 of [APS17] (see also Theorem 2.5 in [APS18]) it follows that: Proposition 5.6. Take 0 ≤ γ < 2, i.e., 0 <γ < 2 √ 2π. Then, for all continuous function f compactly supported in D, where g D was defined in (2.1). Moreover, conditioned on A u b , we have that As from condition (1) of the definition of the FPS it follows that in the coupling (Φ, A u −λ ), the harmonic extension h A u −a is a measurable function only of the set A u −a , we infer a direct but useful corollary: Observe that this means that in order to construct the Liouville measure µ γ we do not need the positive measure ν A u −a .
5.2.
How to read the height of the FPS from its geometry. We have seen that the FPS is measurable w.r.t. the underlying free field. Moreover, we just saw that h A u −a is a measurable function only of A u −a itself. This tells us in particular that knowing the boundary conditions u and the set A u −a is sufficient to implicitly find the value of −a: for example if x ∈ D, it is given by, for example, u(x) + h A u −a (x). A natural question is whether one can also determine −a in an explicit way by just looking at the geometry set A −a . The following proposition says that this is indeed the case, and it presents some of the basic tools later used to show how to construct Φ A u −a as an explicit function of A u −a . As this is more of a little side-story, we state and prove it in the simplest setting of the unit disk and the zero boundary GFF Φ. A similar result can be stated and proved more generally for finitely-connected domains and general boundary conditions. Proof. It is slightly more convenient to work with A b instead of A −a . Denote by From Proposition 5.6 it follows that for any z ∈ D, Thus, the mean of A r is equal to 2πe −γ √ 2πb . It remains to show that its variance tends to zero as r → 1. By the Markov decomposition w.r.t.
However, this expected value can be explicitly calculated from a Gaussian estimate and we can get that the right-hand term converges to e 2γ √ 2πb 2π 0 2π 0 e γ 2 G D (re iθ 1 ,re iθ 2 ) dθ 1 dθ 2 < ∞ as γ < 1. Finally, using dominated convergence and the fact that a.e. G D (re iθ 1 , re iθ 2 ) converges to 0 as r → 1, we have that E A 2 r converges as r → 1 to (2πe −γ √ 2πb ) 2 .
Measurability of Φ
We will now give a short argument to prove the measurability of Φ A u −a w.r.t. A u −a : Proposition 5.9. Let Φ be a GFF in D. We have that Φ A u −a is a measurable function of the set A u −a . Proof. In fact it is again clearer to prove the claim for the sets A u b . To prove the measurability of Φ A u b , take γ ∈ (0, 2) and consider µ γ the γ−GMC measure corresponding to √ 2πΦ. The proof is based on three measurability statements: is a measurable function of Φ: this follows from Lemma 4.4.
(2) µ γ is a measurable function of A u b and Φ A u b : this follows from Lemma 5.7. (3) Φ is a measurable function of µ γ for a fixed γ < 2: this follows from Remark 2.11. Thus, if F is a bounded measurable function, we have that Note that when A is a local set, it follows from the definition that Φ A and Φ A are conditionally independent given A. Hence This proves the proposition. 5.4. Explicit expression for the measure ν A u −a . Let us now derive an explicit expression for the measure ν A u −a . In fact, it is simpler to continue working with A u b , and thus write ν Notice the difference in sign in the definition due to the fact of taking the FPS in the other direction. Our aim is to prove that: Proposition 5.10. Almost surely for all f continuous function compactly supported in D, By symmetry, the same holds for ν A u −a . To prove this proposition we will first look at the step 3 in the proof of Proposition 5.9 in more detail. Indeed, this step stems from (2.5), which identifies the GFF as the derivative of the GMC measures at γ = 0, i.e. for any compactly supported continuous f : Let us take conditional expectation with respect to F A u b and write . Now, by Proposition 2.9 ∂ γ (µ γ , f ) converges in L 1 as γ → 0 + towards √ 2π(Φ, f ) and thus one can interchange the limit as γ → 0 + and the conditional expectation. For the same reason, one can also change the order of the γ−derivative and the conditional expectation. We obtain: We claim that: Lemma 5.11. Almost surely, for all compactly supported functions f on D, −a (z, z))eγ 2 2 g D\ A u −a (z,z) dz.
By symmetry, the same holds for ν A u −a . Proof. By Proposition 5.6, we have that Using the fact thatγh A u b (z) +γ 2 g D\ A u b (z, z)/2 converges to 0 as γ → 0 and (5.2), we obtain the result for a fixed function f . Thus, this also holds for a countable dense family, for the uniform convergence on compact subsets, in the space of continuous functions on D. Now, notice that as g D\ A u b (z, z) ≤ g D (z, z) < C r for any compact subset D r ⊂ D, the measure defined by the RHS of (5.3) on any such compact subset D r is of bounded total mass. As the space of such measures is compact, they have a weak limit and (5.3) holds simultaneously for all f continuous on D r . As we can take D r → D, the lemma follows.
In order to deduce Proposition 5.10, we will now make use of a simple estimate relating g D (z, z) as defined in (2.1) with the distance of z to ∂D. This result follows directly from Koebe's quarter theorem in simply connected domains, but in the case of n−connected domains requires an argument. Proof. If D is simply connected, then e 2πg D (z,z) equals the conformal radius CR(z, D), and the results follows from Koebe's quarter theorem, with c = 4.
For the other cases, note that 2πg D (z, z) is the value at z of the harmonic extension of the boundary values x → log(|z − x|), x ∈ ∂D. This already implies that 2πg D (z, z) ≥ log(d(z, ∂D)).
Take B z a Brownian motion started at z ∈ D and define T ∂D as the first time B z hits ∂D. Let δ be the smallest diameter of an inner hole of D. With the Beurling's estimate (Theorem 3.76 of [Law08]) we get that P(|z − B z T ∂D | ≥ rd(z, ∂D)) ≤ Cr −1/2 , where C is a constant not depending on z, as long as the open disk B(z, rd(z, ∂D)) does not entirely contain an inner hole of D. A sufficient condition for that is rd(z, ∂D) ≤ δ. Thus, We can now prove Proposition 5.10.
Proof of Proposition 5.10: First, note that D\A u −a has a.s. finitely many non simply connected components. By 5.12, there is some c > 1, such that for all z ∈ D\A u As both limits are equal to (5.1), the lemma follows.
5.5. Identification with the Minkowski content measure. In this section, we finish the proof of Theorem 5.1, by showing that the measure ν A u −a agrees with the Minowski content in the gauge r → | log(r)| 1/2 r 2 . The proof is "deterministic" and gives a rather general strategy for identifying two measures defined on a fractal set and satisfying coherent scaling.
Let us first introduce some notation: given F a function on (0, +∞) and a compact subset A of D, we define the measure M(F ) = M A (F ) = 1 D\A F (d(z, A))dz.
Proposition 5.10 can be then interpreted as saying that ν A u −a = lim γ→0 M(F γ ). Notice that in this limit, as we are on a bounded domain, the extra indicator function plays no role. The missing part of Theorem 5.1 then follows from: To get some insight into why this proposition might be true, notice that both families J r and F γ satisfy the same scaling property: J r (s) = β −1/2 J r (s β ), for β = | log(r)| | log(r )| ; F γ (s) = β −1/2 F γ (s β ), for β = γ 2 γ 2 .
Moreover, for γ = | log r| −1/2 , one can notice that the maximum of F γ is also of order | log r| −1/2 and this maximal value is taken at distance O(r). We prove the proposition in two steps: first we show that if both the limits of M(F γ ) as γ → 0 and of M(J r )/2 as r → 0 exist, then they are equal. Then, we show that if the limit of M(F γ ) exists then so does the limit of M(J r ). From now on, let us take A a closed set such that for all compactly supported f , (M(F γ ), f ) = (M A (F γ ), f ) converges as γ tends to 0. 5.5.1. Relationship between M(F γ ) and M(J r ).
5.5.2. Convergence of M(J r ). In the last paragraph, we saw that a certain integrated version of the Minkowski content measure M(J r ) converges. We now strengthen it to full convergence. Throughout the section we suppose that (M(F γ ), f ) converges as γ → 0 for all f continuous compactly supported in D.
The idea of the proof is to approximate J r by linear combinations n k=1 c k F γ k . As J r is not continuous on [0, 1] and not 0 in 0, one cannot expect a uniform approximation over the whole interval. Thus we have to use an uniform approximation over a subinterval of [0, 1] for certain smoothed versions of J r , and to argue that the cut out parts do not matter.
As a first step let us show that the Minkowski content measure is bounded. In fact we will show something a tiny bit stronger, that is useful for us in the later approximations: Proof. It is enough to prove the result for f non-negative. Let γ 0 > 0. There is C > 0 such that 1 s∈(1/4,1/2) ≤ CF γ 0 (s) and 1 s∈(1/2,1) | log s| ≤ CF γ 0 (s).
From this one can see that ..., and as $q_r$ satisfies the scaling $q_r(s) = \beta^{-1/2} q_{1/2}(s^\beta)$ with $\beta = \frac{\log 2}{|\log r|}$, it follows that ... Thus, the lemma follows from the convergence of $(M(F_\gamma), f)$. | 2018-05-24T09:26:05.000Z | 2017-06-23T00:00:00.000 | {
"year": 2017,
"sha1": "043e8840ccdfea803192452b58f578347f6db5b6",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1706.07737",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "043e8840ccdfea803192452b58f578347f6db5b6",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
247023581 | pes2o/s2orc | v3-fos-license | Concurrent changes in physical activity and body mass index among 66 852 public sector employees over a 16-year follow-up: multitrajectory analysis of a cohort study in Finland
Objectives To identify concurrent developmental trajectories of physical activity and body mass index (BMI) over time. Design Prospective cohort study, repeated survey. Setting Cohort study in Finland. Participants 66 852 public sector employees, who have been followed up for 16 years. Outcome measures Shapes of trajectories of changes in physical activity and BMI. Results At baseline, mean age was 44.7 (SD 9.4) years, BMI 25.1 (SD 4.1) kg/m2 and physical activity 27.7 (SD 24.8) MET hours/week. Four clusters of concurrent BMI and physical activity trajectories were identified: (1) normal weight (BMI <25 kg/m2) and high level of physical activity (30–35 MET hours/week), (2) overweight (BMI 25–30 kg/m2) and moderately high level of physical activity (25–30 MET hours/week), (3) obesity (BMI 30–35 kg/m2) and moderately low level of physical activity (20–25 MET hours/week) and (4) severe obesity (BMI >35 kg/m2) and low level of physical activity (<20 MET hours/week). In general, BMI increased and physical activity decreased during the follow-up. Decline in physical activity and increase in BMI were steeper among obese respondents with low level of physical activity. Conclusions Changes in BMI and physical activity might be interconnected. The results may be of interest for both clinicians and other stakeholders with respect to informing measures targeting increasing physical activity and controlling weight, especially among middle-aged people. Additionally, the information on the established trajectories may give individuals motivation to change their health behaviour.
INTRODUCTION
Both obesity and physical inactivity have a negative impact on multiple aspects of health, and both increase the risk of mortality. [1][2][3] Ageing is associated with weight gain and decreasing physical activity, 4-6 but less is known about whether these changes occur simultaneously and how much heterogeneity there is in the developmental trajectories of body weight and physical activity.
Few studies have examined heterogeneity in weight development over time more closely. A study among 30-year-old US war veterans identified five different, but all increasing, trajectories of body mass index (BMI) over a 6-year follow-up. 6 However, the steepness of the trajectories varied: while the participants without obesity showed only a small increase in BMI, the increase was much steeper among the participants with obesity. Another study from the USA, conducted on overweight participants aged 60 years, identified seven weight trajectories, most of which showed either stable overweight, continuously increasing BMI or relapse after weight loss. Even in the two trajectory groups showing a decrease in BMI, the participants remained overweight. 7 Physical activity has also been reported to change over time. Leisure time physical activity among women has previously been reported to increase until the age of 50 years and to start decreasing after that. 4 For men, the change in leisure time physical activity has been reported to vary between different types of activity: while moderate physical activity increased, low and high levels decreased. 5

Strengths and limitations of this study
► Large cohort of 66 852 participants.
► Repeated measures of physical activity and body mass index (BMI) over 16 years.
► Only leisure time physical activity was taken into account, leaving out work-related activity.
► The self-reported nature of estimates of BMI and physical activity might lead to information bias.

Studies concerning trajectories of physical activity have found variation in the development of activity. A 22-year follow-up study from Canada among those aged 18-60 years identified trajectories of consistently inactive, increasing, consistently active and decreasing leisure time physical activity. 8 Another study, conducted in the USA among 120 initially overweight people aged 54 (±9) years, measured activity with pedometers and identified 'sedentary' and 'low active' groups (decreasing daily count of steps), a 'somewhat active' group (persistent daily count of steps) and an 'active' group (increased daily count of steps) over an 18-month follow-up. 9 The association between higher levels of physical activity and lower BMI has been established in adults, 10 11 and there has been some evidence that this association might be most pronounced when physical activity exceeds 150 min/week. 10 There is, however, limited knowledge on simultaneous changes in these two factors. In a short-term follow-up (18 months) among overweight Canadians aged 54 years, a trajectory of increasing activity was associated with a trajectory of greater weight loss. 9 There is as yet little knowledge on these two factors over longer follow-up. It is also unknown whether developmental patterns of BMI and physical activity differ by age or by gender.
To address this gap in the literature, the objective of this study was to examine concurrent changes in BMI and physical activity over a 16-year follow-up by using group-based multitrajectory analysis. While conventional statistics show a trajectory of average change in an outcome over time, group-based trajectory modelling can distinguish and describe subpopulations (clusters), which may differ substantially from each other and from the average trajectory seen in the entire population. The aim was also to examine whether the distinguished trajectories differ between those aged ≤50 years and those aged >50 years and whether the results differ when the study population is stratified by gender.
Study population
Participants were drawn from the Finnish Public Sector (FPS) cohort study, a dynamic cohort with 2-4-year follow-up intervals initiated in 1998/2000. It consists of employees in the municipal services of 10 Finnish towns and 21 public hospitals who had a job contract for a minimum of 6 months. In the year 2000, the most common occupations of the respondents were registered nurse (23%), teacher (19%), practical nurse (13%) and cleaner (10%). The FPS has been described in more detail elsewhere. 12

Physical activity was assessed with a questionnaire at all survey waves. The respondents were asked to estimate their average weekly hours of leisure time physical activity/exercise and commuting activity within the previous year. The time spent on activity at each intensity level, in hours per week, was multiplied by the average energy expenditure of that activity, expressed in metabolic equivalents of task (MET). 14 The MET is a ratio of the rate of energy expenditure to the resting rate; one MET, corresponding to an oxygen consumption of 3.5 mL per kg per minute, is the energy expenditure when calmly sitting down. Weekly physical activity was expressed as MET hours/week and categorised as low (<14 MET hours/week), moderate (14 to <30 MET hours/week) or high (≥30 MET hours/week). 15 16 This categorisation was chosen since physical activity >14 MET hours/week has been reported to be associated with cardiovascular disease risk 17 and an activity level of 30 MET hours/week has been shown to be needed for weight management. 18 14 MET hours/week is approximately the equivalent of 140 min of brisk walking weekly. The definition of physical activity in the survey is presented in online supplemental table E1.
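To make the MET-hours arithmetic concrete, here is a minimal Python sketch of the computation and categorisation described above; the per-activity MET values are illustrative assumptions, not the questionnaire's exact coefficients (only the 14 and 30 MET hours/week cut-offs are taken from the text).

```python
# Illustrative MET values per intensity level (assumed, not the study's exact coefficients)
MET_PER_LEVEL = {"walking": 3.5, "brisk_walking": 5.0,
                 "jogging": 8.0, "running": 11.0}

def weekly_met_hours(hours_per_week: dict) -> float:
    """Sum hours/week at each intensity, weighted by its MET value."""
    return sum(MET_PER_LEVEL[level] * h for level, h in hours_per_week.items())

def activity_category(met_hours: float) -> str:
    """Categorise using the cut-offs quoted in the text (14 and 30 MET hours/week)."""
    if met_hours < 14:
        return "low"
    return "moderate" if met_hours < 30 else "high"

# 2 h brisk walking + 1 h jogging = 2*5 + 1*8 = 18 MET hours/week -> "moderate"
print(activity_category(weekly_met_hours({"brisk_walking": 2, "jogging": 1})))
```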
Statistical analysis
The characteristics of participants were reported as means and SD or as absolute numbers and percentage when appropriate.
Group-based multitrajectory analysis was used to distinguish different developmental trajectories of physical activity and BMI, both treated as continuous variables. This method is a form of finite mixture modelling for analysing longitudinal repeated-measures data. While conventional statistics show a trajectory of average change in an outcome over time, group-based trajectory modelling is able to distinguish and describe subpopulations (clusters) existing within a studied population. A censored (also known as 'regular') normal model of group-based multitrajectory analysis was used. The goodness of model fit was judged by running the procedure several times with the number of trajectory clusters increasing from one up to five, until the smallest group fell below the preagreed cut-off of 5%. The Bayesian Information Criterion, the Akaike Information Criterion and the average posterior probability were used as criteria to confirm the goodness of fit. A cubic regression was applied. The trajectory analysis was conducted in two age groups, ≤50 and >50 years, as previous studies have suggested that changes in BMI and physical activity may vary depending on age. 19 20 The sensitivity analysis was conducted by dividing both age groups by gender. No adjustments for co-variables were made.
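The cluster-number selection logic (fit k = 1 to 5, judge fit by BIC, stop once the smallest group drops below 5%) can be illustrated with a hedged Python sketch. scikit-learn's GaussianMixture stands in here for the censored normal multitrajectory model that the study fitted with the Stata 'traj' plugin, and the data are synthetic, so this shows the selection rule rather than the study's actual procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: rows are subjects; columns are BMI and MET-hours at 5 survey waves
X = np.column_stack(
    [rng.normal(25 + 0.3 * t, 3, size=1000) for t in range(5)]   # BMI drifts up
    + [rng.normal(28 - 0.5 * t, 8, size=1000) for t in range(5)] # activity drifts down
)

chosen = None
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    smallest = np.bincount(gm.predict(X), minlength=k).min() / len(X)
    print(f"k={k}: BIC={gm.bic(X):.0f}, smallest group={smallest:.1%}")
    if smallest < 0.05:        # pre-agreed cut-off from the text
        break
    chosen = k                 # keep the last k whose smallest group is >= 5%
print("chosen number of trajectory clusters:", chosen)
```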
All the analyses were performed using Stata/IC Statistical Software: Release 16 (StataCorp, College Station, Texas, USA). The additional Stata module 'traj' was required to conduct group-based trajectory analysis. The module is freely available for both SAS and Stata software (Jones and Nagin 1999; 2013).
Patient and public involvement
Participants of research were not involved in setting the study question and outcome measures and were not involved in the design and implementation of the study or writing the manuscript.
RESULTS
During the 16-year follow-up, the 66 852 participants had reported body weight and height in an average of 3.5 (SD 1.3) study waves and physical activity in 3.6 (SD 1.3) study waves. Women predominated in the sample (53 468; 80%). In the younger group (aged ≤50 years), the mean age was 39.8 (SD 7.2) years, BMI at baseline was 24.6 (SD 4.0) kg/m2 and average physical activity was 28.8 (SD 25.5) MET hours/week. In the older group (aged >50 years), the mean age was 55.0 (SD 2.9) years, BMI was 25.6 (SD 4.2) kg/m2 and physical activity was 26.7 (SD 24.1) MET hours/week.
A four-trajectory model was chosen as the five-trajectory model had resulted in a smallest group below the preagreed cut-off of 5% (table 1). Four concurrent trajectories of BMI and physical activity were identified for both age groups (figures 1 and 2): (1) individuals with normal weight (BMI <25 kg/m2) and a high level of physical activity, (2) individuals with overweight (BMI 25-30 kg/m2) and a moderately high level of physical activity, (3) individuals with obesity (BMI 30-35 kg/m2) and a moderately low level of physical activity and (4) individuals with severe obesity (BMI >35 kg/m2) and a low level of physical activity (<20 MET hours/week).
Group 1: individuals with normal weight and high level of physical activity In this group, the younger respondents demonstrated a stable high level of physical activity with a slight rise towards the end of follow-up and their BMI increased slightly throughout the follow-up. For the older respondents, the level of physical activity decreased markedly during the follow-up, even if there was a slight rising pattern in the middle of follow-up. At the same time, the trajectory of BMI remained flat.
Group 2: individuals with overweight and moderately high level of physical activity In this group, the level of physical activity declined in both age groups, but the decline was steeper among the older respondents. In younger respondents, the decrease of physical activity slowed down slightly towards the end of follow-up. Simultaneously, BMI was steadily growing among younger respondents, while remaining relatively flat in older group.
Group 3: individuals with obesity and moderately low level of physical activity
The physical activity and BMI trajectories were similar to those observed in the group of overweight individuals with a moderately high level of physical activity (group #2), but with a slightly steeper decline in physical activity and a steeper increase in BMI.
Group 4: individuals with severe obesity and low level of physical activity Also in this group, physical activity decreased and BMI increased. In younger respondents, this development slowed down at the end of the follow-up for both physical activity and BMI. In older respondents, by contrast, the decrease in physical activity accelerated towards the end of the follow-up, with a simultaneous slight decline in BMI.
Sensitivity analysis
Stratifying the respondents by gender in addition to age resulted in similar findings with few exceptions (online supplemental figures E1-E4 and online supplemental table E2). Among normal weight or overweight respondents, the decline in physical activity was steeper among men compared with women.
DISCUSSION
This prospective cohort study of 66 852 public sector employees, followed repeatedly at 4-year intervals, investigated trajectories of concurrent changes in BMI and physical activity over 16 years. Four trajectory clusters were identified both for participants aged ≤50 years and for those aged >50 years: (1) individuals with normal weight and a high level of physical activity; (2) individuals with overweight and a moderately high level of physical activity; (3) individuals with obesity and a moderately low level of physical activity and (4) individuals with severe obesity and a low level of physical activity. On average, BMI increased and physical activity decreased during the follow-up. Some trajectories nevertheless demonstrated distinctive features. Over time, the respondents with normal weight or overweight gained only a little weight while preserving a high or moderately high level of physical activity, even if the intensity of physical activity mildly decreased, especially in older respondents. The decrease in physical activity and increase in BMI were steeper among the respondents with obesity or severe obesity, who already had a moderately low or low level of physical activity at the start of the follow-up. Among the normal weight or overweight respondents, the decline in physical activity was steeper among men compared with women.

The observed age-related weight gain is in line with previous studies, 4-6 21 as is the decline in physical activity. 4 22 23 Previous studies have also shown that the increase in BMI slows down with advancing age, and this was supported by the present findings: the rise in BMI was steeper in the younger respondents. 24 25 During the follow-up, the decline in physical activity mirrored the increase in BMI. Similar findings have been reported before: several studies conducted among middle-aged adults have observed an association between physical activity and weight gain. 10 11 26 27 This association has been described as dose dependent: physically active individuals gain less weight than their inactive peers. 11 The current results support this finding, since the increase in BMI was less steep in the more active groups. The amount of activity needed to prevent weight gain has been debated. Some studies have concluded that current activity recommendations are not sufficient to prevent weight gain and that higher activity is needed to remain in the normal-weight category. 10 11 26 This is in line with the current findings: only high physical activity was associated with normal weight.
The strengths of the study were the long follow-up of 16 years, repeated measurements of physical activity and BMI, and a large sample size. To our knowledge, there are no previous multitrajectory analyses of the relation between physical activity and BMI conducted in adults.
The study also has some limitations. Physical activity was self-reported, and only leisure time and commuting activity were inquired about; thus, physical activity at work was not considered. The distribution of physical activity intensity was skewed: most of the participants were at least somewhat active, and even in the least active group the mean activity level was approximately 18 MET hours/week, which is approximately the equivalent of 3 hours of brisk walking weekly. BMI was also based on self-reported weight and height, which may cause recall and information bias, possibly resulting in under-reporting of body weight. 28 Most of the participants had BMI >25 kg/m2, indicating overweight or obesity (62% in the age group of ≤50 years and 68% in the older group), which may reflect the current overweight and obesity pandemic. The cohort included predominantly working-age women employed in the public sector. Therefore, the results might not be directly generalizable to the entire population, since behaviour might vary, for instance, among unemployed people or entrepreneurs. Moreover, the public sector often employs people with higher socioeconomic status, who might have more knowledge of and financial resources for healthy lifestyle choices compared with manual workers.
The results may be of interest for both clinicians and other stakeholders with respect to informing measures targeting increasing physical activity and controlling weight, especially among middle-aged people. Additionally, the information on the established trajectories may give people more motivation to change their health behaviour. Further research may reveal risk factors that affect developmental trajectories seen in this study. Such factors may be, for example, gender, socioeconomic status, smoking, alcohol consumption and concurrent health disorders among others.
CONCLUSIONS
Changes in BMI and physical activity might be interconnected. The normal weight or overweight respondents gained only a little weight while preserving a high or moderately high level of physical activity. Compared with the normal weight trajectories, the decrease in physical activity and increase in BMI were markedly steeper in the obese or severely obese trajectories, which also had a moderately low or low level of physical activity. The findings were similar in both age groups. Among the normal weight and overweight trajectories, the decline in physical activity was steeper among men compared with women. Since physical inactivity and overweight are both risk factors for many diseases, more research is needed to develop interventions that could simultaneously affect both.
Twitter Jenni Ervasti @JenniErvasti1
Contributors All the authors substantially contributed to the conception and design of the work, the interpretation of the results and revising it critically for important intellectual content. JE, JV and MK were responsible for the acquisition of data for the work. MS and JP were responsible for the statistical analysis. RT and MS were responsible for drafting the work. All the authors gave final approval of the version to be published and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. JV was a guarantor accepting full responsibility for the work, having access to the data, and controlling the decision to publish.
Funding This study was supported by funding granted by the Academy of Finland (Grants 332030 to SS; 633666 to MK; 321409 and 329240 to JV); NordForsk (to MK and JV); the UK MRC (Grant K013351 to MK); Hospital District of Southwest Finland (to SS).
Competing interests None declared.
Patient consent for publication Not applicable.
Ethics approval The ethics committee of the Hospital District of Helsinki and Uusimaa approved the study (registration number HUS/1210/2016). Participants gave informed consent to participate in the study before taking part.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available on reasonable request. Data may be obtained from a third party and are not publicly available. We are allowed to share anonymised questionnaire data of the Finnish Public Sector Study on application with bona fide researchers with an established scientific record and bona fide organisations. For information about the Finnish Public Sector Study, contact Professor Mika Kivimaki (mika.kivimaki@helsinki.fi) or Dr Jenni Ervasti (jenni.ervasti@ttl.fi).
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/. | 2022-02-23T06:23:23.097Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "9724af7e0116002e2cbe7365082dbe76bd816e9f",
"oa_license": "CCBYNC",
"oa_url": "https://bmjopen.bmj.com/content/bmjopen/12/2/e057692.full.pdf",
"oa_status": "GOLD",
"pdf_src": "BMJ",
"pdf_hash": "d19ce4fc9fb0776f4fd20c1050e0c5b01f35098f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53768053 | pes2o/s2orc | v3-fos-license | A 2.5D Cascaded Convolutional Neural Network with Temporal Information for Automatic Mitotic Cell Detection in 4D Microscopic Images
In recent years, intravital skin imaging has been increasingly used in mammalian skin research to investigate cell behaviors. A fundamental step of the investigation is mitotic cell (cell division) detection. Because of the complex backgrounds (normal cells), the majority of the existing methods produce several false positives. In this paper, we propose a 2.5D cascaded end-to-end convolutional neural network (CasDetNet) with temporal information for accurate automatic mitotic cell detection in 4D microscopic images with few training data. The CasDetNet consists of two 2.5D networks. The first is used for detecting candidate cells with only volume information, and the second, containing temporal information, for reducing false positives and adding mitotic cells that were missed in the first step. The experimental results show that our CasDetNet can achieve higher precision and recall compared to other state-of-the-art methods.
I. INTRODUCTION
Division of cells in the adult mammalian epidermis is important for maintaining the epidermal structure, as these cells replenish eliminated keratinocytes [1]. Cancer, atopic dermatitis, ichthyosis vulgaris, and other skin diseases disrupt the balance between the proliferation and elimination of keratinocytes and create abnormal skin structures [2], [3], [4]. Though detecting mitotic cells (cell division) is essential for investigating cell behaviors, the majority of methods and experiments have been performed on 2D dynamic images, which may overlook important information and result in wrong detections. 3D live cell dynamic images (4D images) can be obtained using a two-photon microscope [1]. A typical slice image of an observed 3D dynamic image is shown in Fig.1, with blue bounding boxes indicating the mitotic cells (cell division). Automatic detection of mitotic cells from such 3D dynamic images (4D images) is a challenging task. Recently, deep learning architectures have demonstrated a powerful ability on computer vision tasks by automatically learning hierarchies of relevant features directly from the input data. Deep convolutional neural networks have been successfully applied to image classification and object detection, and have been the most successful models in the ImageNet classification competition since 2012 [5]. Moreover, Fast Region-based Convolutional Networks (Fast R-CNN) and the Single Shot MultiBox Detector (SSD) are powerful object detection methods, both of which have outperformed several other methods, using a CNN as the base network to perform object detection [6], [7]. However, these methods are designed for 2D natural image detection. In the field of mitotic cell detection, various methods have been proposed, most of which are based on image binarization [8] or segmentation of cells [9]. Though an advantage of those methods is that they do not require a training dataset to train a model, they require proper alignment between each slice or time sequence to obtain good results, which is time-consuming. Anat et al. [10] used a deep learning method, called a pixel-wise method, to improve detection accuracy and accelerate the computation time. This method is based on 2D patch classification using a simple CNN network and takes considerable computation time. Though Fast R-CNN and SSD, which are widely used for object detection in natural images, could be applied, they would produce several false positives because the object (a mitotic cell) is similar to the background (normal cells), as shown in Fig.1. In this paper, we propose a 2.5D cascaded end-to-end convolutional neural network (CasDetNet) with temporal information for accurate automatic detection of 4D (x, y, z, t) mitotic cell division events in epidermal basal cells with few training data. The CasDetNet consists of two 2.5D networks. The first is used for detecting candidate cells with only volume information, and the second, with temporal information, is used for reducing false positives (normal cells) and adding mitotic cells that are missing in the first step. We use a 2.5D CNN as the base network. Compared to a conventional 2D CNN, our 2.5D CNN (a 2D image with neighboring slices) can include more information for detection (the first step) and reduction of false positives (the second step). Though a 3D CNN can include more information than 2D and 2.5D CNNs, only a limited number of training samples (3D volumes) would be available, which causes overfitting.
Results show that CasDetNet can deliver higher precision and recall compared to other advanced methods.
The paper is organized as follows. Section II introduces the proposed CasDetNet for mitotic cell detection. Section III describes the experimental results. Finally, Section IV presents the conclusion.
II. THE PROPOSED NETWORK
The proposed CasDetNet for detection of mitotic cells is shown in Fig.2. It comprises two 2.5D networks. The first network is used to detect candidate cells using only volume information and the second, which contains temporal information, is used to reduce false positives and to add mitotic cells that were missing in the first step. The second network is cascaded to the first network and the two networks are then trained simultaneously (end-to-end training). The details regarding the first and second networks will be described in subsections II-A and II-B, respectively.
A. The first network for detection of candidate cells using volume information

The first network for detecting candidate cells is motivated by Fast R-CNN, using local features to establish regions of interest (ROIs). The goal is to make the network's hidden layers detect candidate mitotic cells. The original Fast R-CNN takes a 2D image as input and produces a set of ROIs as detection results. The size of the training set and the network architecture determine the quality of the detection result. Further, the conventional Fast R-CNN's drawback is that it loses 3D spatial information, which is important for accurate mitotic cell detection. Though we could extend the conventional Fast R-CNN to a 3D version for 3D volume images, the number of training samples would be considerably limited, resulting in over-fitting. Thus, we propose a 2.5D Fast R-CNN as our first detection network. As shown in Fig.1, three slice images {s_{-1}, s, s_{+1}} are used as input to detect the candidate cells in the target slice image {s}; we call this a 2.5D network. The outputs (ROIs) are denoted {o_1, o_2, o_3, ...}. The advantage of our 2.5D network is that we can use neighboring slice information (2.5D information) to distinguish between mitotic cells and normal cells, which is important for detecting mitotic cells dividing along the z-axis. Figure 2 (upper part) illustrates our first 2.5D network. For our base network, we use the VGG architecture. To enhance the accuracy of the network, we use transfer learning from ImageNet data to VGG. Each slice is processed individually and the processed slices are concatenated to form a 3D volume. We replace the 2D convolutional layer of the original Fast R-CNN with a 3D convolutional layer with a 3×3×3 kernel, followed by a ReLU non-linearity layer, to obtain the network outputs o_i. For each output o_i, the network uses the nearby outputs o_{i-1} and o_{i+1} to generate a concatenated output, which is also used in the cell selection process to generate the volume-selected output O_i^t. Thus, the network generates a set of outputs O_i^t consisting of 3 outputs {o_{i1}, o_{i2}, o_{i3}}^t for each image slice s_i, where t indicates the time point in the 4D data. For each set of outputs O_i^t, we calculate the mean to obtain the first volume output V_i^t, as shown in Fig.3.
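A minimal PyTorch sketch of the 2.5D front-end described above may help: three adjacent slices pass through a shared 2D feature extractor, are stacked into a thin pseudo-volume, and are fused with a 3×3×3 convolution. The layer sizes and channel counts are illustrative assumptions, not the paper's exact VGG-based configuration.

```python
import torch
import torch.nn as nn

class TwoPointFiveDFrontEnd(nn.Module):
    def __init__(self, feat_ch: int = 64):
        super().__init__()
        # shared 2D feature extractor applied to each slice separately
        self.slice_net = nn.Sequential(
            nn.Conv2d(1, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
        )
        # 3D convolution (3x3x3 kernel) fusing the 3-slice pseudo-volume
        self.fuse = nn.Sequential(
            nn.Conv3d(feat_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, slices):                 # slices: (B, 3, H, W) = s-1, s, s+1
        feats = [self.slice_net(slices[:, i:i + 1]) for i in range(3)]
        vol = torch.stack(feats, dim=2)        # (B, C, 3, H, W)
        return self.fuse(vol)                  # fused 2.5D features

x = torch.randn(2, 3, 128, 128)                # two samples, three adjacent slices
print(TwoPointFiveDFrontEnd()(x).shape)        # torch.Size([2, 64, 3, 128, 128])
```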
B. The second network for reduction of false positives
We propose using the second network to reduce the false positives generated by the first network. The results from the first network, V_i^t, contain both correct and incorrect detections. In this section, we use the second network to refine the results (reduce the false positives) by using temporal information across the time points {t_1, t_2, t_3, ..., t_n}.
There are several methods to handle the extra dimension (temporal information). Taking the mean of, or thresholding, an image sequence is a common way to smooth the sequence and to remove over-detections or recover under-detections. We concatenate the volume output V_i^t at time t, the previous output V_i^{t-1}, and the next output V_i^{t+1} together and then apply the second CNN for classification (to reduce false positives), as shown in Fig.2 (lower part). The network generates a temporal set of outputs consisting of these three volume outputs, as shown in Fig.3.
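The temporal refinement step can be sketched in the same spirit: candidate outputs from three consecutive time points are concatenated along the channel axis and classified as mitotic cell versus normal cell. The classifier head below is an assumption for illustration; only the three-time-point concatenation follows the text.

```python
import torch
import torch.nn as nn

class TemporalRefiner(nn.Module):
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 2),              # mitotic cell vs. normal cell
        )

    def forward(self, v_prev, v_t, v_next):
        # V_i^{t-1}, V_i^t, V_i^{t+1}: candidate maps at three time points,
        # concatenated along the channel axis before classification
        return self.net(torch.cat([v_prev, v_t, v_next], dim=1))

v = [torch.randn(4, 1, 64, 64) for _ in range(3)]
print(TemporalRefiner()(*v).shape)         # torch.Size([4, 2])
```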
III. EXPERIMENTAL RESULTS
To validate the effectiveness of our proposed method, we perform experiments on 4D (temporal 3D volume sequence) data from the JSPE Technical Committee on Industrial Application of Image Processing, Appearance Inspection Algorithm Contest 2017 (TC-IAIP AIA2017) [11]. There are five datasets, each containing approximately 80 temporal frames. The data size is approximately 480×480×37. Each dataset contains 13 mitotic cells, as listed in Table III (ground truth). Data augmentation is used in the training phase to increase the number of training samples, so that the overfitting that typically occurs on small datasets can be avoided and the model can be induced to learn to detect the mitotic cells that would generally be under-detected by a 2D network. Cropping, rotation, translation, mirror imaging, noising, and resizing are used in our study. The parameters for cropping, rotation, translation, noising, and resizing are randomly selected as follows: a 224×224 crop at a random location; a random rotation angle in the range 0-180 degrees; a random percentage of Gaussian noise in the range 1%-3%; and a random resizing scale in the range 0.9-1.1. Using these data augmentation methods helps to generate varied combinations of images to train the model. In our experiments, we use the leave-one-out method. Further, for training our model, we use the Adam optimization method. As described in the previous section, the two networks are cascaded and trained simultaneously (end-to-end). The learning rate for Adam starts at 0.5 × 10^-5 and changes to 0.5 × 10^-6 after the first 10k batches, with each batch containing five image slices.
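A hedged Python sketch of the augmentation parameters quoted above (random 224×224 crop, rotation of 0-180 degrees, mirroring, 1%-3% Gaussian noise, resize scale 0.9-1.1); the order of operations and interpolation settings are assumptions.

```python
import random
import numpy as np
from scipy.ndimage import rotate, zoom

def augment(img: np.ndarray) -> np.ndarray:
    """Apply the randomized augmentations described in the text to a 2D slice."""
    out = img.astype(np.float32)
    # random resize with scale in [0.9, 1.1]
    out = zoom(out, random.uniform(0.9, 1.1), order=1)
    # random rotation in [0, 180] degrees
    out = rotate(out, random.uniform(0, 180), reshape=False, order=1)
    # random 224x224 crop at a random location (assumes input larger than ~250 px)
    h, w = out.shape[:2]
    y, x = random.randint(0, h - 224), random.randint(0, w - 224)
    out = out[y:y + 224, x:x + 224]
    # random horizontal mirror
    if random.random() < 0.5:
        out = out[:, ::-1]
    # additive Gaussian noise at 1-3% of the intensity range
    out = out + np.random.normal(0.0, random.uniform(0.01, 0.03) * np.ptp(out), out.shape)
    return out

print(augment(np.random.rand(480, 480)).shape)   # (224, 224)
```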
A. Detection results on 2D slice images
First, we present detection results on 2D slice images. Each slice image is considered as a sample. The total number of mitotic cells (2D slice images) is shown in Table I as ground truth, and precision and recall are used as quantitative measures. For evaluation, we compare the precision and recall of our method with SSD [7], FAST R-CNN [6], and 3D convolution FAST R-CNN, which is a modified version of
the original FAST R-CNN. All methods are pretrained on ImageNet, except 3D FAST R-CNN. The detection results for 2D slice images using CasDetNet are shown in Table I. The number of true-positive ROIs in all data is largely the same as the number of ground-truth ROIs, except for Data No. 5, whose mitotic cells were difficult to detect because they occur at the edge of the image. The detection results for 2D slice images obtained using our proposed method and SSD are shown in Fig.4. It is evident that our method can detect mitotic cells correctly.

Table III. Detection results on 4D data (TP/FN/FP per dataset):

Data | Sugano [12] (TP/FN/FP) | Our method (TP/FN/FP) | Ground truth
1    | 1/0/0                  | 1/0/0                  | 1
2    | 1/0/3                  | 1/0/0                  | 1
3    | 2/0/0                  | 2/0/0                  | 2
4    | 3/0/0                  | 3/0/0                  | 3
5    | 2/1/0                  | 1/2/0                  | 3
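As a worked example of the precision and recall figures of merit used in this section, applied to the Data No. 5 row of the table above (our method: TP = 1, FN = 2, FP = 0):

```python
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

tp, fn, fp = 1, 2, 0            # our method, Data No. 5
print(precision(tp, fp))        # 1.0   (no false positives)
print(recall(tp, fn))           # 0.333... (2 of the 3 mitoses missed)
```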
On the other hand, several false positives are detected by SSD (Fig.4(b)). Compared to the SSD result (Fig.4(b)), our proposed method (Fig.4(a)) can significantly reduce false positives. The quantitative comparisons are shown in Table II. Though both 2D FAST R-CNN and SSD achieve high recall, they also show low precision because several false positives are detected. Both precision and recall for 3D FAST R-CNN are lower because of overfitting. The 3D FAST R-CNN also has a high computation cost. If we only use the first 2.5D network, we can improve the precision compared to 2D FAST R-CNN and SSD because of the 2.5D information. However, it still produces a large number of false positives. We can significantly reduce these false positives by using the second network with temporal information. It should be noted that we do not compare our method with Anat's method [10] because it is a pixel-wise method and takes more time than 3D FAST R-CNN in both training and testing.
B. Detection results on 4D data
Our aim is to detect mitotic cells in 4D data. We combine our detection results on 2D slice images, as described in the previous sub-section, to obtain the final results and compare them with those of the winner of the TC-IAIP AIA2017 contest [11]. The detection results (TP, FN, FP) on the 4D data are summarized in Table III. Except for Data No. 5, perfect detection is achieved without any FP or FN. For Data No. 5, two mitotic cells are not detected, for the reason described in the previous sub-section. The Sugano method [12], the winner of the TC-IAIP AIA2017 contest, can also properly detect mitotic cells. However, it produces 3 FP for Data No. 2.
IV. CONCLUSION
We have proposed a 2.5D cascaded convolutional neural network for automatic detection of mitotic cells in 4D images (x, y, z, and time). The proposed network consists of two networks: the first is a modified 2.5D Fast R-CNN for detecting candidate cells, and the second reduces false positives using temporal information. The results demonstrated that our proposed method is more accurate than other established methods such as Fast R-CNN, SSD, and the TC-IAIP AIA2017 contest winner's method. | 2018-06-21T13:03:54.577Z | 2018-06-04T00:00:00.000 | {
"year": 2018,
"sha1": "fcac84d96a62b73a6ab6b7eb4596e08941de9c1b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1806.01018",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "539b15d7b9bafa5763192b7f1312cf298cae3478",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
17697557 | pes2o/s2orc | v3-fos-license | Pathophysiology of Small-Fiber Sensory System in Parkinson's Disease
Abstract Sensory symptoms are frequent nonmotor complaints in patients with Parkinson's disease (PD). However, few investigations integrally explored the physiology and pathology of the thermonociceptive pathway in PD. We aim to investigate the involvement of the thermonociceptive pathway in PD. Twenty-eight PD patients (16 men, with a mean age and standard deviation of 65.6 ± 10.7 years) free of neuropathic symptoms and systemic disorders were recruited for the study and compared to 23 age- and gender-matched control subjects (12 men, with a mean age and standard deviation of 65.1 ± 9.9 years). We performed skin biopsy, contact heat-evoked potential (CHEP), and quantitative sensory tests (QST) to study the involvement of the thermonociceptive pathway in PD. The duration of PD was 7.1 ± 3.2 (range 2–17 years) years and the UPDRS part III score was 25.6 ± 9.7 (range 10–48) during the off period. Compared to control subjects, PD patients had reduced intra-epidermal nerve fiber (IENF) density (2.48 ± 1.65 vs 6.36 ± 3.19 fibers/mm, P < 0.001) and CHEP amplitude (18.02 ± 10.23 vs 33.28 ± 10.48 μV, P < 0.001). Twenty-three patients (82.1%) had abnormal IENF densities and 18 (64.3%) had abnormal CHEP. Nine patients (32.1%) had abnormal thermal thresholds in the feet. In total 27 patients (96.4%) had at least 1 abnormality in IENF, CHEP, or thermal thresholds of the foot, indicating dysfunctions in the small-fiber nerve system. In control subjects, CHEP amplitude linearly correlated with IENF density (P < 0.001). In contrast, this relationship disappeared in PD (P = 0.312) and CHEP amplitude was negatively correlated with motor severity of PD independent of age, gender, and anti-PD medication dose (P = 0.036), suggesting the influences of central components on thermonociceptive systems in addition to peripheral small-fiber nerves in PD. The present study suggested impairment of small-fiber sensory system at both peripheral and central levels is an intrinsic feature of PD, and skin biopsy, CHEP, and QST provided an integral approach for assessing such dysfunctions.
INTRODUCTION

Parkinson's disease (PD) is a multidimensional neurodegenerative disorder with both motor and nonmotor symptoms. 1 Sensory complaints, especially pain, are among the most common nonmotor symptoms, and more than one-quarter of PD patients experience 'primary pain' related to dysfunction of the nociceptive system in the early stages of PD, when motor symptoms are not prominent. 2,3 However, nerve conduction studies (NCS) are usually normal in PD patients, except for large-fiber neuropathy related to long-term exposure to levodopa with increased homocysteine levels and vitamin B12 deficiency. 4 Taken together, symptoms of pain at the early or premotor stage of PD raise the possibility of small-fiber sensory dysfunction in either the peripheral or central compartments of the nociceptive pathways during the early course of PD. 5 Previous studies have suggested the involvement of small-fiber pathology in the neurodegenerative process of PD. Skin biopsies from PD patients have revealed peripheral deafferentation and deposition of phosphorylated α-synuclein in cutaneous sensory and autonomic nerves. [6][7][8][9] Heat pain thresholds and the amplitudes of laser-evoked potentials are reduced in PD patients compared to control subjects, [10][11][12][13] implying that central nociceptive processing is altered. However, a thorough and comprehensive analysis of the involvement of nociceptive system dysregulation in PD using integrated strategies that encompass psychophysical, pathological, and physiological examinations is lacking. Furthermore, the clinical significance of these alterations in relation to the motor progression of PD remains unknown.
Contact heat-evoked potential (CHEP), 11 mediated by Aδ fibers, is a recent, straightforward, noninvasive approach to examine nociceptive dysfunction as a physiological counterpart of skin biopsy. 14,15 In small-fiber neuropathy, characteristic CHEP signatures, including prolonged latencies and reduced amplitudes, parallel the degree of skin nerve degeneration. 16,17 In addition, CHEP parameters also reflect maladaptive plastic changes in the brains of patients with chronic pain. 18 We hypothesized that components of the thermonociceptive system are involved in the degenerative process of PD. We applied 3 different methods, including skin biopsy, CHEP, and quantitative sensory testing (QST), to investigate the physiology and pathology of thermonociceptive dysfunction and their clinical significance in PD.
Participants
Twenty-eight patients with idiopathic PD who received regular follow-up at the movement disorder clinic of National Taiwan University Hospital (NTUH) were enrolled in the study. The inclusion criteria were fulfillment of the diagnosis of PD based on the United Kingdom PD Society Brain Bank clinical diagnostic criteria 19 and an absence of sustained sensory symptoms, neuropathic pain, or weakness in the limbs. We excluded patients who had systemic medical illnesses, such as diabetes mellitus, chronic liver and kidney diseases, malignancy, endocrine diseases, autoimmune diseases, alcoholism, toxic exposure, or a family history of neuropathy. None of the patients had clinically relevant signs of autonomic dysfunction, and mutations in Parkin, PINK1, LRRK2, SCA2, and SCA3 had previously been excluded. 20,21 All patients were treated with L-dopa, alone or in combination with dopamine agonists, and had good clinical responses. L-dopa equivalents were calculated according to Möller et al. 22 We defined the daily L-dopa dose as the average daily dosage of L-dopa in the 6 months before entering the study. We also collected the total cumulative dose of L-dopa equivalents in the year prior to enrollment. Each patient was examined using the motor subscale of the Unified Parkinson's Disease Rating Scale (UPDRS part III) and Hoehn-and-Yahr staging. Patients maintained their regular anti-PD medications and were examined while in the on status. Twenty-three age- and gender-matched healthy subjects were enrolled for direct comparisons of the results of skin biopsy and CHEP.
Our study was approved by the Ethics Committee of NTUH (201106076RB) and followed the Helsinki Declaration regarding international clinical research involving humans. Written informed consent was obtained from each participant before enrollment in the study.
Skin Biopsy, Immunohistochemical Staining, and Quantification of Epidermal Innervation
A 3-mm skin punch biopsy was taken from the leg 10 cm proximal to the lateral malleolus on the side with more severe motor symptoms. The sampled skin tissue was fixed with 2% paraformaldehyde-lysine-periodate in 0.1 M phosphate buffer overnight. The skin sample was cut perpendicularly into 50-μm-thick sections. Skin sections were immunostained with rabbit antibody to protein gene product 9.5 (PGP 9.5, 1:1000; UltraClone, Isle of Wight, UK) for 16 to 24 hours and were further incubated with biotinylated goat anti-rabbit antibody (Vector, Burlingame, CA) for 1 hour. The avidin-biotin complex was sequentially applied for another hour and the reaction product was demonstrated using chromogen SG (Vector Laboratories).
Epidermal innervation was counted through the depth of the entire skin section following an established rule by a trained examiner blinded to the clinical information, using an Olympus BX40 microscope (Tokyo, Japan) at 40× magnification. The epidermal length along the upper margin of the stratum corneum in each skin section was measured with ImageJ version 1.43 (Image Processing and Analysis in Java, National Institutes of Health, Bethesda, MD). The density of IENF was thereby calculated and expressed as the number of nerve fibers per millimeter of epidermal length. In our laboratory, the normative values (mean ± SD and 5th percentile) of IENF densities at the distal leg are 11.2 ± 3.7 and 5.9 fibers/mm for subjects aged <60 years and 7.6 ± 3.1 and 2.5 fibers/mm for subjects aged ≥60 years. An IENF density lower than 5.9 or 2.5 fibers/mm was classified as abnormal in these 2 age groups, respectively. 23
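A minimal Python sketch of the density computation and the age-stratified abnormality call described above; the cut-offs 5.9 and 2.5 fibers/mm are the laboratory 5th percentiles quoted in the text.

```python
def ienf_density(fiber_count: int, epidermal_length_mm: float) -> float:
    """IENF density = number of intra-epidermal fibers per mm of epidermis."""
    return fiber_count / epidermal_length_mm

def is_abnormal(density: float, age: int) -> bool:
    """Compare against the age-stratified 5th-percentile cut-off."""
    cutoff = 5.9 if age < 60 else 2.5
    return density < cutoff

d = ienf_density(fiber_count=7, epidermal_length_mm=3.1)
print(round(d, 2), is_abnormal(d, age=66))   # 2.26 True (below the >=60 cut-off)
```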
Records of Contact Heat Evoked Potential
A contact heat-evoked potential stimulator (Medoc, Ramat Yishai, Israel) was used to deliver the heat stimuli. 14 The diameter of the circular thermode is 27 mm; the heating rate is 70 °C/s; and the cooling rate is 40 °C/s. Cooling begins immediately after the thermode reaches its target stimulus temperature based on the default algorithms. CHEP was recorded while subjects sat on a chair, with their eyes closed and muscles relaxed, in a semi-dark room with the room temperature controlled at 25 °C. The heat stimulus was applied to the hairy skin of the lateral leg around 10 cm proximal to the lateral malleolus. The skin area was divided into 6 adjacent non-overlapping districts. The thermode was moved clockwise or counter-clockwise across these sites. The heat pulse was delivered from 32 to 51 °C. The interstimulus interval was randomly set at around 20 to 22 s. CHEP was recorded using a NeuroScan SynAmps 64-channel amplifier and Quik-Caps system (NeuroScan, El Paso, TX). The recording electrode was set at Cz. The references were set at the bilateral mastoids. The ground electrode was positioned between FPz and Fz. To control artifacts, we monitored the electrooculogram from supra- and infra-orbital electrodes. The impedance of all electrodes was kept below 5 kΩ and the online EEG was sampled at 1000 Hz with a bandpass filter at 0 to 300 Hz. Before the formal recording of CHEP, we delivered several heat stimuli to subjects to avoid startle responses. During the study, subjects were asked to pay consistent attention to the stimulus and verbally rated the intensity of pain perception for 3 s after each stimulus using a verbal rating scale (VRS; 0-10), in which '0' means no sensation, '4' represents the pain threshold, and '10' corresponds to intolerable pain. We recorded CHEP from each leg separately and the order of sides was balanced among the subjects.
Analysis of CHEP
The offline processing of CHEP waveforms was based on Scan 4.5 software (NeuroScan, El Paso, TX). The cleaned EEG signals were cut into epochs from 500 ms before to 1500 ms after the stimulus onset (0 ms). Each epoch was baseline-corrected from −500 to 0 ms and filtered with a bandpass filter at 0.1 to 30 Hz. Epochs with major artifacts were excluded. CHEP waveforms were analyzed based on an average of the first 16 artifact-free epochs from the leg of the more affected side. In normal subjects, CHEP at Cz consists of an initial negative wave (N2-wave) followed by a positive wave (P2-wave), as described previously. 17 We measured the N2-peak latency as the CHEP latency and the amplitude between the N2-peak and P2-peak as the CHEP amplitude. If both the N2-wave and P2-wave were absent, the CHEP amplitude was defined as the mean amplitude of the average tracing in the time window between 300 and 800 ms. If only the N2-wave or P2-wave was absent, the CHEP amplitude was the amplitude of the existing P2-wave or N2-wave. In our laboratory, the normative values from 72 healthy controls ≥40 years of age were 471.1 ± 42.2 and 534.5 ms for CHEP latency and 38.1 ± 9.1 and 21.4 μV for CHEP amplitude (mean ± SD and 5th percentile).
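A hedged numpy sketch of the amplitude measurement described above: average the first 16 artifact-free epochs and take the most negative (N2) and most positive (P2) deflections in the 300-800 ms post-stimulus window. Using a single min/max window is a simplification of proper peak picking; reusing the 300-800 ms interval for peak search is our assumption.

```python
import numpy as np

FS = 1000  # Hz, sampling rate as in the recording

def n2_p2_amplitude(epochs: np.ndarray) -> float:
    """epochs: (n_epochs, n_samples), time-locked and baseline-corrected;
    sample 0 corresponds to -500 ms, so stimulus onset is at sample 500."""
    avg = epochs[:16].mean(axis=0)          # average of the first 16 epochs
    win = avg[500 + 300 : 500 + 800]        # 300-800 ms post-stimulus
    n2 = win.min()                          # N2: most negative deflection
    p2 = win.max()                          # P2: most positive deflection
    return p2 - n2                          # peak-to-peak amplitude (uV)

demo = np.random.randn(20, 2000) * 2        # synthetic 2-second epochs
print(n2_p2_amplitude(demo))
```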
Quantitative Sensory Testing
QST was performed according to the algorithm of levels in the lower limb of the more severely affected side using a Thermal Sensory Analyzer (Medoc, Ramat Yishai, Israel), as described before. 24 Thermal thresholds recorded on the foot dorsum were expressed as the warm threshold temperature and the cold threshold temperature. Vibratory thresholds were measured with similar algorithms and expressed in micrometers. These values were compared with previously documented age-specific normative values. 24

Nerve Conduction Study

NCS was performed on the bilateral sural, peroneal, and tibial nerves with a Viking Select electromyographer (Nicolet, MD) following established methods. 25 Abnormal results in NCS were defined as a reduced amplitude of the compound motor action potential (CMAP) or sensory action potential (SAP), prolonged distal latencies, or slowing of the nerve conduction velocity. 25
Statistical Analyses
Data were interpreted according to our previously described normative databases. The results of skin biopsy and CHEP for the PD patients were further compared to those of age- and gender-matched subjects. Numerical variables were expressed as the mean ± standard deviation. For variables following a Gaussian distribution, data were compared using a 2-tailed t test. Regression analysis was performed to evaluate the correlations between variables, and each correlation was expressed graphically with the slope of the regression line and its 95% confidence interval (CI). Correlations were further explored with multivariate analysis, and the covariance of the model (R²) and the standardized correlation coefficient were presented. We performed all analyses with Stata (StataCorp LP, College Station) and Prism (GraphPad Software, San Diego) software. A P value < 0.05 was considered significant.
Clinical Characteristics
The mean age of the 28 PD patients was 65.6 ± 10.7 (range 44-88) years and the mean disease duration was 7.1 ± 3.2 (2-17) years. Sixteen of the patients were men. The detailed clinical features of the PD patients enrolled in this study are summarized in Table 1. The mean UPDRS part III score was 25.6 ± 9.7 (10-48) during the off period. The mean L-dopa equivalent dose was 601.8 ± 274.7 (150-1200) mg/day and the total cumulative L-dopa equivalent dose in the year before enrollment was 189.2 ± 82.5 (54-360) g/year. The results of neurological examinations were unremarkable in 21 patients (75.0%), and a mild but symmetrical decrease in the deep tendon reflexes of the lower limbs was observed in 7 patients (25.0%). All of these patients had full muscle power and reported no subjective sensory impairment.
The independent control group for skin biopsy and CHEP comparisons comprised 23 age- and gender-matched healthy subjects, including 12 men. The mean age was 65.1 ± 9.9 years (P = 0.73 for gender and P = 0.86 for age).
Nerve Conduction Study and Quantitative Sensory Testing
For large-fiber nerve functions according to NCS of the lower limbs, the results were within normal limits in 24 patients (85.7%). The other 4 patients had normal sural nerve studies but mildly reduced amplitudes of compound muscle action potentials in peroneal or tibial nerves. For the psychophysical parameters of both small-fiber and large-fiber nerves assessed by QST, 9 patients (32.1%) had abnormal thermal thresholds in the foot (all 9 with elevated warm thresholds and 1 with elevated cold thresholds) and 16 patients (57.1%) had elevated vibratory thresholds in the lateral malleolus.
Skin Biopsy
Skin biopsy was performed to investigate the pathology of small-diameter sensory nerves. In control subjects, intra-epidermal nerve fibers (IENFs) with a typical varicose appearance arose from the subepidermal nerve plexuses, and dermal nerve fibers exhibited dense and linear immunoreactivities (Figure 1A). In contrast, IENFs were markedly reduced and dermal nerve fibers fragmented in PD patients (Figure 1B), consistent with nerve degeneration. IENF densities were significantly lower in PD patients compared to control subjects (2.48 ± 1.65 vs 6.36 ± 3.19 fibers/mm, P < 0.001, Figure 1C). Among the 28 PD patients, 23 (82.1%, Table 1) had IENF densities lower than the 5th percentile of the normative data: 6 cases (6/6) with Hoehn-and-Yahr stage 1 in the off state, 10 (10/11) with stage 2, 6 (6/8) with stage 3, 1 (1/2) with stage 4, and none (0/1) with stage 5.

Figure 2A shows the grand average of CHEPs from PD patients and age- and gender-matched control subjects. In controls, CHEPs had well-defined biphasic waveforms with an initial negative peak (N2-wave) followed by a positive peak (P2-wave) (Figure 2A). The CHEP amplitude was attenuated in PD patients (Figure 2B). Furthermore, the CHEP amplitude was significantly reduced in PD patients compared to control subjects (18.02 ± 10.23 vs 33.28 ± 10.48 μV, P < 0.001, Figure 2C), and 18 patients (64.3%) had abnormal CHEP parameters. There was no difference in the N2-wave latencies between PD cases and controls (P = 0.41), nor any difference in CHEP amplitude between the side with more severe motor symptoms and the side with less involvement (18.02 ± 10.23 vs 16.36 ± 11.31 μV, P = 0.16). Upon contact heat stimulation, the mean intensity of pain perception on the verbal rating scale (VRS) was 4.4 ± 1.7, which is in the range of mild to moderate pain.
Correlation Between Clinical and Thermonociceptive Parameters
All patients except 1 (96.4%) had at least 1 abnormality in IENF density, CHEP amplitude, or thermal thresholds in the foot, providing quantitative evidence of small-fiber physiology and pathology. First, we investigated the relationship between skin innervation, CHEP amplitude, and the thermal threshold in the foot, and analyzed the effects of age, gender, disease duration, and anti-PD drugs on these parameters using univariate linear regression. In PD, there was no relationship between CHEP amplitude and IENF density (P = 0.312, Figure 3A), although CHEP amplitude and IENF density both negatively correlated with age (r = −0.414, P = 0.021 and r = −0.079, P = 0.005, respectively). The warm threshold in the foot positively correlated with the daily dose and yearly cumulative dose of L-dopa equivalents (r = 0.004 and 0.012, P = 0.019 and 0.030, respectively). These findings contrasted with the normal control group, in which the CHEP amplitude positively correlated with IENF density (r = 2.698, P < 0.001, Figure 3B), in addition to the correlation of CHEP amplitude and IENF density with age (r = −0.749, P < 0.001 and r = −0.233, P < 0.001, respectively). When simultaneously considering the effects of age and gender in the multivariate linear regression model, CHEP amplitude significantly correlated with IENF density only in the control group (r = 1.637, P = 0.011 for IENF density; R² = 0.76 and P < 0.001 for the model).
Next, we examined the relationship between small-fiber sensory dysfunction and the motor symptoms of PD. The CHEP amplitude negatively correlated with the Hoehn-and-Yahr stages and UPDRS part III scores in both the on and off states (Figure 4A-D), whereas the warm threshold in the foot positively correlated with Hoehn-and-Yahr stages and UPDRS part III scores in both the on and off states. There was no correlation between IENF density and PD motor severity. In multivariate linear regression analysis incorporating age, gender, daily L-dopa equivalent dose, disease duration of PD, and CHEP amplitude or warm threshold in the foot as independent variables, CHEP amplitude still negatively correlated with Hoehn-and-Yahr stages in the off state (r = −0.032, P = 0.036 and R² = 0.69, P < 0.001 for the model).
To further explore whether peripheral or central parameters were associated with CHEP in PD, a multivariate regression model was applied with CHEP amplitude as the dependent variable, and age, gender, IENF density, and motor scores, that is, Hoehn-and-Yahr stage, as independent variables. In this model, IENF density represented peripheral deafferentation and Hoehn-and-Yahr stage served as a surrogate marker of the central effects of PD. Hoehn-and-Yahr stage in the off state was the only factor correlated with CHEP amplitude (r = −4.26, P = 0.031 and R² = 0.41, P = 0.013 for the model) (Table 2).
DISCUSSION
In the present study, we reported the integrated data of CHEP, skin biopsy, and QST in PD patients who were free of neuropathic sensory symptoms and systemic diseases. Almost all enrolled PD patients (96.4%) had quantitative evidence of dysfunction in the thermonociceptive system based on reduced IENF density, reduced CHEP amplitude, and elevated warm threshold in the foot. Furthermore, CHEP amplitude and warm threshold in the foot correlated with the severity of PD motor dysfunction, and CHEP amplitude reflected motor severity independent of age, gender, and the dose of anti-PD medications.
We observed a high prevalence of small-fiber sensory dysfunction in PD based on combined neurophysiological, pathological, and psychophysical assessments. This finding provides objective evidence and explanations for previous epidemiological studies indicating a high prevalence of pain in PD patients. 26,27 The exclusion of systemic diseases, toxin exposure, and organic brain lesions in the present study ensured that the small-fiber physiology and pathology were intrinsic to the disease course of PD. Although recent studies have emphasized the relationship between the use of levodopa and peripheral neuropathy, 4,28 the absence of a correlation between L-dopa equivalent doses and the IENF density or CHEP parameters, as well as the preservation of large-fiber physiology in our PD patients, suggest that the small-fiber nerve degeneration and thermonociceptive pathophysiology were independent of the use of anti-PD medications and an integral part of the neurodegeneration that occurs in PD. Postmortem pathological studies have revealed progressive pathological changes in the thermonociceptive pathway at multiple levels in PD. 26 Progressive involvement of Lewy body deposition in PD brains could affect the structures of the medial and lateral pain processing systems, that is, the locus coeruleus, nucleus raphe magnus, and gigantocellular reticular nucleus, which are key areas in the descending regulation of pain, 29 as well as the anterior cingulate cortex, amygdala, and prefrontal cortex, which are associated with motivational and emotional aspects of pain. 27 In addition to cutaneous denervation being pathological evidence of primary thermonociceptive nerve involvement, 8,9 recent studies in PD have demonstrated the involvement of medium-sized multipolar projecting neurons at lamina I of the spinal cord, the initiating component of the spinothalamic tract. 30,31 According to Braak's pathology staging, many of these structures are affected in the early course of PD, even before obvious involvement of motor function. 29,31 All of these pathological changes may contribute to abnormalities in skin innervation, CHEP amplitude, and thermal thresholds in PD patients.
This study demonstrates the feasibility of the CHEP as a noninvasive clinical examination for assessing thermonociceptive dysfunction in PD. The CHEP amplitude was reduced in PD patients compared to controls, and 64.3% of patients had reduced CHEP amplitudes compared to normative values from healthy subjects of similar age and gender. We found no difference in CHEPs between the sides regarding the severity of motor dysfunction. These findings extend a previous study that reported reduced N2-P2 amplitude of laser-evoked potentials in PD regardless of the clinically affected side. 12 Given that CHEP records brain activities after heat pain stimuli on the skin as conveyed by the thermonociceptive pathway, CHEP could detect dysfunction in this pathway at either the peripheral or the central level. Previous studies have shown a correlation between reduced CHEP amplitude and decreased IENF density in diseases of pure peripheral nerves, that is, small-fiber neuropathy of different etiologies. 17,32 In the present study, CHEP amplitude was correlated with IENF density only in the control group. Despite significant reductions of IENF density and CHEP amplitude in PD patients, the relationship between CHEP amplitude and IENF density was altered; that is, CHEP amplitude was instead associated with Hoehn-and-Yahr stage in PD.
These observations suggest that the change in CHEP amplitude was affected not only by peripheral deafferentation but also by central components in PD, that is, the striatum. 29,30,33 Recently, imaging studies have demonstrated striatal activations by heat pain stimuli in healthy subjects and neuropathic patients, implying an unexplored role of the striatum in nociceptive processing. 33,34 Taken together, the present report might provide evidence of the central effects of PD contributing to the alterations of CHEP amplitude.
[Figure 3 caption: (A) No relationship was found between the N2-P2 amplitude of CHEP evoked by stimulation of the leg and the intraepidermal nerve fiber density in the leg of patients with Parkinson's disease. (B) In the age- and gender-matched controls, the N2-P2 amplitude of CHEP evoked by stimulation of the leg highly correlated with the intraepidermal nerve fiber density of the leg (P < 0.001). CHEP = contact heat evoked potential.]
In this study, we also observed correlations of CHEP amplitude and the warm threshold in the foot with motor severity in PD as measured by Hoehn-and-Yahr stages and UPDRS part III scores. Even after considering the confounding effects of age, gender, anti-PD medication dose, and disease duration of PD, the CHEP amplitude still highly correlated with Hoehn-and-Yahr stages. Consistent with previous literature, our study showed no correlation between cutaneous sensory innervation and motor dysfunction in PD patients, 9 suggesting that separate pathological processes underlie the central motor system and peripheral thermonociceptive nerves in PD. 35 Unlike IENF density, CHEP represents integrated brain responses to heat-pain stimuli, including the activation of somatosensory areas, insula, and cingulate cortex, which are further influenced by deep structures of the thalamus, basal ganglia, amygdala, and brainstem. 33,36,37 Many of these structures become progressively involved in PD; thus, the changes in CHEP amplitude might potentially reflect the motor progression of PD. 29,38 A recent study investigating laser-evoked potentials in PD patients with shoulder muscular pain did not show a relationship between N2-P2 amplitude and motor symptom severity. 10 Possible reasons for the discrepancy between that study and ours include differences in motor status during recordings, sensory symptoms, the heat pain stimuli used, the location at which the stimulus was applied, and disease duration. These issues await further studies for clarification. In addition to CHEP, the warm threshold in the foot also correlated with motor severity in our PD patients. Similar to CHEP, thermal thresholds assess an integral outcome of thermal sensation. Taken together with recent studies showing an improvement in temperature sensation after deep brain stimulation of the subthalamic nucleus in PD, 39,40 these observations suggest that dysfunction in the thermonociceptive networks might be associated with impairment of the cortical-striatal-nigral dopaminergic circuits.
There are several limitations in this study. For the studied PD patients, we excluded patients with sustained sensory symptoms, neuropathic pain, or weakness in the limbs to avoid confounding etiologies causing neuropathic disorders or pain. The study therefore did not address the relationships between clinical sensory symptoms and the assessed thermonociceptive dysfunction. Another limitation is the small sample size of PD patients, which prevented further stratification of patients by motor severity.
Finally, we did not check the serum level of vitamin B12 in these PD patients, which might underestimate the effects of levodopa use on the risk of neuropathy. However, this limitation would not significantly impact our findings, since most reported neuropathy related to levodopa use was mainly large-fiber neuropathy, 4 and we had adjusted for the cumulative dosage of levodopa in our analysis. Nevertheless, the present study documents that the small-fiber physiology and pathology of the thermonociceptive pathway at both the peripheral and central levels are intrinsic features of PD. Skin biopsy, CHEP, and QST provide an integral and sensitive approach for assessing such dysfunctions. | 2018-04-03T05:19:36.382Z | 2016-03-01T00:00:00.000 | {
"year": 2016,
"sha1": "eb3b43446d2146c3d04a3f48d374e4a78d27f424",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/md.0000000000003058",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "eb3b43446d2146c3d04a3f48d374e4a78d27f424",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
163501 | pes2o/s2orc | v3-fos-license | Nucleotides Regulate Secretion of the Inflammatory Chemokine CCL2 from Human Macrophages and Monocytes
CCL2 is an important inflammatory chemokine involved in monocyte recruitment to inflamed tissues. The extracellular nucleotide signalling molecules UTP and ATP acting via the P2Y2 receptor are known to induce CCL2 secretion in macrophages. We confirmed this in the human THP-1 monocytic cell line showing that UTP is as efficient as LPS at inducing CCL2 at early time points (2–6 hours). Expression and calcium mobilisation experiments confirmed the presence of functional P2Y2 receptors on THP-1 cells. UTP stimulation of human peripheral CD14+ monocytes showed low responses to LPS (4-hour stimulation) but a significant increase above background following 6 hours of treatment. The response to UTP in human monocytes was variable and required stimulation >6 hours. With such variability in response we looked for single nucleotide polymorphisms in P2RY2 that could affect the functional response. Sequencing of P2RY2 from THP-1 cells revealed the presence of a single nucleotide polymorphism altering amino acid 312 from arginine to serine (rs3741156). This polymorphism is relatively common at a frequency of 0.276 (n = 404 subjects). Finally, we investigated CCL2 secretion in response to LPS or UTP in human macrophages expressing 312Arg-P2Y2 or 312Ser-P2Y2 where only the latter exhibited significant UTP-induced CCL2 secretion (n = 5 donors per group).
Introduction
The CCL2/CCR2-mediated recruitment of monocytes is necessary for fighting infections by microorganisms [1]. This chemokine signalling axis has also been implicated in a number of inflammatory disorders where monocyte infiltration is a key factor, such as atherosclerosis, multiple sclerosis, and rheumatoid arthritis [2]. Understanding the cellular regulation of this important chemokine is therefore critical for understanding some of the early pathophysiology of inflammatory disorders.
P2Y receptors are members of the metabotropic family of purinergic receptors belonging to the larger family of G-protein coupled receptors. Eight subtypes of P2Y have been identified to date (P2Y1, P2Y2, P2Y4, P2Y6, P2Y11, P2Y12, P2Y13, and P2Y14), with differences in both pharmacology and downstream signalling pathways [3,4]. P2Y2 has a widespread distribution in the body, including expression on glial cells, some neurons, endothelial cells, epithelial cells of many tissues, and myeloid immune cells including monocytes, macrophages, and dendritic cells [5][6][7][8][9][10]. Many studies have demonstrated that activation of P2Y2 induces a transient calcium response [11,12], but much less is known about the regulation of chemokine or cytokine production. Our previous work was the first to demonstrate a role for the P2Y2 receptor in regulation of CCL2 secretion from alveolar and peritoneal macrophages [10]. P2Y2 and the UDP-responsive P2Y6 receptor can also signal other chemokine production, including CXCL8 (IL-8) and CCL20 (MIP-3α) [9,13,14]. Knockout mouse studies have demonstrated that P2Y2 plays an important role in defence against lung infection with Pseudomonas aeruginosa [15], yet it can also play a role in allergic lung inflammation in various models [16,17].
Extracellular nucleotide induction of CCL2 in cells expressing P2Y2 could be an important trigger for the initial recruitment of monocytes to an inflammatory site. Many inducers have been identified for CCL2, including lipopolysaccharide (LPS), growth factors such as platelet-derived growth factor (PDGF), and cytokines such as tumour necrosis factor-alpha (TNF-α), reviewed in [18]. Following tissue/cellular damage or regulated nucleotide release from cells or nerve terminals, the activation of P2Y2 by extracellular nucleotides could switch on rapid production of CCL2. Whilst this may be beneficial for initiating repair at an injured site, uncontrolled or chronic nucleotide release may be detrimental and cause excessive tissue inflammation. Both ATP and UTP are known danger signals and act as immunomodulatory signals to communicate alarm messages to immune cells.
The aim of this study was to investigate extracellular nucleotide-induced chemokine production in human macrophages and monocytes with an emphasis on CCL2. Firstly we wanted to confirm our earlier findings in rodent alveolar macrophages using a human monocyte/macrophage cell line and secondly we wanted to compare P2Y2 induced CCL2 secretion with that of LPS, a known bacterial inducer of CCL2. Finally we wanted to perform a pilot study to determine whether we could measure nucleotide-induced CCL2 secretion from primary human cells and assess the responses to both P2Y2 agonists and LPS.
Cell Culture
The THP-1 monocytic cell line was maintained in RPMI 1640 media (Life Technologies) containing 10% foetal bovine serum (US origin, Lonza), 2 mM L-glutamine, 100 U/mL penicillin, and 100 μg/mL streptomycin (all Life Technologies) and grown under humidified conditions in a 5% CO₂ incubator. Cells were routinely passaged every 2-3 days. For differentiation to macrophages, THP-1 cells were plated at 1 × 10⁶ cells/well in 24-well plates in 500 μL complete medium and stimulated with 1000 U/mL IFN-γ and 100 ng/mL LPS for 48 hours.
Human Monocyte Preparation.
Peripheral venous blood was collected in lithium heparin tubes (Becton Dickinson) from healthy volunteers with informed consent (study approved by Nepean Blue Mountains Local Health Service Human Ethics committee). Mononuclear cells were isolated by Ficoll-Paque (GE Healthcare) gradient centrifugation and monocytes were isolated using CD14 microbeads and LS columns on a midiMACS system (Miltenyi Biotech, Germany) as per manufacturer's instructions. The purity for CD14 monocyte isolations was routinely >90% by flow cytometry.
For macrophage experiments, peripheral blood mononuclear cells were plated in complete RPMI 1640 medium at a density of 2.5 × 10⁶ cells per well. Cells were incubated for 2 hours at 37 °C in order to adhere to plastic, nonadherent cells were removed, and adherent PBMCs were cultured overnight in 1 mL complete media, washed once the following day, and cultured for 6 more days.
Calcium Measurements
THP-1 cells were harvested from flasks, pelleted at 300 × g, and resuspended in Fluo-4 NW assay buffer (Life Technologies). THP-1 cells were plated at a density of 2 × 10⁵ cells/well into a 96-well plate coated with poly-D-lysine (Merck Millipore) and were loaded for 30 minutes at 37 °C. Human monocytes were plated at 2-4 × 10⁵ cells/well and prepared in the same way. Calcium measurements were performed using a Fluostar OPTIMA plate reader (BMG Labtech) with excitation at 485 nm and emission at 520 nm. All measurements were made at 37 °C using a gain setting of 40%.
2.5. ELISA Experiments. THP-1 cells were plated at 1 × 10⁶ cells/well in a 24-well plate in RPMI 1640 media containing 1% serum (0.5 mL per well). Stimulations were performed in duplicate. LPS (1 or 10 μg/mL) or nucleotides (varying concentrations) were added directly into the media for 2, 4, 6, or 24 hours. Following stimulation, media were removed from the wells into Eppendorf tubes and centrifuged to remove any contaminating cells. Supernatants were transferred to fresh tubes and frozen at −80 °C. Freshly isolated human peripheral blood CD14⁺ monocytes were plated at 5 × 10⁵ cells/well in RPMI media containing 1% serum. Stimulations were performed in duplicate/triplicate as for THP-1 cells and cell-free supernatants collected after 4 or 6 hours.
Ninety-six-well plates (NUNC) were coated with anti-MCP-1 capture antibody (clone 10F7, BD Biosciences) at a concentration of 2 μg/mL in sodium carbonate buffer pH 9.5. Samples and standards (recombinant human MCP-1) were diluted in media. Detection antibody, anti-MCP-1-biotin (clone 5D3-F7, BD Biosciences), was used at 0.5 μg/mL and followed by streptavidin-HRP at 1 μg/mL. TMB-Ultra (PerBioscience) was used for visualisation and 1 M H₂SO₄ as stop solution. Absorbance at 450 nm was read using a BMG Labtech Optima plate reader. Standard curves were fit with a regression factor of R² > 0.96.
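As an illustration of how such a standard curve might be fit and inverted, here is a hypothetical sketch using a four-parameter logistic (4PL) model; the paper only states that fits achieved R² > 0.96, so the exact curve model is an assumption, and the standard points below are invented.

```python
# Hypothetical ELISA standard-curve workflow: fit a 4PL to the recombinant
# MCP-1 standards, then invert it to read sample concentrations from A450.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Absorbance as a function of concentration x (pg/mL)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Placeholder standard points (pg/mL vs A450), not data from the study.
conc = np.array([15.6, 31.2, 62.5, 125, 250, 500, 1000])
a450 = np.array([0.05, 0.09, 0.17, 0.31, 0.55, 0.95, 1.55])

popt, _ = curve_fit(four_pl, conc, a450, p0=[0.02, 1.0, 250.0, 2.0], maxfev=10000)

def invert(y, a, b, c, d):
    """Solve the 4PL for concentration given an absorbance reading y."""
    return c * (((a - d) / (y - d)) - 1.0) ** (1.0 / b)

sample_a450 = 0.42                         # hypothetical supernatant reading
print("CCL2 ≈ %.0f pg/mL" % invert(sample_a450, *popt))
```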
2.6. Real-Time PCR. Cells were stimulated as described above, collected into RNA Protect reagent (Qiagen), and stored at −80 °C. Cells were then processed to extract total RNA using an RNeasy Mini Kit (Qiagen). RNA concentrations were measured using a spectrophotometer (Cary) by absorbance at 260 nm. 1 μg RNA was reverse transcribed using a Tetro cDNA synthesis kit (Bioline) as per manufacturer's instructions.
Primers for real-time PCR were optimised for concentration over the range 0.125 μM to 1 μM and primer efficiencies were determined; primer pairs included β-actin as the reference gene. Quantitative real-time PCR was performed using Sensimix SYBR green No ROX mastermix (Bioline) and 0.75 μM each primer on freshly prepared cDNA. PCRs were performed in triplicate using a Rotorgene 2000 (Corbett Research) and analysed using a threshold of 0.003 to determine Ct values. Data were analysed using the 2^−ΔΔCt method.
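For readers unfamiliar with the 2^−ΔΔCt method, the following worked sketch shows the calculation; the Ct values are illustrative placeholders, not measurements from the study.

```python
# Worked example of the 2^-ΔΔCt calculation used above.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene vs. beta-actin by 2^-ΔΔCt."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # ΔCt, stimulated cells
    d_ct_control = ct_target_control - ct_ref_control   # ΔCt, media control
    dd_ct = d_ct_treated - d_ct_control                 # ΔΔCt
    return 2.0 ** (-dd_ct)

# e.g. CCL2 after UTP: target Ct drops ~3.1 cycles relative to beta-actin
print(fold_change(22.4, 16.0, 25.5, 16.0))  # ≈ 8.6-fold upregulation
```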
Flow Cytometry.
One million THP-1 or peripheral blood mononuclear cells were stained per flow tube. Cells were fixed with 2% paraformaldehyde buffer for 20 minutes on ice and permeabilised with 0.1% saponin in PBS. Rabbit IgG was used as the negative control and anti-P2Y2 (Sigma) was used at 1:100 dilution. Primary antibodies were incubated for 30 minutes in PBS/0.1% saponin containing 5% human AB serum. Cells were co-stained with mouse anti-human CD14-FITC (1:100 dilution). PBMCs were washed with PBS/saponin and incubated with goat anti-rabbit IgG-Alexa 647 at 1:100 dilution for 30 minutes on ice. After final washing with PBS/saponin, cells were resuspended in PBS and 30,000 events were acquired on a BD FACSCalibur flow cytometer. Monocytes were identified by CD14 expression and plots of P2Y2-Alexa-647 staining were generated using Weasel flow cytometry software (WEHI).
Genotyping.
Genomic DNA was prepared from whole blood as previously described [19] and stored at −80 °C. The P2Y2 gene was amplified using 0.025 U/mL recombinant Taq DNA polymerase (Invitrogen), 1.5 mM MgCl₂, 100 μM dNTP, and 0.4 μM each of forward and reverse primers. Primer sequences were forward 5′-CTT TTG CCG TCA TCC TTG TCT-3′ and reverse 5′-CAT CTC GGG CAA AGC GTA-3′, yielding a product of 328 bp. The following cycling conditions were used: initial denaturation (95 °C for 3 minutes), followed by 40 cycles of denaturation (95 °C for 45 seconds), annealing (55 °C for 30 seconds), and extension (72 °C for 30 seconds) on a PTC-200 Peltier Thermal Cycler (MJ Research, Waltham, Massachusetts, USA). The samples were then cooled at 4 °C for 10 minutes.
A restriction assay was designed to determine the sequence at nucleotide 1269, G or C. Mutation from G>C introduces an extra cut-site for the restriction enzyme MwoI. P2Y2 PCR product was incubated with 3 U MwoI for 1 hour at 60 °C. Genomic DNA carrying G at nucleotide 1269 is cut at 2 positions, yielding 3 fragments of 195 bp, 130 bp, and 3 bp. Genomic DNA carrying C at nucleotide 1269 is cut at 3 positions, yielding 4 fragments of 181 bp, 130 bp, 14 bp, and 3 bp. The assay distinguishes between bands of 195 bp and 181 bp using a 3% agarose gel. Genotyping was verified using a commercial high-throughput assay for rs3741156 (AGRF, Brisbane).
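The genotype call from this digest reduces to checking which diagnostic bands are visible on the gel. A hypothetical sketch, assuming fragments under ~20 bp are not resolved on a 3% agarose gel:

```python
# Sketch of calling the rs3741156 genotype from the MwoI digest band pattern
# described above. Band sizes are taken from the assay design in the text.
def call_genotype(visible_bands):
    """Map visible gel bands (bp) to a G/C genotype at nucleotide 1269."""
    bands = set(visible_bands)
    has_g_band = 195 in bands           # G allele: 195 + 130 (+3) bp
    has_c_band = 181 in bands           # C allele: 181 + 130 (+14 + 3) bp
    if has_g_band and has_c_band:
        return "G/C (heterozygous, 312Arg/312Ser)"
    if has_g_band:
        return "G/G (312Arg homozygous)"
    if has_c_band:
        return "C/C (312Ser homozygous)"
    return "uncalled"

print(call_genotype([195, 130]))        # G/G
print(call_genotype([195, 181, 130]))   # G/C
```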
Statistical Analysis
Data plotted are means ± SEM of three to four experiments. Graphs and statistical analyses were performed using GraphPad Prism version 5 (GraphPad Software Inc., La Jolla, CA, USA).
P2Y2 Regulates CCL2 Production in THP-1 Monocytes.
It has been demonstrated by others that THP-1 cells express P2Y2 receptors [13,20]. We confirmed P2Y2 expression on THP-1 cells using flow cytometry (Figure 1(a)) and used intracellular calcium measurements as a functional readout of P2Y2 receptors in response to a range of P2Y2 nucleotide agonists: ATP, UTP, 2-thio-UTP, UTPγS, and UDP (Figures 1(b) and 1(c)). A concentration-response curve was generated for UTP-induced calcium responses on THP-1 cells (Figure 1(d)). Calcium responses were reduced in the presence of suramin (100 μM), a broad P2 receptor antagonist known to block P2Y2 responses (Figure 1(b)).
We then stimulated THP-1 cells with the nucleotides UTP and ATPγS to stimulate P2Y2, or with LPS (1-10 μg/mL), to induce CCL2 chemokine production and secretion. We determined the amount of CCL2 secreted at three separate timepoints: 2 hours, 6 hours, and 24 hours (Figure 2). We found that UTP and ATPγS were as effective as LPS at stimulating CCL2 secretion after 2 hours. After 6 hours the UTP- or ATPγS-induced CCL2 secretion remained elevated above basal and again was not significantly different from LPS. However, after 24 hours of stimulation there was a further increase in LPS-induced CCL2 secretion, while the UTP- or ATPγS-induced CCL2 secretion remained low (Figure 2). We chose the 2-hour timepoint to further investigate nucleotide-induced CCL2 secretion. A range of different nucleotide agonists were tested and many increased CCL2 levels above background (Figure 3). However, we found that only UTP or LPS treatments gave a significant difference compared to basal (one-way ANOVA with Dunnett's post hoc test, n = 3-4 experiments). The UTP signal was not blocked by suramin, a feature also observed when concentrations of UTP higher than 100 nM were used in calcium responses in THP-1 cells (data not shown). Using the THP-1 cell line we examined CCL2 gene induction by UTP in comparison to LPS. We used quantitative real-time PCR and measured CCL2 relative to β-actin as the reference gene at 2 hours following stimulation with LPS, UTP, UDP, or media alone. We found that UTP induced an 8.8-fold increase in CCL2 expression compared to an 11-fold upregulation of CCL2 in response to LPS (Figure 4). In contrast, UDP did not induce a significant upregulation of CCL2 levels (Figure 4). As expected, LPS induced the other chemokines CCL20 (459-fold above media control) and CXCL8 (1121-fold above media control) in THP-1 cells, whereas UTP treatment induced a 4.1-fold upregulation of CCL20 and a 2.3-fold upregulation of CCL3 with respect to unstimulated cells. UTP induced some upregulation of CXCL8 expression (61-fold) but this was only 5% of the LPS response (1121-fold).
CCL2 Production in CD14⁺ Primary Human Monocytes.
Following our observations that UTP could induce a similar level of CCL2 secretion as LPS in the THP-1 monocytic cell line, we wanted to address whether this was also a feature of primary human monocytes. We probed CD14-positive PBMCs for P2Y2 expression and found significant labelling relative to a nonspecific IgG control (Figure 5(a)).
To assess the functionality of P2Y2 receptors on magnetically isolated CD14-positive monocytes, we performed calcium measurements on Fluo-4 AM-loaded cells. UTP at both 10 μM and 100 μM induced large transient calcium responses in primary monocytes (Figure 5(b)). We stimulated human CD14⁺ monocytes with media, UTP (10 μM), or LPS (100 ng/mL) for 4 hours (n = 11 different donors). Analysing the data collectively, we found a small increase in CCL2 in response to LPS (mean 376 ± 88 pg/mL) above background CCL2 secretion in media alone (mean 223 ± 56 pg/mL). There was no difference above background with 10 μM UTP treatment (mean 215 ± 51 pg/mL) (Figure 5(c)). Examining each individual donor, in 4 out of 11 subjects UTP increased CCL2 levels above media-alone treated cells. Subtracting the background CCL2 concentration showed a UTP response with a mean of 23 pg/mL (n = 4 subjects, range 8-49 pg/mL).
We then determined whether a longer incubation time of 6 hours would reveal an increased CCL2 secretory response to UTP. Analysing the data collectively, we found a significant increase in CCL2 in response to LPS (mean 645 ± 55 pg/mL, n = 5 donors, P < 0.05, one-way ANOVA with Dunnett's post hoc test) compared to background (mean 154 ± 55 pg/mL, n = 5 donors). Treatment with 10 μM UTP (mean 134 ± 47 pg/mL) or 100 μM UTP (229 ± 72 pg/mL, n = 5 donors) was not significantly different from background CCL2 levels (Figure 5(d)). Again, examining each subject individually, all 5 donors responded to 100 μM UTP with CCL2 levels ranging from 19 to 129 pg/mL above basal secretion, with a mean value of 76 pg/mL.
To determine whether we could measure CCL2 gene induction in primary human monocytes, we performed qPCR analysis of CCL2 gene induction by UTP (100 μM) and LPS after a 6-hour stimulation. We found variable induction of CCL2 and CCL20 in response to either UTP or LPS (Figure 6, n = 3-4 donors).
Does P2Y2 Play a Role in CCL2 Secretion in Human Macrophages?
With such small responses to UTP in monocytes and unstimulated THP-1 monocytic cells, we decided to differentiate THP-1 to macrophage-like cells using IFN-γ and LPS [20]. Our previous work was performed using an alveolar macrophage cell line and peritoneal macrophages and measured robust CCL2 production [10]. IFN-γ/LPS-differentiated THP-1 cells (48-hour treatment) were stimulated with either LPS, UTP, or media control, and CCL2 was measured in supernatants collected at 6 and 24 hours. A much higher basal secretion of CCL2 was measured compared to unstimulated THP-1 cells, but both LPS and UTP could further increase CCL2 secretion at both timepoints (Figure 7). To begin to address the variability in responses observed with primary monocytes, we focused on nonsynonymous single nucleotide polymorphisms in P2Y2. There are three known SNPs in human P2Y2 which may affect functional responses [21,22]. We first sequenced the P2Y2 gene from THP-1 monocytes and found several synonymous SNPs and a single nonsynonymous SNP (rs3741156) altering amino acid 312 from arginine (R) to serine (S) (data not shown).
[Figure 2 caption: The effect of P2Y2 agonists compared with LPS on CCL2 secretion from unprimed THP-1 cells. THP-1 cells were plated in RPMI media containing 1% serum and stimulated with media alone, LPS (1 or 10 μg/mL), UTP (10 μM), or ATPγS (10 μM) for the times indicated above each graph. Mean raw data are plotted in pg/mL ± SEM from three independent experiments. Symbols: * denotes P < 0.05 compared with control, # denotes no significant difference between treatments, and ns denotes not significant with respect to control (one-way ANOVA with Tukey's post hoc test). Standard curves were performed for each ELISA experiment with fits of R² > 0.95.]
We developed an in-house genotyping assay based on restriction enzyme analysis with MwoI. The presence of G or C at position 1269 (NM_002564) correlates with a cut-site for this enzyme. Wild-type individuals carrying G at this position would show 3 fragments of 195, 130, and 3 bp, whereas polymorphic individuals carrying C at this position would show 4 fragments in the PCR restriction assay of 181, 130, 14, and 3 bp (Figure 8(a)). We used this assay to genotype P2Y2 in healthy volunteers and confirmed the genotyping using a custom high-throughput SNP assay (Australian Genome Research Facility). From genotyping a total of 404 subjects we determined an allele frequency of 0.276 for rs3741156 (n = 404 subjects).
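As a consistency check, an allele frequency of 0.276 in 404 subjects corresponds to roughly 223 of the 808 chromosomes carrying C. The genotype counts in the sketch below are hypothetical, chosen only to reproduce the reported frequency; the paper does not report the per-genotype breakdown.

```python
# Back-of-envelope check of the reported allele frequency (hypothetical counts).
n_gg, n_gc, n_cc = 212, 161, 31           # assumed genotype counts, total 404
n_subjects = n_gg + n_gc + n_cc
f_c = (2 * n_cc + n_gc) / (2 * n_subjects)
print(f"n = {n_subjects}, f(C) = {f_c:.3f}")   # ≈ 0.276
```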
We then performed a pilot study to investigate the effect of genotype on chemokine secretion using individuals carrying wild-type (WT) P2Y2 or 312S-P2Y2 receptors (5 subjects per genotype). We stained PBMCs from WT and 312S-P2Y2 subjects for expression level of P2Y2 on CD14-positive cells and found no significant difference in mean fluorescence intensity (n = 3 donors/genotype) (Figure 8(b)). We cultured adherent PBMCs over 7 days to differentiate monocytes to macrophages and stimulated cells with media, 10 μM UTP, or 1 μg/mL LPS for 4 hours (n = 5 donors per genotype group). Supernatants were tested for CCL2 by ELISA and results are shown in Figure 9 as a scatter graph. Responses to both LPS (mean 1054 ± 485 pg/mL) and UTP (mean 882 ± 341 pg/mL) were smaller in WT-P2Y2 macrophages and not significantly different from background CCL2 (mean 433 ± 181 pg/mL, n = 5 donors). In contrast, macrophages expressing 312Ser-P2Y2 secreted significantly higher levels of CCL2 in response to both UTP and LPS (Figure 9).
Discussion
This study is the first to investigate UTP-induced chemokine secretion from THP-1 cells and primary human monocytes and macrophages. Our main finding suggests that UTP can elicit comparable levels of CCL2 to stimulation with the bacterial product LPS in THP-1 cells. We also found that macrophages produce more CCL2 in response to nucleotides than monocytes, even though levels of P2Y2 expression are similar. It is well known in the literature that monocytes and macrophages express P2Y2 receptors amongst other P2Y and P2X receptors [3,4,11,13]. Here we confirm expression of P2Y2 in THP-1 cells using flow cytometry and intracellular calcium measurements as an indirect measure of G protein-coupled receptor activation (Figure 1). The pharmacology for this receptor is relatively poor compared with other purinergic receptors. Several pieces of evidence suggest that the UTP-induced response in THP-1 cells is due to P2Y2 activation. UTP is only known to activate P2Y2, P2Y4, and P2Y6 receptors [23], and, of these, the only receptor responsive to ATP is P2Y2. ATP and UTP are equipotent at P2Y2 receptors, and EC₅₀ values for UTP- and ATP-induced calcium responses were 97 ± 34 nM and 105 ± 59 nM, respectively, in THP-1 cells. Suramin could suppress UTP- and ATP-induced calcium responses as well as the response elicited by UTPγS (Figure 1). However, suramin did not suppress responses induced by the P2Y2-selective agonist 2-thio-UTP (Figure 1). Furthermore, we found that suramin could not suppress calcium responses induced by high concentrations (>1 μM) of nucleotides, and as such we did not use suramin in the subsequent CCL2 experiments.
The calcium experiments were used as an indication that functional P2Y receptors were present on THP-1 cells, and we found that the P2Y6 receptor agonist UDP also elicited a suramin-sensitive calcium response in THP-1 cells (Figure 1). Whilst P2Y6 is likely expressed in THP-1 cells as shown by Yebdri et al. [13], the CCL2 secretion experiments suggest that UDP did not have a significant effect (Figure 3). Furthermore, both UTPγS (a P2Y2- and P2Y4-selective agonist) and 2-thio-UTP (P2Y2-selective) could induce CCL2 secretion above basal levels, but not to the same degree as UTP (a full agonist at P2Y2).
A major aim of this study was to compare nucleotide-induced chemokine secretion to a known proinflammatory signal such as LPS. We chose the THP-1 cell line to perform these experiments to limit variability in responses. The LPS-induced CCL2 secretory response in THP-1 monocytes was low (<100 pg/mL) at early timepoints such as 2 and 6 hours but increased over a 24-hour treatment period (Figure 2). The amount of constitutively produced CCL2 from THP-1 cells in our hands was similar to that measured by Steube et al. [24], who observed a low level of CCL2 secretion from THP-1 in response to LPS compared to other myelomonocytic cell lines [24]. This low constitutive CCL2 secretion is quite different from the high level of constitutive CCL2 expression seen in NR8383 alveolar macrophages in our previous study [10].
We compared the LPS-induced CCL2 response to that of UTP and ATPγS at concentrations known to induce large calcium responses (10 μM) in THP-1 cells. At an early timepoint, 2 hours, the nucleotide-induced CCL2 secretion was comparable to LPS-induced CCL2 secretion (Figure 2). However, the nucleotide-induced response remained stable between 2 and 6 hours, while the LPS-induced response steadily increased over time. In primary human monocytes and macrophages, the kinetics of CCL2 production appeared to be slower in response to either LPS or UTP (Figures 5 and 9). After 4-6 hours of stimulation the LPS-induced CCL2 response became significant above background CCL2 secretion, whereas overall the UTP-induced response remained not significantly elevated above basal levels (Figure 5). Thus in primary human monocytes we observed a large difference between the P2Y2-induced response and the LPS-induced response. Several factors may influence these data, including genetic differences in membrane receptors, expression levels of receptors at the plasma membrane, and differences in intracellular signalling or in CCL2 mRNA stability. However, in human monocyte-derived macrophages the UTP-induced response was again similar to the LPS-induced response (Figure 9), and this was also true in IFN-γ/LPS differentiated THP-1 macrophages after 6 hours (Figure 7).
We investigated whether nucleotides could also induce production of other inflammatory chemokines such as CCL20 (MIP-3α), CCL3 (MIP-1α), and CXCL8 (IL-8). Nucleotide-induced CCL20 production has been demonstrated in human dendritic cells (UDP and ATPγS) [9], CXCL8 production in monocytes [13], and CCL3 production in rodent microglia (UTP, UDP) [7]. Using quantitative PCR we found that LPS upregulated both CCL20 and CXCL8 in THP-1 cells; however, UTP had no significant effect on CCL3, CCL20, or CXCL8 production (Figure 4). In primary human monocytes LPS significantly induced CCL20 gene expression in all 4 donors (42-427-fold increase). In comparison, the nucleotides UTP and UDP did not significantly induce CCL20 (n = 4 donors). Our data show that CCL2 induction in primary human CD14⁺ monocytes was variable in response to either LPS or UTP (Figure 6), confirming the variability seen in the protein secretion experiments. The finding that UTP stimulation of P2Y2 receptors can elicit a similar CCL2 chemokine response to LPS stimulation of TLR4 is novel and potentially relevant for inflammation. UTP is a known danger signal, similar to ATP, and is likely to be present in areas of tissue damage without infection in addition to sites of infection. Such nucleotides may be a driving factor in sterile inflammation and may contribute to disease-associated chronic inflammatory states. It is also important to consider why nucleotides may induce CCL2. In addition to the role of CCL2 as a chemokine for immune cells, it may be released as a mechanism for upregulating other receptors such as P2X4 receptors, as recently described by Toyomitsu et al. [25]. CCL2 may therefore act as an autocrine factor on monocytes/macrophages to prime further inflammatory signalling.
A second major aim of the current study was to investigate P2Y2-induced chemokine production in primary human monocytes and macrophages. We isolated monocytes from peripheral blood of a number of donors to determine whether the nucleotide-induced CCL2 response was detectable. Our data demonstrate a degree of variability in constitutive and induced secretion of CCL2 from monocytes and macrophages. This type of variability has been observed previously for LPS, where different individuals can be classified as high or low responders [26], and other studies have demonstrated variable LPS-induced responses including interleukin-1β (IL-1β) secretion [27] and CCL20 chemokine secretion from human dendritic cells and monocytes [9]. We and others have also previously shown variability in IL-1β secretion in response to P2X7 activation [19,28]. One source of such variability in responses is genetic variation in the form of single nucleotide polymorphisms, and three SNPs have been identified in the human P2RY2 gene [21,22]. Sequencing determined that one SNP, rs3741156, was present in the THP-1 cell line, and other studies have demonstrated that this mutation altering amino acid 312 can affect UTP-induced calcium responses in transfected cells [21]. Other studies have indicated that P2Y-induced calcium signalling is important in switching on CCL2 production in myeloid cells [7]. Monocyte-derived macrophages from individuals carrying the 312Ser-P2Y2 variant displayed a significant CCL2 secretory response compared to individuals expressing a 312Arg-containing receptor (Figure 9). Therefore, SNPs in P2Y2 may correlate with variability in responses. Future studies will investigate whether other SNPs are present in the gene and whether they have a functional effect on receptor signalling.
Conclusions
Nucleotides are effective inducers of the chemokine CCL2 from the THP-1 monocytic cell line, similar to lipopolysaccharide. Human primary macrophages display a more robust CCL2 response to nucleotides than human monocytes. Some of the variability in the CCL2 response to nucleotides could be explained by genetic variation in the P2Y2 gene, with a mutation altering amino acid 312 demonstrating an increased chemokine response.
[Figure 6 caption: UTP induces chemokine gene transcription in human monocytes. Human CD14⁺ monocytes were plated in RPMI media containing 1% serum and stimulated with media alone, LPS (1 μg/mL), UTP (10 μM), or UDP (10 μM) for 6 hours. RNA was extracted, reverse transcribed to cDNA, and probed with quantitative PCR primers for chemokines (CCL2, CCL20, CCL3, and CXCL8) and the reference gene (β-actin). PCR was performed in triplicate using a Rotorgene 2000. Data were analysed using the 2^−ΔΔCt method with a threshold of 0.003, and fold upregulation with respect to the media control is plotted.]
[Figure 9 caption: Macrophages expressing 312Ser-P2Y2 secrete higher levels of CCL2 in response to UTP and LPS. Human adherent PBMCs were plated at 2.5 × 10⁶ cells/well and cultured for 7 days. Media were replaced and cells were stimulated with media, 10 μM UTP, or 100 μg/mL LPS for 4 hours. Each symbol represents a different donor (n = 5 donors per genotype group) and each donor has a different colour. Bars represent the mean data for each condition. ** denotes P < 0.05 using a one-way ANOVA with Dunnett's post hoc test; ns denotes not significant.] | 2016-05-04T20:20:58.661Z | 2014-09-07T00:00:00.000 | {
"year": 2014,
"sha1": "b4a0943056f761cd3b64c9b5645f9f6e0239b113",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/mi/2014/293925.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "02afb15b45a28a81afb123c2f86f4bbe77380295",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15037502 | pes2o/s2orc | v3-fos-license | Role of aquaporin 5 in antigen-induced airway inflammation and mucous hyperproduction in mice
Abstract Airway inflammation and mucus hyperproduction play a central role in the development of asthma, although the mechanisms remain unclear. Aquaporin (AQP)-5 may be involved in this process owing to its contribution to the volume of liquid secreted from the airways. The present study first found overexpression of AQP5 in the airway epithelium and submucosal glands of asthmatics. Furthermore, we aimed to evaluate the role of AQP5 in airway inflammation and mucus hyperproduction during chronic allergic responses to house dust mite (HDM). Bronchoalveolar lavage levels of interleukin (IL)-2, IL-4, IL-10, interferon-γ and Mucin 5AC (MUC5AC), and the number of peribronchial and perivascular cells were measured in AQP5 wild-type and AQP5 knockout (KO) mice. We found that HDM induced airway inflammation, lung Th2 cell accumulation and mucin hypersecretion in C57BL/6 mice but not in AQP5 KO mice. Expression of MUC5AC and MUC5B proteins and genes in the lung tissue was significantly lower in AQP5 KO mice. Thus, our results implicate AQP5 in the development of airway inflammation and mucus hyperproduction during chronic asthma.
Introduction
Allergic asthma is a complex disorder, which involves a significant contribution of environmental stimuli for the manifestation of symptoms and degree of clinical severity. Chronic asthma is pathologically characterized by airway inflammation, increased mucus production, airway remodelling and airway hyper-responsiveness to bronchoconstrictive agents [1].
The aquaporins (AQPs) are a family of small transmembrane proteins that facilitate osmotically driven water transport, and in some cases the transport of small solutes such as glycerol [2]. AQP5 is expressed in the apical membrane of type I alveolar epithelial cells, acinar epithelial cells in submucosal glands and large airway epithelia [3,4]. The volume of liquid secreted from the nasopharynx and the upper airways was 2-fold lower in AQP5 knockout (KO) mice, resulting in hypertonic fluid with elevated protein concentration [5]. In addition, previous studies found that AQP5 might be involved in the regulation of MUC5AC production [6], airway hyper-responsiveness to cholinergic stimulation and altered lung resistance and dynamic compliance [7], lung infection [8], airway inflammation [9] and acute lung injury [10].
House dust mite (HDM) represents one of the most common aeroallergens in allergy and asthma, affecting approximately 10% of the population. Chronic exposure to HDM extract in mice can lead to persistent airway inflammation, hyper-responsiveness and remodelling, and is an accepted model of chronic asthma [11]. In the present study, we sought to validate whether AQP5, as a disease-related target, is overexpressed in asthmatics, and furthermore to investigate the potential role of AQP5 in airway inflammation and mucin hypersecretion induced by chronic exposure to HDM. We compared the effects of chronic HDM exposure on the development of airway eosinophilia, cytokine production and mucin secretion in AQP5 KO and wild-type (WT) mice. Our results showed, for the first time, overexpression of AQP5 in the airway epithelium and submucosal glands of asthmatics, and found that the expression of MUC5AC and MUC5B genes and proteins, the number of inflammatory cells and the production of Th2 cytokines were markedly lower in AQP5 KO mice. Thus, our data indicate that AQP5 may act as a novel regulator of the allergic response and a therapeutic target for treating mucin hypersecretion and airway inflammation in chronic asthma.
Animals
AQP5 KO mice (a gift from Dr. A. S. Verkman, University of California) were generated as previously described [12] and back-crossed to a C57BL/6 background for 6 to 10 generations. The experiments were performed on litter-matched (age 7-8 weeks, body weight 20-25 g, female) wild-type (AQP5 WT) and AQP5-null (AQP5 KO) mice. Mice were genotyped from DNA isolated by tail clips with PCR primers to the AQP5 gene [12]. The mice were kept on a 12:12 hr night-day rhythm, fed standard mouse chow and provided water ad libitum. This study was approved by the Animal Care Committee of Fudan University Zhongshan Hospital.
Chronic asthma model
Mice were challenged by intranasal administration of 25 μg of purified HDM extract (Dermatophagoides pteronyssinus; ALK, Hørsholm, Denmark) in a total volume of 10 μl of saline once a day, 5 days a week, for up to five consecutive weeks. Phosphate-buffered saline (PBS) at the same volume without HDM was used in the control group. There were 8-10 mice in each group.
Bronchoalveolar lavage (BAL)
The trachea was cannulated and the bronchoalveolar space was lavaged with 0.5 ml PBS, of which approximately 0.3 ml was withdrawn.
Histological evaluation
Twenty-four hours after the last HDM challenge, lungs were harvested, fixed in 10% neutral-buffered formalin and embedded in paraffin. Sections (4 μm) of specimens were mounted onto 3-aminopropyltriethoxysilane-coated slides. The morphology and leucocyte infiltration in the tissue were assessed using haematoxylin and eosin staining. Inflammatory changes were graded on a scale of 0-5 for perivascular, bronchiolar and submucosal gland eosinophilia [13]. Quantitative analysis of pathology was performed using a morphometric system.
Expression of AQP5 protein in human bronchi
Lower airways without cancer were selected for pathological evaluation. The layer of airway epithelial cells in patients with chronic asthma was obviously thicker than in those without chronic asthma. Hypertrophy of epithelial cells in the mucosal layers and submucosal glands was noticed in asthmatics. Immunohistochemical analysis demonstrated that AQP5 was localized in normal airway epithelium (Fig. 1A) and submucosal glands (Fig. 1C) of the bronchi. The expression of AQP5 proteins was obviously higher in the epithelial cells of airway mucosa (Fig. 1B) and submucosal glands (Fig. 1D) in asthmatics.
Allergic inflammation in HDM-treated mice
Administration of HDM for 5 weeks to C57BL/6 WT mice led to a significant increase in pulmonary oedema, epithelial hypertrophy in the terminal airway and leucocyte infiltration (Fig. 2B), although not in AQP5 KO animals (Fig. 2D). The total scores of histopathology in HDM-treated AQP5 WT mice were significantly higher than those in PBS-treated animals (Table 1).
Role of AQP5 in HDM-induced cytokine production
BAL levels of IL-4 (Fig. 3A) and IL-10 (Fig. 3B) in animals challenged with HDM were significantly increased compared with those given PBS (P < 0.05 and 0.01, respectively). AQP5 KO mice had significantly lower levels of IL-4 and IL-10 than WT mice after chronic exposure to HDM (P < 0.05). BAL levels of IL-2 (Fig. 3C) and IFN-γ (Fig. 3D) were higher in AQP5 KO mice than in WT mice after chronic exposure to HDM.
Airway epithelial mucin changes after chronic HDM exposure
Chronic exposure to HDM induced obvious metaplasia (Fig. 4A-2), hyperplasia and hypertrophy of goblet cells staining positively with periodic acid-Schiff for mucin in WT mice, as compared with animals challenged with PBS (Fig. 4A-1) and AQP5 KO mice challenged with HDM (Fig. 4A-4). There was no significant difference in goblet cell metaplasia between WT mice with PBS and AQP5 KO mice with PBS. However, the number of goblet cells in the small airway of HDM-challenged WT animals, measured by a morphometric system, was significantly higher than that in PBS- and HDM-challenged AQP5 KO animals (P < 0.01 and 0.05, respectively, Fig. 2B). There was no significant difference in goblet cell alterations between PBS-challenged WT and AQP5 KO animals.
Table 1 Severity of inflammatory cellular infiltration in lung tissue after immunization
Inflammatory changes were graded by histopathological assessment using a semi-quantitative scale of 0-5. Results are presented as median, with range in parentheses (n = 8/group). *P < 0.01 versus animals challenged with PBS, †P < 0.05 versus WT mice challenged with HDM.
Fig. 2 Histological findings of lung tissues with peripheral airways (haematoxylin and eosin, ×400 original magnification) from WT mice with PBS (A) or HDM (B) and AQP5 KO mice with PBS (C) or HDM (D) after the intranasal challenges, once a day, 5 days a week for 5 weeks.
Airway MUC5AC and MUC5B changes after chronic allergen exposure
Figure 5 demonstrates that more cells with positive staining for MUC5AC (Fig. 5A) and MUC5B (Fig. 5B) were observed in the small airway of WT mice after chronic exposure to HDM, as compared with AQP5 KO animals. The number of MUC5AC- (Fig. 6A) and MUC5B-positive (Fig. 6B) cells, as well as the expression of MUC5AC (Fig. 7B) and MUC5B (Fig. 7C) in AQP5 KO mouse lung tissue, was significantly lower than in WT mice after HDM challenge.
Discussion
The AQPs are a family of small (30 kD monomer) integral membrane proteins that function as selective water transporters, and in some cases they also transport glycerol (aquaglyceroporins). Of 13 related AQPs in mammals, at least four are expressed in the lung and airways, e.g. AQP1 in microvascular endothelia, AQP3 and AQP4 in airway epithelia and AQP5 in the apical membrane of type I alveolar epithelial cells, acinar epithelial cells in submucosal glands and large airway epithelia [2]. There has been no clinical study on changes of AQP5 expression in asthmatics, and only one study demonstrated that the gene and protein expression of AQP5 was altered in patients with chronic obstructive pulmonary disease [14]. The present study first examined the expression of AQP5 proteins by immunohistochemistry and found overexpression of AQP5 protein in the asthmatic airway epithelium and submucosal glands of the bronchi as compared to controls. Owing to the lack of reliable, specific and nontoxic inhibitors against AQPs, the significance of AQPs has been examined through the use of gene KO mice or primary cell cultures derived from AQP KO mice. In the present study, we investigated the potential involvement of AQP5 in chronic HDM-induced mucus hyperproduction by using AQP5 KO mice.
Inflammatory cytokines have been found to be involved in overproduction and overexpression of mucus proteins and genes in the airway epithelium [15,16]. For example, Th2 cytokines like IL-4 have been suggested to regulate mucin gene expression [17] and were down-regulated in the AQP5 KO mouse, suggesting that AQP5 may be involved in robust Th2-driven responses to allergens. The fact that AQP5 deletion resulted in increased levels of IFN-γ and IL-2 suggests that there may be a switch from Th2 towards Th1 responses in which AQP5 plays a critical role. Th2 cytokines were found to be important in eosinophil recruitment, airway hyper-responsiveness and mucus hypersecretion [18,19]. It is possible that AQP5 may be involved in the development of HDM-induced chronic airway inflammation and mucus hyperproduction through the overproduction of these inflammatory mediators. We found the formation of metaplasia, hyperplasia and hypertrophy of airway goblet cells and peri-bronchial accumulation of inflammatory cells in WT mice following chronic HDM exposure, and these changes were obviously less marked in AQP5 KO mice. Although animals with this genetic modification may have physiological differences, the present study did not find significant alterations in any measured parameters between WT and AQP5 KO mice challenged with PBS.
Of the mucin genes in adult human lung [20], MUC5AC and MUC5B appear to be the predominant genes expressed and the most abundant in mucus secretions [21]. Our data provide solid evidence to support the involvement of AQP5 in the overproduction of MUC5AC and MUC5B. As in the acute allergic reaction [22], the role of AQP5 in chronic airway inflammation and mucosal hyperproduction may involve down-regulation of antigen-presenting cells, because dendritic cells in AQP5 KO mice had less antigen-presenting function [22][23][24]. Mucins, the cystic fibrosis transmembrane conductance regulator protein (CFTR) and AQP5 are involved in the formation and composition of the airway surface liquid, essential for proper mucociliary clearance. Expression of AQP5 and CFTR was found in the airway [25], of which CFTR expression was enhanced whereas AQP5 expression was completely abolished after IL-13 treatment. However, amiloride administration did not inhibit water permeability of airway AQPs [26]. IL-13 did not affect AQP3 or AQP4 expression, whereas it completely inhibited the expression of both AQP5 mRNA and protein [25].
Our findings are supported by a number of previous studies, e.g. pilocarpine-induced fluid secretion [5] and AQP-deletion-impaired osmotic equilibration [27]. However, another study showed that mice deficient in AQP5 were hyper-responsive to cholinergic stimulation and had high lung resistance and low dynamic compliance [7]. The differences between the present and previous study [7] include the genetic background (129svj mice in the previous study and CD1/C57 in the present) and the models (bronchostimulator-induced fluid secretion in the previous study and HDM-induced chronic asthma in the present). Previous studies showed that a bronchodilating β-adrenergic agonist increased AQP5 abundance and redistribution of AQP5 to the apical membrane of mouse lung epithelial cells through an adenosine 3′,5′-monophosphate (cAMP)-protein kinase A (PKA)-dependent pathway [28]. However, other studies demonstrated that siRNA-induced AQP5 down-regulation was associated with MUC5AC over-expression in the SPC-1 cell line [29]. We believe the discrepancy between our findings and those of other investigators may be due to the differences between the in vitro and animal models. There is a great need to investigate and understand the role of AQP5 in acute and chronic airway diseases.
In conclusion, our data demonstrated that chronic challenge with HDM resulted in the development of airway inflammation and mucus hyperproduction in WT mice, and significantly less so in AQP5 KO mice. Levels of leucocyte recruitment, cytokine production and mucus hyperproduction in AQP5 KO mice after HDM challenge were still higher than those with PBS challenge. Thus, our data indicate that AQP5 may act as a novel therapeutic target for treating chronic airway diseases. | 2016-05-14T14:03:39.380Z | 2010-06-09T00:00:00.000 | {
"year": 2010,
"sha1": "8244da630410256f75f3589a65440c45e03f734e",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1582-4934.2010.01103.x",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8244da630410256f75f3589a65440c45e03f734e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
199654202 | pes2o/s2orc | v3-fos-license | A Q-Cube Framework of Reinforcement Learning Algorithm for Continuous Double Auction among Microgrids
: Decision-making of microgrids in the condition of a dynamic uncertain bidding environment has always been a significant subject of interest in the context of energy markets. The emerging application of reinforcement learning algorithms in energy markets provides solutions to this problem. In this paper, we investigate the potential of applying a Q-learning algorithm into a continuous double auction mechanism. By choosing a global supply and demand relationship as states and considering both bidding price and quantity as actions, a new Q-learning architecture is proposed to better reflect personalized bidding preferences and response to real-time market conditions. The application of battery energy storage system performs an alternative form of demand response by exerting potential capacity. A Q-cube framework is designed to describe the Q-value distribution iteration. Results from a case study on 14 microgrids in Guizhou Province, China indicate that the proposed Q-cube framework is capable of making rational bidding decisions and raising the microgrids’ profits.
Introduction
The power system has experienced the evolution from a traditional power grid to the smart grid and then to the Energy Internet (EI), driven by economic, technological and environment incentives. Distributed energy resources (DERs) including distributed generation (DG), battery energy storage system (BESS), electric vehicle (EV), dispatchable load (DL), etc. are emerging and reconstructing the structure of power systems. In future EI, renewable energy sources (RESs) are regarded as the main primary energy owing to the wide application of solar panels, wind turbines and other new energy technologies [1]. According to a recent report from the U.S. Energy Information Administration (EIA), the U.S. electricity generation from RESs surpassed coal this April for the first time in history, providing 23% of the total electricity generation compared to coal's 20%. Meanwhile, the proportion of RES generation in Germany has already reached 40% in 2018. The considerable increase of RESs encourages a significant decrease in energy prices, which drives the reform of energy trading patterns and behaviors in the power system. In addition, flexible location and bi-direction energy trading ability of DERs lead to the transformation of management mode from centralized to decentralized [2]. In this process, the introduction of economic models to this decentralized system makes the power grid truly equipped with market characteristics [3].
As the aggregators of DERs in certain geographical regions, microgrids are important participants in the power market [4]. By implementing internal dispatch, microgrids can provide economic benefits through applying demand response projects and avoiding long-distance energy transmission [5]. Reinforcement learning (RL) methods have been applied to establish better bidding mechanisms [15,24-28]. Nicolaisen et al. [15] applied a modified Roth-Erev RL algorithm to determine the bidding price and quantity offers in each auction round. The authors in [25] presented an exact characterization of the design of adaptive learning rules for a contained energy trading game concerning privacy policy. Cai et al. [26] analyzed the performance of evolutionary game-theory based trading strategies in the CDA market, which highlighted the practicability of the Roth-Erev algorithm. The authors in [27] presented a general methodology for searching CDA equilibrium strategies through the RL algorithm. Residential demand response enhanced by the RL algorithm was studied in [28] by a consumer automated energy management system. In particular, Q-learning (QL) stands out because it is a model-free algorithm and easy to implement. The authors in [29] considered the application of QL with temperature variation for bidding strategies. Rahimiyan's work [30,31] concentrated on the adaptive adjustment of QL parameters with the energy market environment. Salehizadeh et al. [32] proposed a fuzzy QL approach in the presence of renewable resources under both normal and stressful cases. The authors in [33] introduced the concept of scenario extraction into a QL-based energy trading model for decision support.
The existing literature shows the potential of combining QL algorithms with energy trading mechanisms to obtain better market performance. However, the following three issues remain unsettled and motivate this paper's research: (1) How the QL algorithm can be combined with energy trading mechanisms to better describe the characteristics of the future energy market. Bidding in the future energy market is close to real-time, enhanced by ICTs, so the iteration of Q-values should be round-based rather than hour-based or day-based, whereas the time scale of updating Q-values in [29] cannot reflect the latest market status. In addition, for a multi-microgrid system, the QL algorithm should be carried out separately by each microgrid. The authors in [34] proposed applying a fitted-Q iteration algorithm in the electric power system, but more appropriate methods still need to be developed. (2) How the coupling relationship between bidding price and quantity should be modeled and reflected by the Q-values of the QL algorithm. Little research in the above literature has addressed the impact of bidding quantity on bidding results. Wang's work touched on the issue of bidding quantity [25], but only the bidding strategies of sellers in the market were discussed, and the energy trading game presented in [25] adopted a discontinuous pricing rule. The impact of BESS on adjusting bidding quantity was mentioned in [35] without considering the ramping restriction, which is not practical in realistic scenarios. The authors in [36] applied an extended fitted-Q iteration algorithm to control the operation modes of battery storage devices in a microgrid; however, only three actions were taken into consideration and the (dis)charge rate constraints were ignored. (3) How the QL parameters should be decided by each microgrid, considering the real-time energy market status, microgrid preferences, historical trading records, and other time-varying factors.
In QL algorithms, the risk characteristic of a microgrid is reflected by the values of its QL parameters. However, the QL-based models in the existing literature try to identify the bidding status from experience gained over a series of trials in the current bidding rounds, ignoring the importance of historical trading records. The authors in [30,31] noticed this issue, but the relationship between QL parameters and bidding performance was not analyzed in detail. In addition, the progress of QL research in other areas [37] has not been introduced into the energy trading market.
To tackle the above issues, we formulate the energy trading problem among microgrids as a Markov Decision Process (MDP) and investigate the potential of applying a Q-learning algorithm within a continuous double auction mechanism. Taking inspiration from related research on P2P trading and heuristic algorithms, a Q-cube framework for the Q-learning algorithm is proposed to describe the Q-value distribution of microgrids, which is updated iteratively in each bidding round. To the best of the authors' knowledge, no previous work has proposed a non-tabular formulation of Q-values for decision-making in the power grid.
The contributions of this paper are summarized as follows:
(1) The energy trading problem among microgrids in the distribution network is framed as a sequential decision problem. The non-cooperative energy market operation and bidding behaviors are modeled with a continuous double auction mechanism, which decreases the need for centralized control and suits the weakly-centralized nature of the distribution network.
(2) The high-dimensional continuous problem is tackled by the Q-learning algorithm. In addition to the bidding price, the bidding quantity of microgrids is considered as the second dimension of the bidding action space and can be adjusted during the bidding process with the assistance of the BESS, by which the coupling relationship between energy trading price and quantity during bidding is handled. Related parameter setting and sharing mechanisms are designed.
(3) A non-tabular representation of Q-values considering two dimensions of the action space is designed as a Q-cube. The Q-value distribution in the proposed Q-cube is in accordance with the behavior preferences of the microgrids.
(4) The real-time supply and demand relationship is highlighted as the state in the proposed Q-learning algorithm. A normal probability density distribution is divided into eight equal parts, yielding eight states shared by all the microgrids. In addition, the idea of 'local search' from heuristic algorithms is applied in the proposed Q-learning algorithm for generating the action space. This approach not only takes the characteristics of power grids into consideration, but also achieves a compromise between exploitation and exploration in the action space.
(5) The proposed continuous double auction mechanism and Q-learning algorithm are validated on a realistic case from Hongfeng Lake, Guizhou Province, China. Profit comparison with traditional and P2P energy trading mechanisms highlights the feasibility and efficiency of the proposed method. A 65.7% and 10.9% increase in the overall profit of the distribution network can be achieved by applying the Q-learning based continuous double auction mechanism compared with the two mechanisms mentioned above, respectively.
The rest of this paper is organized as follows. Section 2 presents an overview of the non-cooperative energy trading market, along with a description of the proposed Q-learning based continuous double auction mechanism. The Q-cube framework for the Q-learning algorithm is introduced in Section 3. Case studies and analyses are presented in Section 4 to verify the efficiency of the proposed Q-cube framework and continuous double auction mechanism. Finally, Section 5 draws conclusions and outlines future work.
Mechanism Design for Continuous Double Auction Energy Trading Market
In this section, we provide an overview of the non-cooperative energy trading market and an analytical description of the Q-learning based continuous double auction mechanism.
Non-Cooperative Energy Trading Market Overview
In a future distribution network, the DNO is the regulator of the local energy trading market, as it provides related ancillary services for market participants: (1) by gathering and analyzing operation data from ICT, the DNO monitors and regulates the operation status of the distribution network; (2) by carrying out centralized safety checks and congestion management, the DNO guarantees that the power flow in every transmission line remains within its limits; (3) by adopting reasonable economic models, the DNO shapes the energy trading patterns and preferences of market participants. With the reform of the traditional energy market, along with the application of advanced metering and ICT, a trend toward peer-to-peer energy trading is emerging. As peers in this energy market, MGOs are assumed to have no information on their peers' energy trading preferences and internal configurations, which addresses the concern of privacy protection. In addition, each peer in this energy market is blind to its counterparties' bidding targets; it joins the energy trading market to satisfy its own energy needs to the greatest extent rather than to seek cooperation. Each MGO can adjust its bidding price and quantity according to public real-time market information and private historical trading records. Accordingly, the energy trading among microgrids in the distribution network can be formulated as a non-cooperative peer-to-peer energy trading problem. Figure 1 shows the process of the non-cooperative energy trading market discussed in this paper.
Consider a distribution network containing a number of networked microgrids in a certain area. In the hour-ahead energy trading market before Time Slot N, each MGO deals with the internal coordinated dispatch (ICD) of local DERs and residents based on the DERs' power prediction and the BESS's state of charge (SOC) restriction information. Meanwhile, the DNO performs the distribution network scheduling for further procedures. A Q-learning based continuous double auction among microgrids is implemented according to the ICD results and the BESS's SOC status; detailed descriptions are presented in the following sections. After the safety check and congestion management performed by the DNO, energy trading commands are confirmed and transmitted to each MGO. As the MGOs are empowered to set real-time prices for regional energy, internal pricing for DER power and charge/discharge scheduling for the BESS are completed in this period. Energy is exchanged according to the pre-agreed trading contracts in Time Slot N under the regulation of the DNO. Sufficient back-up energy supply and storage capacity are provided to absorb the impact of extreme weather and dishonest behavior by market participants. A market clearing process is carried out after Time Slot N to ensure the accurate and timely settlement of energy transactions. Penalties are also imposed for the abnormal market behaviors mentioned above. The security and timeliness of the market clearing process can be guaranteed by advanced ICT such as blockchain, smart meters, 5G, etc.
Q-Learning Based Continuous Double Auction Mechanism
This paper proposes a Q-learning based continuous double auction (QLCDA) mechanism for the energy trading market. Figure 2 presents the process of the proposed QLCDA in one time slot.
Before the QLCDA starts in one time slot, each MGO solves the ICD problem and generates its initial bidding information. The SOC check and the charge/discharge restrictions of the BESS are also settled in this initialization stage. In each round of the CDA (indexed by n), each MGO reports its energy trading price and quantity to the DNO. Note that the trading quantity is updated in each round, so it is possible for an MGO to change its role between buyer and seller during the bidding process. Thus, an identity confirmation is made as the first step in the CDA and the numbers of buyers ($nb$) and sellers ($ns$) are obtained. Then, the DNO calculates and releases the overall supply and demand relationship (SDR) to the networked microgrids. Meanwhile, the reference prices for buyer and seller microgrids are calculated and released, which are the average prices of selling and buying energy in the real-time market. MGOs update their bidding price and quantity according to the real-time SDR and historical trading records based on the Q-learning algorithm; the SOC restrictions are also taken into consideration to limit the behavior of the BESS in each microgrid. The bidding prices of sellers and buyers are sorted by the DNO such that $price^b_{nb} < price^b_{nb-1} < \cdots < price^b_1$ and $price^s_1 < price^s_2 < \cdots < price^s_{ns}$. Once the price sequences of sellers and buyers intersect, i.e., $price^s_1 < price^b_1$, the MGOs whose bidding prices lie in this interval are chosen to join the energy sharing process. The actual trading price and quantity are decided in this step and the bidding quantity of each microgrid is updated based on the sharing results. If there is still untraded energy in the market, the QLCDA repeats until the deadline of bidding rounds (N denotes the maximum bidding round in one time slot). If energy demand or supply is fully satisfied before the deadline, the QLCDA stops in the current round. The results of the QLCDA are confirmed by the MGOs and sent to the DNO for further energy allocation and safety checks. Detailed descriptions of the initialization and the energy sharing mechanism are presented in the following sections.
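To make the round structure concrete, the following minimal Python sketch walks through the QLCDA rounds of one time slot. The bid-update rule here is a deliberately simplified placeholder (the real update is Q-learning based and also adjusts quantities via the BESS), and all names and values are illustrative rather than taken from the paper.

```python
# Minimal runnable sketch of the QLCDA rounds in one time slot.
# Bids are mutable [price, quantity] pairs; the bid update is a stub.
def qlcda_time_slot(buyer_bids, seller_bids, max_rounds=10):
    for n in range(max_rounds):
        # Identity confirmation: only MGOs with untraded quantity take part.
        buyers = [b for b in buyer_bids if b[1] > 1e-9]
        sellers = [s for s in seller_bids if s[1] > 1e-9]
        if not buyers or not sellers:
            break  # demand or supply fully satisfied before the deadline
        # Stub update: buyers concede upward, sellers downward.
        for b in buyers:
            b[0] *= 1.02
        for s in sellers:
            s[0] *= 0.98
        # Intersection check: highest bid vs. lowest ask.
        if max(b[0] for b in buyers) >= min(s[0] for s in sellers):
            print(f"round {n}: price sequences intersect -> energy sharing")
            break  # the layered sharing step would run here
    return buyer_bids, seller_bids

qlcda_time_slot([[0.45, 30.0]], [[0.55, 25.0]])  # intersects after a few rounds
```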
Initialization Setups
Since the ICD of each microgrid is completed by its MGO before the QLCDA, the scheduling plans are assumed to be fixed during one time slot; the initial bidding quantities of the QLCDA are therefore set to the ICD results. For seller microgrid i, the initial bidding price in time slot t is derived from the grid tariffs and the market price limits; similarly, buyer microgrid j submits its initial bidding price. Here, $p^t_{grid,buy}$ and $p^t_{grid,sell}$ represent the energy purchase and sell prices of the grid in time slot t, respectively; $p^t_{hl}$ and $p^t_{ll}$ are the highest and lowest bidding price limits of this market in time slot t; and $rand_i$ and $rand_j$ are random real numbers drawn from the range [0.95, 1] to obtain a higher initial bidding price for sellers and a lower one for buyers.
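The exact initialization formulas are not given here, so the short Python sketch below encodes only one plausible reading of the initialization: sellers open near the market's upper price limit and buyers near its lower limit, each scaled by a random factor from [0.95, 1]. The limit values and the exact scaling are assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Market price limits for one time slot (illustrative values, CNY/kWh).
p_hl, p_ll = 1.197, 0.300

# Assumption: sellers open high, buyers open low, scaled by rand in [0.95, 1].
rand_i = rng.uniform(0.95, 1.0)      # seller i
rand_j = rng.uniform(0.95, 1.0)      # buyer j

price_seller_init = rand_i * p_hl    # high opening ask, just below the cap
price_buyer_init = p_ll / rand_j     # low opening bid, just above the floor

print(f"seller opens at {price_seller_init:.3f}, buyer at {price_buyer_init:.3f}")
```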
Since each microgrid is already equipped with a BESS, it is essential to consider the application of the BESS in the QLCDA and to make full use of its charging and discharging capacity to improve the real-time SDR inside the distribution network. The charging ability of microgrid i's BESS is determined by its capacity, SOC, ramp constraint, and efficiency, and the discharging ability is calculated analogously. Here $C_i$ is the capacity of microgrid i's BESS and $SOC^t_i$ is the initial SOC of the BESS in time slot t. Due to the limitations of material technology, the charging and discharging behaviors of the BESS are constrained: $SOC^{\Delta,charge}_i$ and $SOC^{\Delta,discharge}_i$ represent the ramp constraints on charging and discharging of microgrid i's BESS, respectively. Practical operation of the BESS causes energy loss, so we set $\eta^{charge}_i$ and $\eta^{discharge}_i$ as the charging and discharging efficiencies of the BESS, respectively. During the QLCDA process, the updated bidding quantity cannot exceed the restrictions imposed by these two abilities. $\Delta t$ is the bidding cycle of this energy trading market.
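The (dis)charging-ability formulas themselves are not shown here, so the following sketch assumes a common min() structure: available headroom (or stored energy) versus the per-cycle ramp limit, corrected for efficiency. The structure and parameter values are assumptions, not the paper's exact equations.

```python
# Hedged sketch of BESS (dis)charging-ability bounds.
def charge_ability(capacity_kwh, soc, soc_ramp_charge, eta_charge, dt_h):
    """Max energy (kWh) the BESS can absorb from the market in one cycle."""
    headroom = (1.0 - soc) * capacity_kwh              # room left in the battery
    ramp_cap = soc_ramp_charge * capacity_kwh * dt_h   # ramp constraint per cycle
    # Charging losses: more energy must be bought than ends up stored.
    return min(headroom, ramp_cap) / eta_charge

def discharge_ability(capacity_kwh, soc, soc_ramp_discharge, eta_discharge, dt_h):
    """Max energy (kWh) the BESS can deliver to the market in one cycle."""
    available = soc * capacity_kwh                     # energy currently stored
    ramp_cap = soc_ramp_discharge * capacity_kwh * dt_h
    # Discharging losses: less energy reaches the market than leaves the cells.
    return min(available, ramp_cap) * eta_discharge

# Example: 200 kWh BESS at 60% SOC, 0.3 ramp limit, 95% efficiency, 0.5 h cycle.
print(charge_ability(200, 0.6, 0.3, 0.95, 0.5))     # ~31.58 kWh buyable
print(discharge_ability(200, 0.6, 0.3, 0.95, 0.5))  # 28.5 kWh sellable
```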
Energy Sharing Mechanism
Once the price sequences of buyers and sellers intersect, i.e., $price^s_1 < price^b_1$, the microgrids whose bidding prices lie within this interval are chosen to enter the energy sharing process. Due to the uncertainty and complexity of the price intersections, a layering method and a price-prioritized, quantity-weighted sharing rule are combined to solve the energy sharing problem.
The numbers of selected buyer and seller microgrids are $nb^{share}$ and $ns^{share}$, respectively. Starting from the highest bidding price of the sellers, the buyer microgrids whose bidding prices are higher than $p^s_{ns^{share}}$ and all of the seller microgrids are combined into a sharing layer. These buyer microgrids have priority to trade with the seller microgrids, as they are willing to pay a higher price for each unit of energy. Deals are made in this layer and the related microgrids are removed from the sharing list depending on the situation. The layering method is applied repeatedly until there is no buyer microgrid left in the sharing list or all the energy of the seller microgrids is sold out. The detailed layering process is presented below (a runnable sketch follows the list):
(1) Form a bidding layer according to the above-mentioned method and proceed to step (2).
(2) Allocate the energy in this layer. If energy demand exceeds supply in this layer, the sharing process ends after allocation. If energy supply exceeds demand in this layer, proceed to step (3).
(3) Remove the buyer microgrids in this layer from the optional sharing list, as their energy demands are satisfied. Remove the seller microgrids whose selling prices are higher than the current highest buyer price, as there are no potential buyers for them. Return to step (1) to form a new bidding layer.
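The following self-contained Python sketch implements this layering loop on toy bids; the data structures and names are illustrative, and the within-layer settlement uses the proportional rule described in the next paragraphs.

```python
def share_by_layers(buyers, sellers):
    """buyers/sellers: iterables of (price, quantity); returns per-layer totals."""
    buyers = sorted([list(b) for b in buyers], key=lambda b: -b[0])   # highest bid first
    sellers = sorted([list(s) for s in sellers], key=lambda s: s[0])  # lowest ask first
    layers = []
    while buyers and sellers:
        top_ask = sellers[-1][0]
        # Step (1): buyers bidding at or above the highest ask form a layer
        # with all remaining sellers (a prefix of the sorted buyer list).
        layer = [b for b in buyers if b[0] >= top_ask]
        if not layer:
            sellers.pop()          # no buyer can afford the highest ask
            continue
        demand = sum(b[1] for b in layer)
        supply = sum(s[1] for s in sellers)
        traded = min(demand, supply)
        layers.append((len(layer), len(sellers), traded))
        if demand >= supply:
            break                  # step (2): demand >= supply ends the process
        # Sellers part with 'traded' energy in proportion to their offers
        # (the quantity-weighted rule described below).
        for s in sellers:
            s[1] -= traded * s[1] / supply
        # Step (3): satisfied buyers leave; unaffordable sellers leave.
        buyers = buyers[len(layer):]
        top_bid = buyers[0][0] if buyers else 0.0
        sellers = [s for s in sellers if s[0] <= top_bid]
    return layers

print(share_by_layers([(0.60, 40), (0.55, 30), (0.50, 20)],
                      [(0.45, 25), (0.48, 25), (0.58, 25)]))
```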
Take the situation in Figure 3 as an example. Two buyer microgrids ($p^b_1$ and $p^b_2$) and three seller microgrids ($p^s_1$, $p^s_2$, and $p^s_3$) are selected to form Layer 1, as shown in Figure 3a. After energy allocation in Layer 1, all of the seller microgrids have surplus energy, so $p^b_1$ and $p^b_2$ are removed from the sharing list as their energy demands are satisfied. $p^s_3$ is also removed from the list, as no buyer microgrid's bidding price is higher than its ask. Afterwards, Layer 2 is formed containing one buyer microgrid ($p^b_3$) and two seller microgrids ($p^s_1$ and $p^s_2$), as shown in Figure 3b. The sharing process ends after the energy allocation in this layer. For each layer in the energy sharing process, without loss of generality, we propose a price-prioritized, quantity-weighted sharing rule covering two situations. Figure 4 gives the sharing results of examples of these two situations, in which the bidding price/quantity of each deal is given below/above the figure. The energy quantities of buyers in a layer are sorted by their quoted prices in descending order, while those of sellers are sorted in ascending order. This rule ensures that buyers with higher bid prices get priority access to lower-priced energy. In Figure 4a, for the sharing process in round n, when $\sum q^b_i \ge \sum q^s_j$, every seller sells out its energy; the excess demand is cut and participates in the next round of bidding in the energy market. However, when $\sum q^b_i < \sum q^s_j$, as shown in Figure 4b, the sellers have to share the excess supply fairly. A seller microgrid j's trading quantity is calculated through Equation (6), where $q^n_{cut,j}$ represents the cut quantity for microgrid j in round n. The oversupply burden is shared among the seller microgrids in proportion to their offers and cut from their energy supply. This sharing rule guarantees that each seller microgrid sells a non-negative quantity, which is fairer than an equal-sharing mechanism. After the determination of the sharing layers and trading quantities, the DNO can choose any suitable price within the interval $[p^s_j, p^b_i]$ as the trading price in this time slot for buyer microgrid i and seller microgrid j. We assume both sides of the transaction agree to trade at the price $p_{ij} = \theta \cdot (p^b_i + p^s_j)$, where $\theta \in (0, 1)$ is a predefined constant. Without loss of fairness, $\theta$ is set to 0.5 in this paper.
The proposed energy sharing mechanism ensures that buyer microgrids with higher bidding prices and seller microgrids with lower bidding prices have priority in reaching a deal. In addition, fairness in the energy trading quantities is ensured by the weighted sharing rule.
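Since the exact form of Equation (6) is not reproduced here, the sketch below assumes a proportional cut consistent with the description above (each seller absorbs a share of the oversupply proportional to its offer), together with the mid-point pricing rule $p_{ij} = \theta (p^b_i + p^s_j)$.

```python
def weighted_cut(seller_qtys, total_demand):
    """Scale seller quantities so total supply matches demand (assumed form)."""
    total_supply = sum(seller_qtys)
    excess = total_supply - total_demand
    assert excess >= 0, "rule applies only when supply exceeds demand"
    # Each seller absorbs a share of the oversupply proportional to its offer,
    # which keeps every traded quantity non-negative.
    return [q - excess * q / total_supply for q in seller_qtys]

def trading_price(p_buy, p_sell, theta=0.5):
    """Mid-point pricing agreed by both sides (theta = 0.5 in the paper)."""
    return theta * (p_buy + p_sell)

print(weighted_cut([25, 25, 30], total_demand=60))  # [18.75, 18.75, 22.5]
print(trading_price(0.55, 0.48))                    # 0.515 CNY/kWh
```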
A Q-Cube Framework for Q-Learning in a Continuous Double Auction Energy Trading Market
In a standard Q-learning setting, an agent learns from environment information and interacts with relevant agents. By observing states and collecting rewards, an agent selects appropriate actions to maximize future profits. The agents are independent from one another in terms of both acting and learning. However, the particularities of the energy trading market create a complex energy economic system: the non-cooperative trading pattern, personalized MGO preferences, and time-varying market conditions make the selection of bidding strategies difficult for market participants. As a model-free algorithm, Q-learning is capable of modeling the MGOs' bidding behaviors in a continuous double auction energy trading market. In this paper, a Q-cube framework for the Q-learning algorithm is proposed specifically for this multi-microgrid non-cooperative bidding problem, which also addresses the exploitation-exploration issue.
Basic Definitions for Q-Learning
We base the Q-cube framework on an MDP consisting of a tuple $\langle S, A, S', r \rangle$. Detailed introductions of these elements are given as follows.
State Space
S represents the state space, which describes the state of the MGOs in the real-time energy market. In a multi-agent system, it is impractical to define a different state description for each agent, so a common formulation is preferred. We propose to use the real-time supply and demand relationship to form the state space for the following reasons: (1) The SDR has a decisive impact on bidding results. When energy supply exceeds demand in the distribution network, seller microgrids are more willing to cut their bidding prices to make more deals, and excess supply is preferably stored in the BESS rather than sold to the grid at lower prices. Meanwhile, buyer microgrids are not eager to raise their bidding prices quickly, but they tend to buy more energy for later use, as the trading prices are much cheaper than those of the grid. Analogous interactions between price and quantity for the two roles of market participants exist when energy demand exceeds supply. (2) The SDR reflects the external energy transaction status of the networked microgrids: the more balanced the supply and demand relationship, the less energy the networked microgrids exchange with the distribution network. (3) The SDR describes the bidding situation as public information of the energy trading market, which addresses the issue of privacy protection.
In this paper, the real-time SDR of round n in time slot T is formulated through a normal distribution with $\mu = 0$ and $\sigma = 0.3$, whose value is extended to the interval [0, 2]:

$$SDR^n = 2\,\Phi(CP^n;\, \mu = 0,\, \sigma = 0.3), \qquad (7)$$

where $\Phi$ denotes the normal cumulative distribution function and $CP^n$ is the clear power index, representing the clear power of the energy market in round n divided by a pre-defined constant A.
A pre-selection of the value of $\sigma$ was performed and the results are shown in Figure 5a. A small value of $\sigma$ ($\sigma = 0.1$) causes a sharp increase of the SDR over the interval [−0.25, 0.25], which makes the SDR meaningless over a large range of the clear power index. Meanwhile, a large value ($\sigma = 0.5$) reduces the sensitivity of the SDR when energy supply and demand are close to equilibrium. Therefore, a compromise value ($\sigma = 0.3$) is preferred. The blue curve in Figure 5a shows the SDR under different clear power indices. When $\sum q^n_{seller} = \sum q^n_{buyer}$, $SDR^n = 1$ and energy supply and demand attain equilibrium; when $\sum q^n_{seller} \ge \sum q^n_{buyer}$, $SDR^n \ge 1$, and vice versa. The SDR is sensitive in the interval close to 1, as the equilibrium between energy supply and demand is dynamic and real-time. Given that the SDR of the energy trading market is a continuous variable, it is impossible to treat this MDP problem in an infinite space, and it is impractical to model and simulate the energy trading market with limitless state descriptions. As is common when applying Q-learning algorithms to practical problems, the state space should be divided into a limited number of pieces for a better characterization of the SDR. For the Q-learning algorithm proposed in this paper, the number of states should be even, as the SDR function is symmetric; in addition, the probability of falling into each state should be equal. Without loss of fairness, the probability density distribution of the SDR function is divided into eight blocks with equal integral areas, as shown in Figure 5b. These eight SDR intervals are defined as the eight states in the proposed state space S for all the MGOs. The clear power index is likewise divided into eight intervals, corresponding to the eight intervals of the SDR. When the clear power index is close to 0 (the market is near the equilibrium between energy supply and demand), the interval length of a state is small, since in most time slots the SDR experiences minor oscillations in the bidding rounds near the deadline. For the states whose clear power index is far from 0, however, the interval length is large, as the SDR is not sensitive there, which means that the microgrids in the distribution network want to escape from these states.
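The sketch below implements the scaled-CDF form of Equation (7) reconstructed above (an assumption consistent with the described behavior) and derives the eight equal-probability states from the octiles of N(0, 0.3), using scipy's normal distribution functions.

```python
import numpy as np
from scipy.stats import norm

SIGMA = 0.3  # spread of the SDR mapping (mu = 0, sigma = 0.3)

def sdr(clear_power_index):
    """Map the clear power index onto [0, 2]; SDR = 1 at supply/demand balance."""
    return 2.0 * norm.cdf(clear_power_index, loc=0.0, scale=SIGMA)

# Eight equal-probability states: split the N(0, 0.3) density into octiles.
octile_edges = norm.ppf(np.arange(1, 8) / 8.0, loc=0.0, scale=SIGMA)

def state_index(clear_power_index):
    """Return the state (0..7) whose octile contains the clear power index."""
    return int(np.searchsorted(octile_edges, clear_power_index))

print(sdr(0.0))               # 1.0 -> supply/demand equilibrium
print(octile_edges.round(3))  # narrow intervals near 0, wide in the tails
print(state_index(0.05), state_index(-0.6))
```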
Action
A represents the set of eligible actions of the MGOs, which are the variations of bidding price and quantity in each bidding round. Whereas most previous works aim to increase market participants' profits via dynamic adjustment of the bidding price alone, we propose a two-dimensional formulation of the action for Q-learning. By covering both bidding price and quantity, the action space is extended to a two-dimensional space rather than a set of single price actions, formulated as Equation (9):

$$a^n = (p^n, q^n), \qquad n = 1, 2, \cdots, N. \qquad (9)$$
The basic idea behind the actions in this paper is that each MGO always optimistically assumes that all other MGOs behave optimally (though they often will not, due to their exploration-exploitation nature), and that all MGOs compete fairly in the bidding process. Considering the particularities of the energy trading market and the agent-based simulation environment, the concept of a 'basic action' is introduced to describe the rational, conventional action of each MGO. One point that needs to be emphasized is that the basic action is just one point in the action space, reflecting the typical choice of bidding price and quantity for an MGO. The mathematical expression of the basic price action is given in Equation (10), where $p^T_{i,step}$ represents the price changing step of MGO i, determined by MGO i's initial bidding price $price^{initial}_i$ and historical trading price $price^{history,T}_i$ in time slot T, as shown in Equation (11).
$\beta$ is a regulation factor for the price changing step. As the QLCDA approaches the time deadline, both buyer and seller MGOs are willing to make concessions on the bidding price to close more deals. The time pressure $TP^n$, as presented in Equation (12), describes the degree of urgency over the bidding rounds. The choice of the time pressure function has been discussed in previous research [19]; in this paper, we adopt a simplified form in which the time pressure of each microgrid depends only on the bidding round index, and the historical trading records of each microgrid are ignored in the description of time pressure. $SDRF^n$ is a modification factor based on the real-time SDR. Different expressions are adopted for buyer and seller MGOs, in which $\pi$ is an adjustment coefficient in the range [0.3, 0.5]; the setting of $\pi$ measures the influence of the SDR on the basic bidding price. Accordingly, the basic quantity action is calculated as in Equation (14), where $PR^n_i$ is a reference price factor calculated as a parameter of the normal distribution, with $\lambda = 1$ when MGO i is a buyer and $\lambda = -1$ when MGO i is a seller. $p^{reference,n}_i$ is the reference price of MGO i in round n, calculated as the average price of potential transactions in the market. The values of $\mu$ and $\sigma$ in Equation (15) are the same as those in Equation (7). $\rho$ is a pre-defined adjustment coefficient in the range [0.95, 1] for coordination with the change rate of the SDR in Equation (14).
Since the action space is continuous, it is impossible to explore it exhaustively. The idea of 'local search' from heuristic algorithms is therefore applied in the proposed Q-learning algorithm: we explore the neighborhood of the basic action along the price and quantity dimensions for better bidding performance in the QLCDA process. Based on the basic action obtained above, we search symmetrically in both directions of the price and quantity dimensions, so the number of actions in each dimension is odd. If we chose more than two neighborhoods of the basic action in each direction, the total number of actions would be at least 25, which is impractical in both modeling and simulation. To limit the number of bidding actions and reduce computational complexity, only the closest neighborhoods are taken into account. The neighborhood actions are obtained by offsetting the basic action, where $\xi$ and $\tau$ indicate the proximity of the bidding price and quantity according to bidding experience, respectively; $\xi$ and $\tau$ are independent variables that only describe the neighborhood relationship of bidding price and quantity. Thus, a 3×3 action matrix is created as the set of alternative behaviors of one MGO under a certain state. One point deserves highlighting: the nine actions under a certain state represent nine bidding preferences and tendencies of each microgrid. Given that the SDR within one state may vary, the nine actions are also SDR-based and not exactly the same for a given state.
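As an illustration, the following sketch builds the 3×3 grid of candidate actions around a basic action; the offsets $\xi$ and $\tau$ are passed in as parameters, and all numeric values are illustrative.

```python
def action_grid(p_basic, q_basic, xi, tau):
    """Return the 3x3 matrix of candidate (price, quantity) actions:
    the basic action plus its closest neighbors in each dimension."""
    prices = [p_basic - xi, p_basic, p_basic + xi]   # symmetric price search
    qtys = [q_basic - tau, q_basic, q_basic + tau]   # symmetric quantity search
    return [[(p, q) for q in qtys] for p in prices]

for row in action_grid(p_basic=0.52, q_basic=40.0, xi=0.01, tau=5.0):
    print(row)
```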
Q-Values and Rewards
The goal of the Q-learning algorithm for bidding strategy optimization is to choose appropriate actions under different states for each MGO, and the Q-values indicate the long-term values of state-action pairs. In conventional Q-learning, the Q-values for state-action pairs are arranged in a so-called Q-table. However, based on the two-dimensional action space described in the previous section, a Q-cube framework for the Q-learning algorithm is proposed, as shown in Figure 6, in which the colors of the state slices correspond to those in Figure 5.
Figure 6. A Q-cube framework designed for the Q-learning algorithm.
The Q-value of taking one bidding action under one certain state is located in this Q-cube, as indicated by the small blue cube inside. Strictly speaking, the proposed Q-cube is a continuous three-dimensional space, but for practical purposes we discretize the problem domain by considering eight states, three bidding prices, and three bidding quantities in this paper.
Each MGO has a unique Q-cube describing its Q-value distribution over the proposed problem domain. The Q-values in the Q-cube are not reset to zero at the end of each time slot but are used as the initial configuration of the next time slot. This rolling iteration of the Q-cube accumulates bidding experience in the energy trading market.
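In implementation terms, each MGO's Q-cube can simply be an 8×3×3 array that persists across time slots; the following sketch (with illustrative names) shows such a representation.

```python
import numpy as np

N_STATES, N_PRICE, N_QTY = 8, 3, 3   # eight SDR states, 3x3 action grid

class QCube:
    """Per-MGO Q-value store; values persist across time slots so that
    bidding experience accumulates instead of being reset."""
    def __init__(self):
        self.q = np.zeros((N_STATES, N_PRICE, N_QTY))

    def value(self, state, price_idx, qty_idx):
        return self.q[state, price_idx, qty_idx]

cubes = {mgo_id: QCube() for mgo_id in range(14)}  # 14 microgrids in the case study
print(cubes[0].q.shape)  # (8, 3, 3)
```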
r(s, a) is the reward function for adopting action a in state s. The selection of the reward function is crucial, since it induces the behavior of the MGOs. Since we consider the dual effects of bidding price and quantity in the QLCDA, both contributions of adopting a certain action should be taken into account in the reward function. The mathematical expression of the reward function is presented in Equation (21), where $\omega$ is the weighting factor between bidding price and quantity. As price is the decisive factor in whether a deal is closed, we pay more attention to the bidding price, so $\omega$ is usually set greater than 0.5. $r_p(s, a)$ and $r_q(s, a)$ represent the contributions of the bidding price and quantity updates to the reward function; all of their variable definitions are the same as those in Equations (10)-(16).
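A minimal sketch of the weighted reward in Equation (21), assuming the convex combination $r = \omega r_p + (1-\omega) r_q$; since the preserved text does not show Equations (21)-(23) explicitly, this composition and the sample values are assumptions.

```python
def reward(r_p, r_q, omega=0.6):
    """Weighted reward; omega > 0.5 emphasizes the price contribution."""
    return omega * r_p + (1.0 - omega) * r_q

print(reward(r_p=0.8, r_q=0.3))  # 0.6*0.8 + 0.4*0.3 = 0.6
```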
Update Mechanism of the Proposed Q-Cube Framework
In the proposed Q-learning based continuous double auction energy trading market, the two dimensions of an MGO's action play distinct roles: the bidding price is the key factor in deciding whether a deal is closed, while the bidding quantity affects the real-time SDR of the overall market. Meanwhile, the SDR (as the MGOs' state) has a decisive influence on the MGOs' actions through the updating of Q-values. The coupling relationship between the MGOs' actions and the market SDR is modeled in this section, as shown in Figure 7. An MGO takes action $a^{n-1}$ in round n − 1 and the state transfers from $s^{n-1}$ to $s^n$. After calculating the reward and updating the Q-value, the probability of choosing each action in the action space is modified. Afterwards, given the new Q-cube and the market SDR, the MGO may choose $a^n$ as its action in round n and repeat the above process. Therefore, the state-action pair of each MGO in each bidding round is formulated in a spiral-iteration manner, considering both local private information and the public environment. The Q-cube framework connects the state perception process and the decision-making process, which is the core innovation of this paper.
Q-Value Update
The common Q-value update rule for the model-free Q-learning algorithm with learning rate $\alpha$ and discount factor $\gamma$ is given as:

$$Q^{n+1}(s^n_i, a^n_i) = (1 - \alpha)\,Q^n(s^n_i, a^n_i) + \alpha\,\big[r(s^n_i, a^n_i) + \gamma \max_{a} Q^n(s^{n+1}_i, a)\big],$$

where $Q^{n+1}(s^n_i, a^n_i)$ represents the updated Q-value for MGO i adopting action $a^n_i$ under state $s^n_i$ in the nth bidding round. Upon observing the subsequent state $s^{n+1}_i$ and the reward $r(s^n_i, a^n_i)$, the Q-value is immediately updated. We adopt this common Q-value update rule for Q-learning in this paper.
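The update rule maps directly onto the Q-cube; the sketch below is a minimal implementation, with the action addressed by its (price, quantity) indices and all numeric values illustrative.

```python
import numpy as np

def q_update(q_cube, s, a, r, s_next, alpha, gamma):
    """Standard model-free Q-learning update applied to one Q-cube.

    q_cube: array of shape (8, 3, 3); s, s_next: state indices;
    a: (price_idx, qty_idx) of the chosen action; r: observed reward."""
    best_next = np.max(q_cube[s_next])            # value of acting greedily next
    td_target = r + gamma * best_next
    q_cube[s][a] = (1 - alpha) * q_cube[s][a] + alpha * td_target
    return q_cube

q = np.zeros((8, 3, 3))
q_update(q, s=4, a=(1, 1), r=0.6, s_next=5, alpha=0.3, gamma=0.7)
print(q[4, 1, 1])  # 0.18
```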
The learning rate $\alpha$ and discount factor $\gamma$ are two critical parameters of the MGOs, as they reflect each MGO's bidding preference. The learning rate defines how much the updated Q-value learns from the new state-action pair: $\alpha = 0$ means the MGO learns nothing from new market bidding information, while $\alpha = 1$ means that the Q-value of the new state-action pair is the only information that matters. The discount factor defines the importance of future revenues: an MGO whose $\gamma$ is near 0 is regarded as a short-sighted agent, as it only cares about short-term profits, while MGOs whose $\gamma$ is close to 1 tend to wait for the proper time to secure more future revenue.
Action Update
In each round of the QLCDA, each MGO first calculates its basic action based on the market SDR and its historical trading records, shown as red balls in the action space in Figure 8; the colors of the action space slices represent the market state. The neighborhood actions are then formed in the action space, shown as blue blocks. A selection process is carried out by creating the probability matrix over the nine optional actions. In Equation (25), the action matrix $A^n_i$ is composed by combining the optional bidding prices and quantities. Correspondingly, the elements of the probability matrix $Pro^n_i$ are formed according to an 'ε-greedy' strategy. The probability $x_{bb}$ of the basic action $a^{bb}_i = (p^{basic}_i, q^{basic}_i)$ is given preferential treatment and equals $\varepsilon$. For each microgrid, the setting of $\varepsilon$ represents its degree of attention to the theoretically optimal bidding action, which differs between microgrids. The probabilities of the other neighborhood actions are obtained by weighted sharing of the remaining probability according to their Q-values, as in Equation (26); the nine action probabilities sum to 1. For example, $x_{-+}$ represents the probability of choosing action $a^{-+}_i = (p^-_i, q^+_i)$ under the current state. This selection mechanism means that all MGOs have a higher probability of choosing actions with higher Q-values in each round of the QLCDA. By putting the MGOs' best possible local actions together, the most suitable actions for the current global state are generated in a distributed, non-cooperative way.
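The following sketch realizes this selection rule for one state slice of the Q-cube, assuming non-negative Q-values and a uniform fallback when all neighbor Q-values are zero (the fallback is an assumption, as Equations (26)-(27) are not reproduced in full here).

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def action_probabilities(q_slice, epsilon):
    """Probability matrix over the 3x3 action grid for the current state.

    q_slice: (3, 3) Q-values for this state (assumed non-negative); the
    basic action sits at (1, 1) and receives probability epsilon; the
    remaining mass is shared among the eight neighbors in proportion to
    their Q-values (uniform if all zero)."""
    probs = np.zeros((3, 3))
    probs[1, 1] = epsilon                 # preferential treatment of basic action
    neighbors = q_slice.copy()
    neighbors[1, 1] = 0.0
    total = neighbors.sum()
    if total > 0:
        probs += (1.0 - epsilon) * neighbors / total
    else:
        probs[probs == 0] = (1.0 - epsilon) / 8.0   # uniform fallback
    return probs

def choose_action(q_slice, epsilon):
    p = action_probabilities(q_slice, epsilon).ravel()
    idx = rng.choice(9, p=p)
    return divmod(idx, 3)  # (price index, quantity index)

q_slice = np.array([[0.1, 0.2, 0.1], [0.0, 0.5, 0.3], [0.2, 0.1, 0.0]])
print(action_probabilities(q_slice, epsilon=0.5).round(3))
print(choose_action(q_slice, epsilon=0.5))
```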
Case Studies and Simulation Results
In this section, we investigate the performance of the Q-learning algorithm for the continuous double auction among microgrids by Monte Carlo simulation. The proposed algorithm is tested on a realistic case in Guizhou Province, China: the distribution network near Hongfeng Lake consists of 14 microgrids with different scales and internal configurations. The detailed topology of the networked microgrids is given in Figure 9. As power flow calculation and safety checks are not the focus of this paper, distance information and transmission prices in this distribution network are not provided here; the interested reader may refer to [24] for more details.
We simulate this non-cooperative energy trading market over a scheduling cycle of 24 h, with the QLCDA performed every ∆t = 0.5 h; a scheduling cycle starts at 9:00 a.m. The internal coordinated dispatch of each microgrid is accomplished in advance, and the dispatch results are treated as the initial bidding information in the QLCDA. The BESS properties of the 14 microgrids are provided in Table A1, including capacity, initial SOC, charge and discharge restrictions, and charge and discharge efficiencies. Guizhou Grid adopts a peak/flat/valley pricing strategy for energy trading, which divides the 24-hour scheduling cycle into three types of time intervals. Surplus energy injected into the grid is paid 0.300 CNY per kWh throughout the day, while buying energy from the grid is charged at 1.197/0.744/0.356 CNY per kWh in peak/flat/valley intervals, respectively (see Table A2). In order to simulate the microgrids' preferences in decision-making, different risk strategies are adopted by setting diverse Q-learning parameters; the 14 microgrids' values of learning rate, discount factor, and greedy degree are given in Table A3, and three risk strategies are defined and discussed according to these parameter choices. The other hyperparameters of the proposed Q-learning algorithm are given in Table A4. The proposed energy trading market model and QLCDA algorithm are implemented and simulated using MATLAB R2019a on an Intel Core i7-4790 CPU at 3.60 GHz. Three case studies on bidding performance and profit comparison are discussed in this section. All three case studies are repeated 30 times; the bidding results of one particular Monte Carlo simulation are analyzed in detail in Case Studies 1 and 2, and the average bidding profits are used for comparison with the profits of two other energy trading mechanisms in Case Study 3.
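For reference, the tariff structure can be captured in a few lines of Python; the hour boundaries of the peak/flat/valley intervals below are illustrative placeholders, since the actual boundaries live in Table A2.

```python
# Guizhou Grid peak/flat/valley tariff used in the case study (CNY/kWh).
GRID_SELL_PRICE = 0.300   # feed-in price, constant all day

def grid_buy_price(hour):
    """Return the purchase price for a given hour; boundaries are assumed."""
    if 9 <= hour < 12 or 17 <= hour < 22:   # assumed peak hours
        return 1.197
    if 7 <= hour < 9 or 12 <= hour < 17:    # assumed flat hours
        return 0.744
    return 0.356                            # remaining valley hours

print([grid_buy_price(h) for h in (3, 8, 10, 20)])  # [0.356, 0.744, 1.197, 1.197]
```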
Case Study 1: Bidding Performance of the Overall Energy Trading Market
The proposed continuous double auction energy trading mechanism achieves significant effects on the energy trading among microgrids. Figure 10 shows the price bidding process in Time Slot 12. In Figure 10a, the bidding prices of all microgrids over the whole time slot are presented. Starting from different initial bidding prices, the slopes of the price curves indicate the different bidding strategies of the MGOs. As the bidding price is the key factor in deciding whether a deal is closed, the different intersection points of the price curves represent deals under various market conditions. Buyer/seller MGOs with a stronger willingness to reach deals prefer to raise/drop their prices quickly, expecting their energy demand/supply to be satisfied in the early stage of a time slot. Although patient MGOs would like to wait until the deadline for a better trading price, they have to endure fierce price competition near the deadline and face the possibility of having no energy to trade. Figure 10b shows the bidding price details in rounds 105-135 of Time Slot 12. MG 11 had not traded energy with other microgrids for a long time according to its historical records; with a stronger willingness to sell energy, MG 11 drops its bidding price quickly and reaches a deal with MG 4 at the price of 0.530 CNY/kWh. The unmet energy demand of MG 4 is satisfied by MG 13 at a higher price (0.579 CNY/kWh). MG 6 raises its bidding price slowly and closes deals with MG 9 and MG 10 at prices of 0.481 CNY/kWh and 0.489 CNY/kWh, respectively. However, 27.016 kWh of its energy demand has to be bought from the grid at a higher price (0.744 CNY/kWh), as all the energy supply from other microgrids is sold out. This shows a trade-off between price and trading opportunity: an MGO might be eager to close a deal, but the trading price might not be satisfactory. On the other hand, the energy trading market follows the principle of 'first bid, first deal', meaning the closer the time to the deadline, the less energy one is able to trade.
A comparison of the clear power curves before and after the CDA is presented in Figure 11. Enhanced by the proposed CDA mechanism, the distribution network achieves better performance in balancing energy supply and demand. As a result of the more balanced market conditions, more energy is transacted within the distribution network rather than traded with the grid, which reduces long-distance energy transmission losses. With the help of the BESS, an alternative form of 'demand response' is performed among the microgrids by exerting the potential capacity of elastic loads, which extends the concept of demand response from time-slot-based to multi-agent-based through the CDA. In addition, the trading prices are more reasonable and profitable, taking care of each MGO's personal preferences. The comparison of trading quantities before and after the proposed CDA is given in Table 1. A significant effect is obtained by adopting the CDA, as the trading quantity with the grid decreases to varying degrees. For example, only 10.8% of the energy demand of MG 3 is provided by the grid, while microgrids with heavy demand such as MG 4 and MG 6 still depend on the grid to a large extent, at 65.5% and 57.1%, respectively. The seller microgrids' dependency on the grid, with an average share of 26.1%, is clearly lower than that of the buyer microgrids, as they prefer to sell energy within the distribution network. The BESS storage change and (dis)charge energy loss are also presented in Table 1, from which we can see that most of the microgrids' BESSs reach a higher SOC at the end of one scheduling cycle. The larger the BESS capacity and the more active the participation in the trading market, the greater the BESS (dis)charge energy loss. The bidding results of specific microgrids with different roles, including bidding price and quantity, are presented below. Figure 12 gives the energy trading prices of MG 4 and MG 12. MG 4 plays the role of buyer over the whole scheduling cycle and successfully reaches deals with other microgrids in most time slots, as shown in Figure 12a; in time slots without deals, it buys energy from the grid at higher prices. During the valley interval, although the grid purchase price is low (0.356 CNY/kWh), there are still plenty of opportunities to trade with other microgrids given the real-time SDR, and MG 4 succeeds in buying energy at lower prices in almost all time slots of this interval. Unlike MG 4, MG 12 plays two roles in different time slots. The detailed trading prices of MG 12 in Time Slots 9 to 32 are presented in Figure 12b. Good performance is obtained in both roles: during buyer intervals, MG 12 reaches deals with other microgrids at prices lower than the grid's, while in seller intervals it sells energy in every time slot for higher profits. The overall profit of MG 12 rose by 33.9% after joining the CDA energy trading market. For MG 7, the bidding performance on quantity is presented in Figure 13a. As a buyer microgrid over the whole scheduling cycle, the gaps between its original bidding quantity curve and its actual trading quantity curve correspond to the real-time SDR. When SDR ≥ 1 (the original clear power ≥ 0, shown above the blue horizontal line in Figure 13a), MG 7 raises its trading quantity and stores more energy in its BESS to absorb the surplus energy in the market.
During the middle time slots, when SDR < 1 (the original clear power < 0), part of the energy demand is provided by its own BESS, which helps to balance the excess energy demand in the market. The two curves coincide at the end of the scheduling cycle, as the BESS stores enough energy in Time Slots 32 to 38 and the SOC is near 1. The same characteristics can be found in the bidding performance of MG 12. In Figure 13b, when energy demand exceeds supply (below the purple horizontal line), the BESS of MG 12 discharges to satisfy the energy demand; more energy is sold in these time slots to reach a better market SDR, while during the nighttime MG 12 charges the surplus energy into its BESS rather than selling it to the grid. It is evident that the actual trading quantity curves cohere better with the real-time SDR than the original bidding quantity curves from the standpoints of both buyer and seller microgrids (Figure 13 plots the original bidding quantity, actual trading quantity, and BESS quantity change of MG 7, in buying kWh, and MG 12, in selling kWh, per 0.5 h time slot). The BESS SOC of MG 7 and MG 12 is presented in Figure 14, from which we can see that the trend of the SOC curves coheres with that of the SDR. When SDR < 1, both MG 7 and MG 12 discharge their BESSs to compensate for the lack of energy supply; the BESS of MG 12 releases all its energy and its SOC reaches 0 by Time Slot 16. When the energy supply exceeds demand during the nighttime, however, the BESSs start to charge and save surplus energy for later use; the SOC of MG 7 reaches 100% by Time Slot 40. In contrast to earlier research [25], the charge and discharge behaviors of the BESS are restricted by ramp constraints, which makes the simulation results closer to reality. Due to the BESS capacity and (dis)charge energy loss, the regulatory ability of the BESS on the energy trading market is limited; when SOC = 0 or SOC = 1, internal re-scheduling of each microgrid could be invoked to unlock further bidding potential.
Case Study 2: Effectiveness Verification of the Proposed Q-Cube Framework
The Q-cube of an MGO is updated in each round of the whole scheduling cycle, and the Q-values are iteratively accumulated following the proposed update rules. In order to display this distribution in three-dimensional space, the bidding actions are abstracted into nine actions. MG 6 and MG 13 are chosen as examples of the risk-taking strategy and the conservative strategy, respectively. The Q-value distributions of these two microgrids are shown in Figure 15. As a risk-taker, MG 6 has a non-uniform Q-value distribution in its Q-cube, with a slight trench in the middle of the action dimension, as shown in Figure 15a. Under the Q-cube framework proposed in this paper, the low value of MG 6's greedy degree (ε = 0.1680) results in its curiosity about the neighborhood actions of the basic action (action 5) in all states; the neighborhood actions are given more opportunities to accumulate Q-values by the action selection mechanism. The eagerness to obtain more future profits aggravates this phenomenon, as the discount factor (γ = 0.6721) is high, while the low learning rate (α = 0.2617) indicates that new bidding information in the real-time market has little impact on the choice of actions. On the contrary, MG 13 behaves conservatively in the QLCDA process; its Q-value distribution in the Q-cube is presented in Figure 15b. MG 13 keeps in touch with the latest market information and prefers to choose the basic action in states near SDR = 1, consistent with its high learning rate (α = 0.6812) and greedy degree (ε = 0.8462). It is satisfied with current revenues and has little interest in exploring new actions, so its discount factor is low (γ = 0.333). Consequently, there is an obvious hump on the surface of the Q-value plane around the middle part (near Q(state 4, action 5) and Q(state 5, action 5)), showing that MG 13 is rational and not greedy.
The iterated Q-values of the different microgrids demonstrate that the proposed Q-cube framework for the Q-learning algorithm is capable of effectively reflecting the microgrids' characteristics.
Case Study 3: Profit Analysis on Different Energy Trading Mechanisms
To verify the performance of the proposed QLCDA, a profit analysis of different energy trading mechanisms is carried out, with the previous work [19] on a peer-to-peer energy trading mechanism introduced for comparison. As shown in Table 2, the three energy trading mechanisms are simulated on the same Guizhou Grid case 30 times, and the average energy trading profits are calculated and analyzed for statistical significance; negative values indicate costs paid to peer microgrids and the DNO. The proposed QLCDA mechanism proves to have superior performance over the traditional energy trading mechanism, as expected. In addition, most microgrids obtain a certain degree of profit increase compared to the P2P mechanism. The profits of seller microgrids generally rise, as clean energy generated during valley intervals can be stored until needed rather than sold to the grid at lower prices. A 65.7% and 10.9% rise in the overall profit of the distribution network is achieved by the QLCDA mechanism compared with the traditional energy trading mechanism and the P2P mechanism, respectively.
However, for some buyer microgrids (particularly MG 6), the profits under the QLCDA mechanism are lower than under the P2P mechanism. This can be explained as follows: (1) As presented in Table 1, the trading quantity is adjustable in the QLCDA mechanism, and most of the microgrids end one scheduling cycle with a higher BESS SOC, with MG 6 storing the largest quantity of energy (151.1 kWh). The profits from selling this stored energy are not counted in Table 2, whereas the P2P mechanism does not take into account the effect of the BESS and of changes in bidding quantity. (2) MG 6 is a risk-taker according to its Q-learning parameters. Its low greedy degree (ε = 0.1680) and learning rate (α = 0.2617) indicate that MG 6 cares less about new bidding information and wants to explore more potential actions rather than sticking to the basic action, and its high discount factor (γ = 0.6721) reflects its eagerness for more future profits; it therefore prefers to keep its BESS at a high SOC and to seek deals with lower trading prices near the deadline. From another standpoint, the profit analysis confirms the effectiveness of the proposed Q-cube framework for the Q-learning algorithm on energy trading problems. Considering the equipment and operating costs of the BESS, the proposed QLCDA mechanism might not always be the best choice for energy trading among microgrids, but the simulation results demonstrate its potential for increasing the profits of microgrids with different configurations and preferences.
Conclusions
To better describe the characteristics of the future electricity market, a non-cooperative continuous double auction mechanism considering the coupling relationship between bidding price and quantity was developed in this paper to facilitate energy trading among microgrids in the distribution network. An alternative form of 'demand response' is performed in the proposed energy trading mechanism by exerting the potential capacity of the BESS, which extends the concept of demand response from time-based to multi-agent-based. The Q-learning algorithm was introduced into the CDA mechanism as a decision-making method for each microgrid. To address existing shortcomings in the application of Q-learning algorithms to power systems, a non-tabular framework of Q-values considering the two dimensions of the bidding action was proposed as a Q-cube; in addition, the corresponding parameter settings and state-action architecture were designed to reflect the microgrids' personalized bidding preferences and to support rational decisions according to the real-time status of the networked microgrids. Simulations on a realistic case from Hongfeng Lake, Guizhou Province, China prove the efficiency and applicability of the proposed CDA mechanism and Q-cube framework. All of the microgrids are able to respond appropriately to the global real-time supply and demand relationship without disclosing private information. A 65.7% and 10.9% increase in the overall profit of the distribution network is achieved by applying the QLCDA mechanism compared with the traditional energy trading mechanism and the P2P energy trading mechanism, respectively. In addition, the Q-value distribution in the proposed Q-cube reflects the microgrids' bidding behaviors and preferences well, in both the theoretical analysis and the simulation results. As demonstrated in this paper, the proposed Q-cube framework of a Q-learning algorithm for a continuous double auction mechanism can be applied to further energy trading markets in the future EI.
Some limitations of the proposed Q-cube framework remain to be discussed: the interaction between bidding price and quantity should be described better, as many other factors can influence this coupling relationship, and it is still difficult to summarize the microgrids' energy bidding preferences with the existing parameters. Moreover, the power flow calculation should be considered synchronously, as the traded energy quantities might cause safety issues in the distribution network. In future work, a two-layer energy bidding architecture could be investigated, considering both the QLCDA among microgrids and the internal coordinated dispatch inside microgrids; the interaction of these two layers is worth studying. Power transmission limits should be considered to ensure the safety of the energy market. In addition, further extensions will address the time-varying setting of the QL parameters and a more appropriate description of the reward function.

Acknowledgments: The authors thank Ke Sun and Yifan Cheng for careful reading and many helpful suggestions to improve the presentation of this paper.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript. Table A1 shows the BESS properties of the 14 microgrids in the Guizhou Grid, including capacity, initial SOC, charge and discharge restrictions, and charge and discharge efficiencies. The peak/flat/valley electricity prices formulated by Guizhou Grid, China are presented in Table A2, which divides a day into three types of time intervals. The learning rate α, discount factor γ, and greedy degree ε of the 14 microgrids are given in Table A3. The values of the hyperparameters that appear in this paper are given in Table A4.
Exosomes: Small EVs with Large Immunomodulatory Effect in Glioblastoma
Glioblastomas are among the most aggressive tumors and have low survival rates. They are characterized by the ability to create a highly immunosuppressive tumor microenvironment. Exosomes, small extracellular vesicles (EVs), mediate intercellular communication in the tumor microenvironment by transporting various biomolecules (RNA, DNA, proteins, and lipids), and therefore play a prominent role in tumor proliferation, differentiation, metastasis, and resistance to chemotherapy or radiation. Exosomes are found in all body fluids and can cross the blood–brain barrier due to their nanoscale size. Recent studies have highlighted the multiple influences of tumor-derived exosomes on immune cells. Owing to their structural and functional properties, exosomes can be an important instrument for gaining a better molecular understanding of tumors. Furthermore, they qualify not only as diagnostic and prognostic markers, but also as tools in therapies specifically targeting aggressive tumor cells, like glioblastomas.
Background
Glioblastoma (GBM), a primary brain tumor in adults, is among the most aggressive tumors known. Worldwide, 100,000 people are diagnosed with glioma each year [1], and most suffer from its highly aggressive form, glioblastoma (grade IV); in fact, it accounts for 52% of all primary brain tumors [1]. Despite enormous advances in the oncological disciplines involved, the prognosis of affected patients remains poor [2]. Patients still suffer an aggressive course, and survival rates have improved only marginally over the last three decades [3,4]; the median overall survival from diagnosis to death is approximately 15 months [5]. The reasons for this aggressive behavior are the subject of intensive current research efforts.
One of many difficulties with this tumor entity is that the tumor is genetically and epigenetically very heterogeneous. Deep sequencing of the genome and transcriptome, as well as analyses of the epigenome, have shown a wide range of genetic and epigenetic variations within the same glioblastoma, so that the development of therapies to eliminate all tumor cells is challenging [6,7]. This genetic and phenotypic heterogeneity was also taken into account by the WHO classification of 2016 [8], which already incorporates certain mutations.
Glioblastoma and its Immunosuppressive TME
Glioblastomas establish a highly immunosuppressive tumor microenvironment (TME) [9], communicating with normal brain cells to create a microenvironment that supports tumor progression. This affects practically every cell in the TME.
Exosomes: The Smallest Group of EVs
Based on their size, biochemical properties, and origin, extracellular vesicles (EVs) are classified into three subgroups: apoptotic bodies, microvesicles, and exosomes [28]. The latter are the smallest group of EVs, comparable to viruses in size (30-150 nm) (Figure 1) [29]. Exosomes are released by practically all cells and found in nearly all body fluids [30]. Exosomes differ from other EVs by their biogenesis through the endosomal compartment, and thus carry, for example, tsg101 as a typical marker [31]. They carry many of the molecular characteristics of the parental cell and, due to their small size, can reach nearly every part of the body. To illustrate this, we can compare exosomes to the hemerodromes of ancient Greece, who delivered messages over long distances. The most famous one might be Pheidippides, who covered around 240 km in two days to ask for military support; after this huge run, he is said to have collapsed and died [32]. Exosomes are also mostly used up after transmitting their message [31]. In order to keep the communication system running and to exert a constant influence on the organism, and in particular on the immune system, tumor cells constantly produce large amounts of exosomes, called tumor-derived exosomes (TEX). They acquire lipids, proteins, and nucleic acids during their formation in the parental cell, which in some respects makes them a kind of miniature version of the mother cell [33]. This property can in turn be of great interest for characterizing the properties of the tumor. For example, Thakur et al. demonstrated that exosomal DNA (exoDNA) represents the entire genome and reflects the mutational status of the parental tumor cells [34]. According to well-established scientific opinion and our own findings, exosomes are able to interact with immune cells in a way that aids tumor immune escape. For example, in a paper published in 2017, we showed that TEX affect the suppressive activities of regulatory T cells (Tregs) by utilizing receptors on the surface of recipient cells [35]. In another study, we investigated how TEX are not internalized by T cells, yet the signals they carry are delivered to cell surface receptors and thereby modulate the gene expression and functions of human T lymphocytes; Tregs were shown to be more sensitive to TEX-mediated effects than other T cell subsets [36].
Exosomes in the TME of Glioblastoma: Potent Modulators of the Immune Response
A typical feature of tumor progression is the inflammatory response in the TME with an accumulation of macrophages [15]. Depending on their activation status, they exert an anti-tumor or a pro-tumor effect. As mentioned above, pro-tumor M2 macrophages seem to predominate in the TME [37]. This knowledge, however, can only help us in the development of therapeutic approaches if we understand what causes this shift to the immunosuppressive form of macrophages. Gabrusiewicz et al. [38] showed an interesting effect of glioblastoma stem cell-derived exosomes. After uptake into macrophages, a reorganization of the actin cytoskeleton occurred, which resulted in a shift towards the immunosuppressive M2 phenotype, including expression of PD-L1. The authors concluded that glioblastoma stem cell-derived exosomes qualify as potent modulators of the immunosuppressive tumor microenvironment.
This assumption was also supported by a recent study on the molecular and immunosuppressive properties of glioblastoma-derived exosomes (GDEs) [27]. For this purpose, exosomes from three human GBM cell lines were investigated. All exosomes obtained carried known immunosuppressive markers, including CD39, CD73, FasL, CTLA-4, and TRAIL. Coincubation experiments with NK cells, CD4+ T cells, and CD8+ T cells revealed downregulation of activation status, reduced cytokine production, and enhanced apoptosis of CD8+ T cells. Human macrophages changed their phenotype and expression pattern after co-incubation with GDEs toward M2 macrophages with typical markers, including CD206, IL-10, LAP, and arginase-1. Other upregulated markers in this population were CD39, PD-1, and EGFR. The control group showed no change in expression. These observations were confirmed at the pathway level. To further substantiate the in vitro observations, GDEs were injected into normal mice, and the concentrations of CD8+ T cells and M1-like macrophages were measured in the spleen. Both showed a significant reduction, accompanied by an increase in M2-like macrophages. Juliana Azambuja and colleagues [9] thus impressively demonstrated how exosomes from glioblastomas exert a suppressive effect on different types of immune cells.
This effect does not necessarily have to be exerted in a direct way. Rather, it resembles a kind of snowball effect, in which exosomes seem to be a driving force. This is also illustrated by data from the Pittsburgh group around Theresa L. Whiteside [27], who showed in 2020 that exosomes from glioblastoma triggered a conversion of macrophages to tumor-associated macrophages (TAMs). TAM-derived exosomes, second-row exosomes so to speak, appear to play a significant role in tumor progression. Among the proteins upregulated in TAM-derived exosomes was arginase-1. Targeted arginase inhibition suppressed GBM proliferation mediated by TAM-derived exosomes. The authors concluded that blockade of arginase could be a potential therapeutic approach, though at the same time they acknowledged that TAM-derived exosomes also carry markers that influence glioma cell migration, resistance, invasion, and other biological functions [27]. Yet arginase-1 is certainly an interesting player in the network of exosome-triggered tumor progression, not least because disorders of L-arginine metabolism and their effect on both carcinogenesis and the activity of the antitumor immune system have recently become a focus of interest [39]. For example, Czystowska-Kuzmicz et al. [40] demonstrated that arginase-1+ exosomes from an ovarian cancer cell line reduced the proliferation of murine T cells and reduced the expression levels of CD3ζ and CD3ε chains. Therefore, it has already been suggested to investigate arginase-1 as a potential target in tumor therapy [39,41]. This stands in contrast to a study by Bian et al., which showed that tumor-induced myeloid-derived suppressor cells (MDSC) do not generally express arginase-1, nor is this required for MDSC-mediated inhibition of T cells [42].
However, perhaps these results are only seemingly contradictory. It is becoming increasingly apparent that immunomodulatory effects often take complex detours and have limited reproducibility in in vitro models. Moreover, Domenis et al. [43] showed that the suppressive effect of GDEs on T cells is exerted indirectly, by influencing monocyte maturation, rather than through direct interaction with T cells. We therefore still consider the role of arginase-1-positive exosomes an interesting approach. Indeed, numerous immunomodulatory molecules are found in exosomes isolated from glioblastoma patients, including antigen-presenting molecules, tumor antigens, intercellular adhesion molecules, and TGF-ß [44]. These effects are not concentrated on only one group of immune cells but affect all groups of immune cells [45,46]. In addition to macrophages, NK cells, and T cells, B cells also appear to be affected in their function [47].
B Cells in the TME of Glioblastoma: A Potentially Underestimated Immunocyte
To our knowledge, little is known about the role of B cells in gliomas, and especially about the effect of exosomes on B cells in the tumor microenvironment. Colleagues from Ulm, Germany, recently studied the effect of circulating exosomes on B cells in head and neck squamous cell carcinoma (HNSCC) [48]. For this, they obtained B cells, as well as exosomes, from peripheral blood samples of healthy donors and of patients with HNSCC. As we also observed [49,50], the protein concentration of circulating exosomes was higher in the patients than in the healthy donors. They concluded that plasma-derived exosomes exert inhibitory effects on the function of healthy B cells. Notably, there was little difference in the inhibitory effects of exosomes between the two groups, suggesting a physiological B cell-inhibitory role of circulating exosomes [48]. It is likely that the situation is similar in other tumor entities. Among the few other studies on this topic is one by Catalina Lee-Chang, published in December 2019 [47], in which she and her colleagues investigated B cell-mediated immunosuppression in glioblastoma. Forty percent of patients showed B cell tumor infiltration. This supports other studies, which have also previously shown that B cells are present in the tumor microenvironment [51]. Nevertheless, it remains to be clarified whether, and to what extent, they contribute to tumor progression.
To address this question, Lee-Chang et al. [47] studied human and mouse GBM-associated regulatory B cells (Bregs) with respect to their immunosuppressive potency on activated CD8+ T cells. Local application of B cell-depleting immunotherapy with CD20 antibodies resulted in highly significantly improved overall survival in animals, indicating a so far underestimated role of Bregs in tumor progression. Further investigation revealed an interesting mechanism: myeloid-derived suppressor cells (MDSCs) in the TME of glioblastomas appear to transfer membrane-bound PD-L1 to naive B cells via microvesicles, thus shifting them toward the regulatory subtype. Upon uptake of these vesicles, the resulting Bregs were able to inhibit CD8+ T cell activation, and thus contribute to tumor immune escape. CD163-bearing MVs from myeloid cells were also observed to be preferentially taken up by B cells, as well as by Foxp3+ Tregs, a finding that implies that the transfer of PD-L1-bearing MVs may be a universal process of intercellular communication between regulatory cells [47].
The contribution of B cells to tumor progression through tumor immune escape has long received little attention. This is due, not least, to the fact that B cells were for a long time regarded mainly as major effector cells of the humoral defense [52]. Only recently have the antibody-independent functions of B cells received more attention. In a study published in 2016, we addressed the question of the extent to which B cells contribute to immune regulation. One of the difficulties of this topic is that the group of regulatory B cells is very heterogeneous and not yet a conclusively categorized subtype [53]. For some time now, the field has begun to abandon the rigid classification into different groups of lymphocytes as the definitive form of differentiation. Rather, the different classes of lymphocytes appear to be adaptations to the environment with high plasticity [35,36,[53][54][55].
Adenosine (ADO) in the TME: Potent Mediator of Immunosuppression
ATP is considered the energy currency of the body and is found everywhere in the organism. Particularly high concentrations are found during increased cell turnover [56,57]. Under physiological conditions, the extracellular concentration is around 30-200 nM but may increase a hundredfold in the context of hypoxia, inflammatory reactions, or in the tumor microenvironment [56]. While extracellular ATP promotes inflammation and is able to recruit immune cells, ADO has an inhibitory effect on the immune system [58]. The purine nucleoside is involved in angiogenesis and suppression of the antitumor functions of effector T cells, while enhancing the functions of suppressor cells (Treg and MDSC) [59,60], resulting in tumor progression [61,62]. As ADO is abundant in the TME, due to cell death, it is thought to be one of the most potent mediators in the TME [63].
It has already been observed that TEX have a significant effect on T cells by modulating them differentially [36,[64][65][66]. For instance, TEX were previously reported to inhibit the functions of human activated CD8+ T lymphocytes by inducing their apoptosis via the Fas/FasL pathway [66]. One of our own studies provided evidence for differential exosome-mediated changes in gene expression levels in resting vs. activated T cells [36]. Nevertheless, the impact on B cells is less well known and deserves further attention, as B cells are among the central effector cells of the adaptive immune response [64,67,68]. It is well known that B cells utilize the adenosine (ADO) pathway [69]. As such, they express both CD39 and CD73, ectonucleotidases that can degrade extracellular ATP to ADO in two sub-steps.
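For orientation, the two sub-steps referred to here follow the canonical ectonucleotidase cascade, in which CD39 carries out both dephosphorylation steps down to AMP and CD73 completes the conversion to adenosine (a textbook scheme, not a finding of the studies cited above):

$$\mathrm{ATP} \xrightarrow{\text{CD39}} \mathrm{ADP} \xrightarrow{\text{CD39}} \mathrm{AMP} \xrightarrow{\text{CD73}} \mathrm{ADO}$$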
Current in-house studies, among others, shed light on the effect of tumor-derived exosomes (TEX) on the recently described CD39high B cell subtype as a function of ATP concentration. This regulatory B cell subset is characterized by high expression levels of the enzyme CD39 on the cell surface (CD39high B cells) [68,70,71]. This B cell subtype is thought to have immunosuppressive properties through secretion of anti-inflammatory cytokines such as IL-10, as well as inhibition of T effector cell proliferation and activation. These data are currently under submission but are mentioned here for completeness.
Interestingly, the CD39high B cell subset population increased upon activation of B cells with IL-4 and CD40L. Upon addition of TEX, the percentages of CD39+ and CD39high B cells in the total B cell population decreased. However, CD39high B cells expressed more CD39 per B cell after co-incubation with TEX. Furthermore, TEX were also shown to affect the expression of apoptosis-associated proteins in activated B cells.
The best-studied enzymes required to degrade extracellular ATP to ADO are CD39 and CD73. It has repeatedly been shown that CD39 and CD73 may be overexpressed in cancer cells, but also in various subsets of immune cells and stromal cells. The resulting increase in adenosine concentration in the TME leads, among other things, to an impairment of antitumor immunity [72]. CD39 has been considered the most important and rate-limiting enzyme for ATP degradation. Cancer therapies that specifically target CD39 are in development and are being analyzed in early clinical trials (ClinicalTrials.gov: NCT04306900, NCT03884556). Evidence is also growing that CD73 should be considered a potential target [73]. For example, Briceño et al. [74] recently published on the autocrine effect of CD73-mediated adenosine production, which limits the differentiation and metabolic fitness of CD8+ T cells. This finding is supported by the fact that CD73-deficient cells, on the one hand, showed increased glucose uptake and higher mitochondrial respiration and, on the other hand, achieved a more effective reduction in tumor burden than wild-type cells after adoptive transfer into B16.OVA melanoma-bearing mice.
Nonetheless, adenosine metabolism is a complex and incompletely understood pathway. Apart from the better-known enzymes mentioned above, CD39 and CD73, it has now become clear that other enzymes may also play important roles in purinergic signaling, including members of the ectonucleotide pyrophosphatase/phosphodiesterase (ENPP) family, tissue nonspecific alkaline phosphatase (TNAP), and adenosine deaminase (ADA) [75]. This could be a potential pitfall for ongoing clinical trials, as the desired anti-tumor effect of CD39-CD73 axis blockade could be undermined by an expansion of these alternative signaling pathways, leading to treatment resistance.
The ectoenzymes CD39 and CD73 are found not only on hematopoietic cells, mesenchymal stromal cells, cancer cells, and B cells [76], but also on tumor exosomes [77]. The latter appear to influence their own release in the sense of an autocrine effect via adenosine receptors [78]. Possibly, this leads to a high exosomal load. Our own, as yet unpublished, analyses of exosomal burden in HNSCC showed a significant correlation between exosomal protein concentration and the aggressiveness of the disease. Similarly, Pietrobono et al. [3] concluded that high extracellular levels of adenosine correlate with glioma aggressiveness. The underlying mechanisms suggest both direct interaction and modulation via detours through the mesenchymal stromal cell secretome [3]. Here, too, however, the relationships seem to be more complex, and one looks in vain for a simple connection: some previously published studies showed that millimolar concentrations of extracellular ADO significantly reduced the growth of pancreatic, hepatic, and colorectal cancers [79][80][81].
In general, the role of exosomes or EVs is still controversial. Some studies suggested that EVs can inhibit the immune system through adenosine and anti-inflammatory cytokine expression [43,82]. In contrast, there are also reports of EVs with antitumor activity, demonstrating immunogenic activity and the ability to induce T-cell-dependent antitumor immunity [83,84]. For example, in 2020 the research group led by Fabrício Figueiró demonstrated that glioma-derived extracellular vesicles expressing CD9, HSP70, CD39, and CD73, and producing adenosine, reduced glioma progression by modulating the tumor microenvironment; a reduction in tumor size occurred through effects on cell proliferation [85].
Consequently, the relationship seems to be part of a complex network of interactions, in which the effect exerted by ADO on the players involved in the tumor micromilieu depends on cell type, concentration, and type of receptor interaction [86]. In particular, the exact role of the individual adenosine receptors (A1, A2A, A2B) has not yet been conclusively determined.
Exosomes in Diagnostics: Highly Promising Candidates for Liquid Biopsy
Neuroradiology plays a central role in the diagnosis of gliomas. However, especially in the era of immunotherapies, imaging does not allow pseudo-progression to be distinguished from pseudo-response with reasonable certainty. Recurrent biopsies are likewise not an option due to their invasiveness. With regard to circulating biomarkers, in addition to problems with sensitivity and specificity, there is also the problem of the blood-brain barrier (BBB) [87]. Although individual circulating tumor cells (CTCs) [88][89][90][91], and quite recently even clusters of CTCs [92], have been found in peripheral blood from patients with GBM, it remains unclear to what extent CTCs are able to pass through the blood-brain barrier. Theoretically, CTCs can enable profiling of the whole-tumor genome, but they may also reflect only a single cell type of the heterogeneous tumor composition, whereas exosomes can reflect the complex heterogeneity of the whole tumor, as well as its adaptations to therapy [93,94].
The function of the BBB is to protect the central nervous system from toxins and infectious pathogens. Hypoxia, however, can damage this barrier and cause increased permeability [95]; the genetic mechanism behind this is only partially understood. It is possible that exosomal VEGFR plays an active part in this process [96]. Studies by Zhao et al. [97] supported this assumption: in an in vitro model, they were able to show that under hypoxic conditions glioblastomas release more VEGF-A-carrying exosomes, which in turn increased the permeability of the BBB by suppressing claudin-5 and other tight junction proteins [97].
Banks et al. [98] examined ten different types of exosomes derived from very different cell lines (mouse, human, cancerous, non-cancerous) with respect to their ability to cross the blood-brain barrier. All tested exosomes crossed the BBB, although with high variance in rates and vesicle-mediated mechanisms [98].
Consequently, it is a complex process with variable dynamics and it can be assumed that permeability of the BBB in GBM varies both in an interindividual and in a stage-dependent manner. Due to their small size, exosomes can also overcome an intact BBB [99,100] and are thus suitable for early diagnosis and genotyping, irrespective of the stage.
Several authors have already succeeded in detecting exosomes of glioblastomas peripherally [96,[101][102][103]. Johan Skog et al. [96] succeeded as early as 2008 in detecting the EGFRvIII status of glioblastomas using microvesicles drawn from peripheral blood. Even at this early stage in the field of liquid biopsy, the authors concluded that longitudinal blood sampling offered a way to monitor tumor genetic dynamics [96]. Fortunately, they were not alone in their success; in the years that followed, other research groups also succeeded in detecting microvesicles such as GDEs in peripheral blood [71][72][73]. For instance, in 2018, Manda et al. presented exosomes as biomarkers for detecting EGFR-positive high-grade gliomas [103]. The exosomal protein amount alone already correlates with the aggressiveness of the glioma, which shows the potential of this measurement in diagnostics [104]. This has also been shown for other tumors: Ludwig et al. [105] observed that the protein quantities of extracellular vesicles (EVs) isolated from HNSCC patient plasma were significantly higher than those from healthy donors, and that an increase in total plasma EVs correlates with disease activity [105]. For a more precise tumor diagnosis, however, much more information is needed. Interestingly, 90% of all patients with GBM showed aberrant expression of at least one of the following markers on the exosomal level: EGFR, EGFRvIII, podoplanin, and IDH1 [106]. In the field of exosomal diagnostics, proteins are the most frequently investigated exosomal cargo to date [107]. Nevertheless, other components, like mRNAs and miRNAs, also have great potential to become part of a diagnostic panel in the future.
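To make the appeal of such a panel concrete, the following minimal sketch shows how per-marker detection rates combine under an "at least one marker aberrant" rule. The marker names follow the study cited above [106], but the individual sensitivities and the independence assumption are hypothetical illustrations, not values reported in that work.

```python
# Illustrative sketch only: combined detection rate of an "at least one
# marker aberrant" exosomal panel. Assumes statistically independent
# markers; the per-marker sensitivities are hypothetical placeholders,
# not values reported in reference [106].

def panel_sensitivity(sensitivities):
    """Return P(at least one marker positive) = 1 - prod(1 - s_i)."""
    p_all_negative = 1.0
    for s in sensitivities:
        p_all_negative *= (1.0 - s)
    return 1.0 - p_all_negative

markers = {"EGFR": 0.45, "EGFRvIII": 0.30, "podoplanin": 0.40, "IDH1": 0.25}
combined = panel_sensitivity(markers.values())
print(f"Combined panel sensitivity: {combined:.2f}")  # ~0.83 with these values
# Even with modest single-marker sensitivities, the "any marker" rule
# pushes the panel well above each individual marker -- the qualitative
# point behind combining several exosomal biomarkers.
```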
There are also promising data in the field of exosomal microRNAs (miRNAs) with regard to the diagnosis and characterization of glioblastomas. MiRNAs play an important role as cellular regulators in a variety of physiological and pathological conditions. According to recent findings, miRNAs can be exchanged between cells via exosomes, and their detection allows many conclusions to be drawn about the parental cell. In a much-noted pilot study by Ebrahimkhani et al. [108] in 2018, exosomal microRNAs were isolated from the serum of glioblastoma patients and analyzed by unbiased deep sequencing. Exosomal RNA was superior to free RNA in terms of diagnostic predictability [108]. Similarly, a study by Daasi et al. demonstrated that the exosomal PD-L1 status, but not the soluble form of PD-L1, was of prognostic value for disease progression in HNSCC [109]. Furthermore, exosomes of hypoxic glioblastoma cells were found to contain miR-301a, a driver of resistance to radiotherapy. Tumor cells in normoxic conditions take up these exosomes and acquire radiotherapy resistance through modulation of the Wnt/ß-catenin signaling pathway, analogous to plasmids in bacteria that transfer antibiotic resistance.
More promising data came from Manterola et al. [110]. They found that the expression levels of one small noncoding RNA (RNU6-1) and two microRNAs (miR-320 and miR-574-3p) were significantly associated with GBM diagnosis. Here, RNU6-1 was consistently an independent predictor of GBM diagnosis [110]. A similarly promising indicator of glioma diagnosis and prognosis is exosomal miR-21, since its levels were shown to correlate notably with tumor recurrence or metastasis [111].
All of these findings add to the growing evidence for the high potential of small non-coding RNA signatures in microvesicles isolated from the serum of GBM patients. Exosomes are thus increasingly emerging as reliable differential diagnostic biomarkers in cancer, which could be a valuable addition to current diagnostics, which rely mainly on imaging. A long-term vision would be to apply these findings not only in diagnostics but also in a follow-up and therapeutic context [112].
However, until exosomes can be used analogously to, for example, PSA in prostate carcinoma, further investigations will be necessary, because their role in tumors seems to be complex and anything but one-dimensional. For example, their molecular cargo can exert both pro- and antitumor effects [113,114]. A panel combining different partial analyses of exosomal cargo (DNA, mRNA, miRNA, proteins) could probably increase the accuracy and thus become a diagnostic building block.
Before clinical use can be considered here, however, a simple, standardized, and reproducible isolation method is needed.
Analysis of Exosomes for Diagnostic Purposes: Standardization as the Crucial Foundation
The increasingly deep understanding of exosomes and their potential application value calls for reproducible isolation and enrichment. To date, most current isolation technologies cannot fully separate exosomes from lipoproteins with similar biophysical properties and from other non-endosomal EVs [115]. In addition to the purity issue, maintaining biological integrity is also critical. Thus, we are currently faced with the question of how best to enrich exosomes from biofluids and prepare them for downstream analyses.
The most commonly chosen isolation methods include ultracentrifugation and density gradient centrifugation, polymer precipitation, size-based isolation techniques, and immunoaffinity capture techniques.
Ultracentrifugation (UC) is currently considered the gold standard for exosome extraction and separation. This isolation method relies on the differences in size and density among the components of the sample. It is also suitable for separating large-volume samples whose components differ significantly in sedimentation coefficient [116]. In addition, the exosomes do not need to be labeled, which avoids cross-contamination [117]. Problematic in terms of reproducibility, however, are the centrifugation time, centrifugal force, and rotor type, as these parameters affect the yield and purity of the target exosomes [116,118].
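To illustrate why these parameters undermine reproducibility, the standard clearing-factor (k-factor) relation can be used to estimate how pelleting time scales with rotor geometry and speed. The rotor radii, speed, and the sedimentation coefficient assumed for exosomes in the sketch below are hypothetical example values, not protocol recommendations.

```python
import math

# Minimal sketch of the standard k-factor relation for pelleting runs:
#   k = 2.53e11 * ln(r_max / r_min) / rpm**2
#   t = k / s        (t in hours, s in Svedberg units)
# All numerical values below are hypothetical illustrations.

def k_factor(r_max_mm: float, r_min_mm: float, rpm: float) -> float:
    """Clearing factor of a rotor (lower k = faster pelleting)."""
    return 2.53e11 * math.log(r_max_mm / r_min_mm) / rpm**2

r_min, r_max = 45.0, 105.0   # hypothetical rotor radii in mm
s_exosome = 100.0            # assumed sedimentation coefficient (S)

for rpm in (40_000, 20_000):
    k = k_factor(r_max, r_min, rpm)
    print(f"{rpm} rpm: k = {k:.0f}, pelleting time ~ {k / s_exosome:.1f} h")
# Halving the speed quadruples the run time, and a different rotor
# geometry changes k as well -- one reason protocols that report only a
# relative centrifugal force are hard to reproduce across instruments.
```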
Density gradient centrifugation is usually used in combination with ultracentrifugation to improve exosome purity. A sucrose density gradient is typically employed, although it is poor at distinguishing exosomes from retroviruses due to their similar biophysical properties [119]. This method provides exosomes of high purity, but the high viscosity of the sucrose solution reduces the sedimentation speed and thus lengthens the run time.
Another approach is size-exclusion chromatography (SEC). Its separation principle is that macromolecules cannot enter the gel pores and are thus eluted with the mobile phase along the gaps between the porous gel particles, while small molecules are retained in the gel pores and elute later. Various exosome purification columns based on the SEC principle are now commercially available. SEC is fast, simple, and inexpensive. However, the exosome isolates may be contaminated with other particles of similar size, resulting in reduced purity [117].
Polymer precipitation is another method, originally used to isolate viruses. Polyethylene glycol (PEG) usually serves as the medium, and the exosomes are harvested by centrifugation after their solubility has been reduced. The method is also suitable for large sample volumes and has relatively short analysis times. Nevertheless, the purity and recovery rate are relatively low, which can lead to false positive results, and since the polymer is difficult to remove, subsequent functional analyses are hampered [120].
Immunoaffinity chromatography (IAC) is based on the specific binding of antibodies and ligands to separate desired substances from heterogeneous mixtures. In principle, the antigens used in this method should be proteins with high abundance on the surface of exosome membranes, such as ESCRT complex-related proteins or proteins of the four-transmembrane protein superfamily. In addition to general components on the surface of exosomes, specific structures typical of certain cell types can also serve as target proteins; the latter are used to isolate exosomes of a specific origin. IAC can be used for qualitative and quantitative determination of exosomes and has good specificity and sensitivity [121].
However, the storage conditions of exosomes obtained by immunoaffinity chromatography are relatively challenging, and the method is not suitable for large-scale separation of exosomes.
Given the strong increase in interest in this field, and the resulting variety of isolation techniques, we refer the reader to dedicated reviews on isolation techniques for a more detailed description of further methods [115,122].
For different purposes and applications, different methods have advantages and disadvantages. However, to compare samples from several patients interindividually and intraindividually over a longer period of time, a standardized method is required. Many common methods such as ultracentrifugation, density gradient centrifugation, or precipitation are not practical in the clinical routine for many reasons already mentioned (including high time consumption, need for larger quantities, contamination, loss of exosomes).
Thanks to the growing interest in this field, there is an increasing supply of commercial kits. After several years of experience with different techniques of exosome purification, our research group has succeeded in establishing a novel isolation method for obtaining exosomes from blood, reducing the laboratory workload to a few hours. Here, we work with galectin-based glycan particles, which are able to capture exosomes in plasma and thus make ultracentrifugation unnecessary. The quality of exosomal isolation is monitored by electron microscopy and nanoparticle tracking analysis (ZetaViewer®, Mebane, NC, USA). The typical size range of exosomes is 30-150 nm (Figure 2). A functional assay for T-cell activation was able to show that the exosomes isolated in this way remain biologically active.
Our results first showed that galectin-coupled magnetic beads, EXÖBead® (Biovesicle Inc., Taipei, Taiwan), can isolate EVs from plasma with low contamination of lipoprotein. In addition, we could also use a lactose-based elution buffer to successfully obtain the intact functional EVs after elution. A manuscript describing this method is under publication.
Exosomes in Therapy Monitoring: With Multiple Markers to Higher Sensitivity
Glioblastoma is a particularly aggressive tumor entity with a highly variable response to therapy, which to date makes predicting the therapy response very difficult. Since the advent of cross-sectional imaging, the standard diagnostics have been CT and, especially, MRI. The field of radiomics is making great efforts to provide more precise information and to better distinguish pseudo-progression from pseudo-response. However, it is likely that the use of AI, while increasing prognostic accuracy, will always leave a residual uncertainty. As anywhere in life, it helps to look at things from a different angle to get closer to the matter at hand. There is evidence, for example, that exosomes adjust their content (including miRNA, mRNA, and proteins) during the course of the disease [123,124]. This makes it possible to monitor changes during therapy.
Zeng et al. [125] investigated the effect of the exosomal miR-151a expression level on temozolomide (TMZ) resistance. Using quantitative PCR analysis in two TMZ-resistant GBM cell lines, it was determined that these cells secreted lower levels of miR-151a-containing exosomes. Conversely, overexpression of miR-151a sensitized chemoresistant GBM tumor cells to TMZ by suppressing the XRCC4 DNA repair pathway. Thus, obtaining miR-151a-containing exosomes not only has diagnostic significance in terms of liquid biopsy to aid in the choice of therapy but could also be a component of treatment for TMZ-resistant GBM [125]. The critical role of XRCC4 in brain tumors had already been described by other research groups [126]. Yet the exact function of XRCC4 in oncogenicity and TMZ resistance in GBM remains to be elucidated in more detail before work can be done on clinical applications.
Radiotherapy is one of the established components of glioblastoma therapy, along with surgery and chemotherapy. Li et al. investigated whether the measurement of miRNAs can monitor the efficacy of radiotherapy in GBM patients [93]. For this purpose, exosomal miRNA levels were sequenced before and after radiotherapy in a cohort study. All miRNAs that showed significantly altered expression were examined, and different databases were used to analyze the target genes of the corresponding miRNAs.
A large proportion of the target genes were involved in the p53 signaling pathway and various known tumor progression pathways, suggesting that these miRNAs may play an important role in glioma development and progression via their effects on target genes. Thus, at the exosomal level, it has been possible to partially map the response of the tumor and its environment to radiotherapy, providing another step towards biomarkers for therapy monitoring.
In addition, especially in the monitoring of new therapies, the analysis of exosomes and their components can play an important role in therapy control. In our own study of patients enrolled in a glioma vaccination trial, we showed that the exosomal levels in serum correlated positively with the corresponding tumor stage. This translated into a decreasing exosome load in response to therapy, while higher levels were measured with clinical tumor progression. Exosomal immune-related protein levels also correlated positively with different grades of glioma. Furthermore, the clinical response to the given tumor-related vaccine correlated with a change in serum exosomal immune-related gene expression, giving exosomes a promising role in therapy monitoring [49].
Exosomes in Therapy: Much Hope but Still a Long Way to Go
Fortunately, in recent years it has been found that exosomes, contrary to what has been assumed for decades, are not mere garbage cans of the cell. Since the first observations of these EVs in reticulocytes [127], there has been a tremendous increase in the understanding of exosomes as transmitters of cellular information. The fact that they can divulge much about their parental cells has led to intensive research into the diagnostic use of exosomes. Therapeutic use of exosomes is also anticipated to be very promising in the treatment of cancer, although research in this area is much less advanced.
Exosomes are a well-studied class of EVs known to transport proteins and nucleic acids and to protect their contents from proteases and RNases through their lipid membrane. They are histocompatible, not recognized by the complement system, do not trigger adverse immune responses due to their self-derived origin, and their nanoscale size reduces their clearance by the mononuclear phagocyte system [128]. Biological therapeutics, including short interfering RNA and recombinant proteins, are susceptible to degradation, have limited ability to cross biological membranes, and can trigger adverse immune responses. For this reason, delivery systems for such drugs are under intense investigation. Exosomes possess a number of essential characteristics that make them extremely valuable as drug delivery vectors: their advantageous structure allows them to deliver different types of molecules and to target particular cells, due to transport specificity [129].
Especially in the context of brain tumors, the ability of exosomes to cross the blood-brain barrier plays a central role [99,100]. Using a transgenic zebrafish as a brain cancer model, Yang et al. [130] showed that exosomes also delivered anticancer drugs across the BBB and into the brain.
However, the complexity of exosomes also comes with certain risks, especially the potential for off-target effects, and is a major challenge on the way to translation to the clinic. In the following, the potential therapeutic applications will be briefly elaborated.
Exosome-Based Gene and Drug Delivery
Various techniques are used to load exosomes with the desired cargo. A distinction is made between exogenous loading and endogenous loading. In the latter case, the modifications take place during the formation of the exosomes. In the former, the exosomes are isolated and then modified by freeze-thaw cycles, incubation, sonication, extrusion, or electroporation.
The successful use of exosomes for the targeted delivery of siRNA was already demonstrated by Alvarez-Erviti et al. [131] in 2011. The exosomes were harvested, purified, loaded by electroporation with siRNA against an important protein in Alzheimer pathogenesis (BACE1), and systemically injected. This resulted in a 55% decrease of the harmful β-amyloid 1-42 protein in the brain. This was the first demonstration of an exosome-based delivery system achieving efficient in vivo delivery of siRNA [131]. Other promising candidates for GBM therapy are miRNAs; here, the challenges include identifying the most effective miRNAs and the delivery method. Hamideh et al. used a rat model of glioblastoma to administer exosomes loaded with a miR-21-sponge, which led to a significant reduction in tumor volume [132]. Munoz et al. [133] found that anti-miR-9-releasing exosomes increased multidrug transporter expression and sensitivity to TMZ in drug-resistant GBM cells; the hoped-for consequence was increased caspase activity and cell death in the tumor cells [133].
Other ways to overcome chemotherapy resistance involve packaging the substance itself into exosomes. Exosomes loaded with paclitaxel led to an increase in cytotoxicity of almost 50-fold in multidrug-resistant neoplasms compared with paclitaxel without exosomes [134]. Doxorubicin has also been successfully packaged into exosomes and proposed as a potential module for tumor therapy [135].
However, as much as we know about exosomes, we are still far from fully grasping the complexity of this group of EVs, and we cannot predict with sufficient certainty the effect of modifications on their behavior. It is also possible that not all components of exosomes are required for their function, so an alternative strategy is to synthetically recreate these vesicles. Much work is already being done on this as well. As early as 2012, Kooijmans et al. [128] considered how functional exosome mimetics could be produced by assembling liposomes containing only the crucial components of natural exosomes. By using components that are already well characterized, the pharmaceutical acceptability of such systems can be greatly increased. However, it remains to be determined which exosomal components are suitable [128].
Exosomes as Therapeutic Agents
Natural killer (NK) cells are an important part of the first-line defense in controlling tumor growth and metastasis. Like all cells, NK cells emit biologically active EVs that reflect their protein and genetic repertoire and allow the current NK cell status in cancer patients to be assessed. In turn, these NK-derived exosomes could contribute to the improvement of cancer therapy by interacting with tumor and/or immune cells, although a better understanding of the exact interactions is still needed [136]. Other approaches shed light on exosomes from umbilical cord blood-derived human mesenchymal stem cells, which also have partial antitumor effects via regulation of miR-10a-5p/PTEN signaling [137]. Jia et al. [138] showed that modification of exosomes can simultaneously produce good results for targeted imaging and therapy. To this end, they loaded exosomes with curcumin and superparamagnetic iron oxide nanoparticles in a first step and conjugated the exosome membrane with a neuropilin-1-targeted peptide (RGERPPR, RGE) by click chemistry in a second step. The engineered exosomes easily passed the BBB and were found to be suitable for simultaneous diagnosis and therapy in the orthotopic glioma model and in glioma cells [138]. However, these approaches are still in their infancy, and time will tell what will finally prevail.
Exosome-Based Immunotherapy and Exosome Blocking
Exosomes have great potential in the field of cancer immunotherapy, with the potential to become highly effective cancer vaccines, as well as targeted antigen/drug carriers. Due to their ability to induce tumor-specific immunity, they are being discussed as potential cancer vaccines, with studies in animals and in the clinic [85]. Because of the dual properties of exosomes (they can both inhibit and promote cancer development), a particularly good understanding of the underlying mechanisms is needed. At this point, the reader is referred to the expanding literature on this topic (for example, Xu et al., 2020 [139], Shi et al., 2020 [140], and Sinha et al., 2021 [141]).
Since TEX tend toward protumorigenic activity despite their dual properties, a finding by the Massachusetts colleagues Atai, Balaj, Skog, Breakefield, and Maguire is also worth noting. They showed that incubation of glioma-cell-derived exosomes with heparin reduced the exchange between donating and receiving cells [142,143]. By all appearances, the blockade occurred at the cell surface. However, we are not aware of any publications on therapeutic approaches in this field.
Conclusions
In the early days of cancer research, the focus was mainly on the malignant cells themselves, but in the last decades the spectrum has broadened significantly, and much attention is paid to the tumor microenvironment with its immune cells and intercellular communication. EVs, especially exosomes, are part of a novel communication system, as biologically active, selectively produced particles [144]. Moreover, exosomes carry much of the information of their parental cells and are detectable in almost all body fluids. This makes them ideal candidates for liquid biopsies in cancer diagnosis, and therapy monitoring [145], which has led to exosome research becoming a new area of interest for researchers worldwide.
Recent studies have highlighted the role of exosomes in tumor-induced immunosuppression, and analysis of these pathways helps in a deeper understanding of how glioblastomas and other tumors interact with different classes of immune cells [9,27]. Particularly with respect to B cells, very little is known, and certainly more research is needed to better understand their role in the TME. In addition, the use of exosomes as diagnostic and prognostic markers in brain tumors has evolved significantly in recent years [140]. Compared to other liquid biopsy analytes, such as ctDNA or CTCs, exosomes are more abundant in blood and show increased stability; both are criteria that may be relevant in a busy clinical environment [146].
Nevertheless, there are still many challenges in the clinical application of exosomes. Their isolation and purification are not standardized, yet this is exactly what is needed when seeking to use exosomes on a large scale in clinical trials. In addition, more reliable biomarkers need to be confirmed; of the single biomarkers analyzed to date, only a few have proven suitable for application. In all likelihood, the future lies in a diagnostic panel that combines multiple biomarkers.
There have also been promising results in therapeutic applications of exosomes in vitro and in animal studies, but several challenges still need to be overcome, such as ensuring biosafety and targeting efficacy, as well as avoiding adverse effects, before exosomes can be successfully implemented in cancer therapy.
However, we are confident that with advances in technology, we will see the clinical use of exosomes in the diagnosis, treatment, and prognosis of gliomas and other tumors.
Funding: This work was supported by the "pro patient" grant (pp18-08).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data available in a publicly accessible repository.
Conflicts of Interest: Dapi Chiang is the founder of Biovesicle Inc. All experiments were conducted without financial contribution; only scientific advice was obtained. L.M. is a scientific advisor of Biovesicle Inc. and did not receive profit. | 2021-04-04T06:16:29.443Z | 2021-03-30T00:00:00.000 | {
"year": 2021,
"sha1": "36473a1873cc96cdbc725d0d96e6c30f56e1367b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/7/3600/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a93d0f32b69f1b81d35d78ee515bca6384c095c9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
114965679 | pes2o/s2orc | v3-fos-license | Analysis of Preparation and Properties on Shape Memory Hydrogenated Epoxy Resin Used for Asphalt Mixtures
The objective of this investigation is to prepare the shape memory hydrogenated epoxy resin used for asphalt mixtures (SM-HEP-AM) and study its properties. The shape memory hydrogenated epoxy resin (SM-HEP) is prepared using hydrogenated bisphenol A epoxy resin (AL-3040), polypropylene glycol diglycidylether diacrylate (JH-230), and isophorone diamine (IPDA). The formulations of the SM-HEP-AM are obtained by the linear fitting method. The thermo-mechanical properties, molecular structure, and shape-memory performance of the SM-HEP-AM are studied. The glass-transition temperature (Tg) is determined using a differential scanning calorimeter (DSC); the results proved that the Tg level increased as the JH-230 content decreased. The thermo-mechanical properties of the SM-HEP-AM are measured by dynamic mechanical analysis (DMA); the storage modulus of the SM-HEP-AM decreased with increasing JH-230 content. The above phenomena are attributed to the change in the JH-230 content. The shape memory performance results of the SM-HEP-AM indicate that specimen deformation can completely recover after only several minutes at Tg + 10 °C and Tg + 20 °C. The shape recovery time of the SM-HEP-AM increases with increased JH-230 content, and the dependence of the shape recovery time on the JH-230 content gradually weakened as the temperature increased. The deformation recovery performance of the asphalt mixture with and without the SM-HEP-AM (Tg = 40 °C) was tested by the deformation recovery test, which proved that the SM-HEP-AM helps to improve the deformation recovery performance of the asphalt mixture.

The absorption peak on the spectrum of the raw material AL-3040 was attributed to the presence of epoxy groups. The disappearance of this absorption peak on the spectra of the SM-HEP-AM was due to the complete reaction of the epoxy groups in AL-3040 with the active hydrogen in IPDA; it also indicated that the SM-HEP-AM had been completely cured. The band at 1092 cm−1 on the spectra of the JH-230 and SM-HEP-AM corresponded to the C–O stretching vibration band of the raw material (JH-230). The bands located around 1730 cm−1 in the JH-230 and SM-HEP-AM were assigned to the C=O stretching vibration band present in the raw material (JH-230). They also reflected differences among the spectra, depending on the JH-230 content: these bands, with a very low intensity in specimen JH-230-0.0353, appear very visible on the spectra of specimen JH-230-0.0685 and specimen JH-230-0.0796, indicating that the intensity of the bands at 1092 and 1730 cm−1 increased with increased JH-230 content. This can be explained by the fact that there exists a large amount of C–O and C=O stretching vibration bands in JH-230, so that with increased JH-230 content, the C–O and C=O stretching vibration bands of the SM-HEP-AM also increased. The peaks at 2859 and 2932 cm−1 were associated with the –CH3 symmetric stretching peak and the saturated C–H stretching vibration peak.
Introduction
Rutting caused by heavy loads at high temperatures is one of the typical distresses of asphalt pavements and has an important influence on pavement performance over the service life [1]. When vehicle loads are applied to the asphalt pavement surface at high temperatures, the asphalt mixture deforms. Because asphalt mixture is a self-healing material, this deformation can partially recover once the load is removed. Since the primary self-healing mechanism of asphalt mixture is the capillary flow of the bitumen through the cracks at high temperatures, the self-healing process is very slow, and the cracks may need a long time to heal completely [2]. Furthermore, in practice, asphalt pavements undergo continual vehicle loads, so the cracks cannot completely self-heal [3]. Meanwhile, there
Materials
The epoxy resin in this study is hydrogenated bisphenol A epoxy resin (AL-3040) with an epoxy value of 0.43 eq/100 g. The flexibilizer is polypropylene glycol diglycidyl ether (JH-230) with an average molecular weight of 2500 g/mol, a viscosity of 425 mPa·s at 25 °C, and a density of 0.925 g/cm3. The epoxy resin and flexibilizer were both manufactured by Yantai Aolifu Chemical Industry Co., Ltd. (Yantai, China). The isophorone diamine (IPDA) with an average molecular weight of 170.3 g/mol was used as the curing agent; it was purchased from Hubei Giant Technology Co., Ltd. (Hubei, China). The chemical structures of these materials are shown in Figure 1.
Preparation of the Shape Memory Hydrogenated Epoxy Resin (SM-HEP)
The SM-HEP consists of the hydrogenated bisphenol A epoxy resin (AL-3040), polypropylene glycol diglycidyl ether (JH-230), and the curing agent, isophorone diamine (IPDA), in certain proportions; these were mixed at 60 °C and placed into a beaker to blend. Then, the prepolymer solution was degassed at 60 °C in a vacuum oven to obtain a bubble-free prepolymer. After degassing, the prepolymer solution was placed into a polytetrafluoroethylene mold. A thermal curing program was performed at 120 °C for four hours. During the thermal curing program, the flexible groups of the JH-230 were introduced into the network structure of the AL-3040, and the active hydrogen of IPDA completely reacted with the epoxy groups of the AL-3040. After the curing process, the SM-HEP specimens were demolded for the tensile-recovery shape memory test and cut into rectangular shapes for dynamic mechanical analysis (DMA). The glass-transition temperature (Tg) of the SM-HEP can be actively controlled by adjusting the epoxy resin/flexibilizer stoichiometric ratio [20].
To prepare the SM-HEP with an appropriate level of Tg for the asphalt mixture (SM-HEP-AM), SM-HEP with epoxy resin/flexibilizer stoichiometric ratios between 0.90:0.10 and 0.98:0.02 was first synthesized. Then, a differential scanning calorimeter (DSC) (NETZSCH Instruments, Bremen, Germany) was used to measure the SM-HEP Tg, and the relationship between the Tg and the JH-230 content was fitted by the linear fitting method. According to the fitted equation, an appropriate JH-230 content for the required Tg could be back-calculated, as sketched below. Finally, the formulations of the SM-HEP-AM were obtained by this repeated fitting-back-calculation process.
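The fitting-back-calculation loop described above is simple to reproduce numerically; the sketch below shows one pass of it, where the (JH-230 content, Tg) pairs are invented placeholder data rather than the DSC measurements of this study.

```python
import numpy as np

# One pass of the fit/back-calculation loop described above.
# The (JH-230 content, Tg) pairs are invented placeholders, NOT the
# DSC measurements reported in this work.
jh230_content = np.array([0.02, 0.04, 0.06, 0.08, 0.10])  # flexibilizer fraction
tg_measured   = np.array([62.0, 54.0, 47.0, 40.0, 33.0])  # Tg in deg C

# Linear fit: Tg = a * content + b
a, b = np.polyfit(jh230_content, tg_measured, deg=1)

# Back-calculate the JH-230 content expected to give a required Tg
target_tg = 40.0  # deg C, e.g., the SM-HEP-AM target used later in the paper
content_needed = (target_tg - b) / a
print(f"Fit: Tg = {a:.1f} * content + {b:.2f}")
print(f"JH-230 content for Tg = {target_tg} deg C: {content_needed:.4f}")
```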
Preparation of the Asphalt Mixture Mixed with the Shape Memory Hydrogenated Epoxy Resin (SM-HEP)
Firstly, the aggregate and mineral powders were heated in an oven at 170 °C for four hours. Secondly, the asphalt was heated until it had completely melted; asphalt at a mass ratio of 6% was then added to the heated aggregate, and the mixture was stirred at 180 °C for 90 s in an asphalt mixer. Thirdly, mineral powder at a mass ratio of 6% was added to the asphalt mixture, and the mixture was stirred at 180 °C for 90 s in the asphalt mixer. Then, the SM-HEP (specimen JH-230-0.0796) at a mass ratio of 1% was added to the asphalt mixture following the same method. Finally, the asphalt mixture rutting test (T0719-2011) [27] was used to prepare samples for the deformation recovery performance test.
Differential Scanning Calorimeter (DSC)
The glass-transition temperature (Tg) of the shape memory hydrogenated epoxy resin used for the asphalt mixtures was measured on a NETZSCH 200F3 instrument. The specimens were heated from −40 to 140 °C in a protective N₂ atmosphere at a heating rate of 10 °C/min, with a temperature holding time of 3 min. The specimens were milled into powder, and their weights ranged from 9 to 12 mg.
Fourier Transform Infrared Spectroscopy (FT-IR)
The molecular structures of the shape memory hydrogenated epoxy resin used for the asphalt mixtures were investigated with a Fourier transform infrared (FT-IR) spectrometer (TENSOR 37, BRUKER Instruments, Karlsruhe, Germany). The FT-IR spectra ranged from 3000 to 300 cm⁻¹. The KBr compression method was used, and the spectra were obtained after 16 scans. The device was driven by the OPUS software (7.5.18, BRUKER Instruments, Karlsruhe, Germany, 2014) for acquisition and data processing.
Dynamic Mechanical Analysis (DMA)
The thermo-mechanical properties of the shape memory hydrogenated epoxy resin used for the asphalt mixtures were investigated using a DMA Q800 system (TA Instruments, New Castle, DE, USA) in multi-frequency-strain mode from 30 to 120 °C. The specimen dimensions were 35 × 10 × 4 mm³. The specimens were heated at a rate of 3 °C/min, and an oscillating strain of 0.1% was applied at a constant frequency of 1 Hz. The amplitude was 20 µm.
Tensile-Recovery Shape Memory Test
Dumbbell-shaped specimens were used to evaluate the shape-memory performance of the shape memory hydrogenated epoxy resin used for the asphalt mixtures; specimens with five different formulations were measured under each condition. The tensile-recovery shape memory test was performed according to the following steps: (i) The original length of the specimen was recorded as L0, and the dumbbell-shaped specimen was set up in the clamp of the purpose-built tensile instrument. A constant-temperature system of circulating hot water was heated to the test temperature and fed into the tensile instrument holding the specimen, which was then held for 5 min for full heating. The test temperatures were set to Tg − 10, Tg, Tg + 10, and Tg + 20 °C. (ii) The specimen was stretched at a rate of 5 cm/min to a certain length, recorded as L1. The stretched specimen was then quickly cooled under a constant external force so that the tensile deformation was frozen; it was dipped into cold water (−10 °C) in a refrigerator for 10 min, after which its length was recorded as L2. (iii) To quantify the shape-memory performance, the stretched specimen was placed into an electric-heated thermostatic water bath set to the temperature at which it had been deformed, and the shape recovery process was observed. The recovery time was recorded as the time at which the specimen stopped changing. Five parallel specimens of each SM-HEP-AM were measured at the designated temperature, and the average recovery time was taken as the shape recovery time for that temperature. The final length was recorded as L3. The shape fixity ratio (Rf) and shape recovery ratio (Rr) were calculated using Equations (1) and (2).
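In terms of the recorded lengths (L0 original, L1 stretched, L2 fixed, L3 recovered), the shape fixity and shape recovery ratios are conventionally defined as follows; Equations (1) and (2) presumably take this standard form:

Rf = (L2 − L0)/(L1 − L0) × 100% (1)

Rr = (L1 − L3)/(L1 − L0) × 100% (2)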
Deformation Recovery Test
Rectangular samples were used to evaluate the deformation recovery performance of the plain asphalt mixture and the asphalt mixture mixed with the SM-HEP (specimen JH-230 0.0796). The deformation recovery test was performed according to the following steps: (i) The intermediate loading position of the rectangular sample was marked; the sample was conditioned at 10 °C in an environmental cabinet and held there for 30 min. (ii) The failure load of the sample was obtained from a three-point bending test on a universal testing machine at a compression rate of 5 cm/min. (iii) The sample was then loaded in the bending creep mode of the universal testing machine at 20% of the failure load at 10 °C, with a holding time of 6 min; the testing equipment is shown in Figure 2. (iv) The sample was removed immediately after loading and kept at room temperature for 3 min, and the deformation at the loading position was recorded as D1. (v) The sample was put into a water bath set at 60 °C for 60 s and 120 s before being removed, and the deformation at the loading position was recorded as D2. (vi) To quantify the deformation recovery, the deformation recovery ratio (R) was calculated using Equation (3).
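In terms of the recorded deformations, D1 after loading and D2 after the hot-water recovery step, the deformation recovery ratio is conventionally defined as follows; Equation (3) presumably takes this standard form:

R = (D1 − D2)/D1 × 100% (3)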
Figure 2. The testing equipment of the deformation recovery test.
Controllable Preparation Result of the Shape Memory Hydrogenated Epoxy Resin (SM-HEP)
The SM-HEP specimens were named JH-230 N, where "N" denotes the molar content of JH-230 in the formulation. The formulations of specimens JH-230 0.02 to JH-230 0.10 are shown in Table 1. To study the relationship between the JH-230 content and Tg, the Tg values of the specimens listed in Table 1 were measured by differential scanning calorimetry (DSC). As can be seen in Figure 3, the SM-HEP Tg decreased linearly as the JH-230 content increased, which means that the Tg can be tuned by adjusting the JH-230 content. The Tg is a key characteristic parameter of the thermo-mechanical behaviour and shape recovery performance of SMPs, and the tunability of the SM-HEP Tg is a very useful feature that can expand the applications of the material to meet different demands.

The Tg values obtained from the DSC curves are also marked in Figure 3. All SM-HEP specimens show Tg values ranging from 33.5 to 71.4 °C, and the Tg values gradually decreased with increasing JH-230 content. To study the relationship between the JH-230 content and Tg further, the relationship was fitted linearly. Figure 4 shows the linear fit results, and the preliminary linear fit equation is presented in Equation (4).

In Equation (4), Y is the Tg of the SM-HEP and X is the JH-230 content. As can be seen in Figure 4, the Tg of the SM-HEP decreased linearly with increasing JH-230 content. The correlation coefficient R² is 0.9405, which indicates a high degree of linear fit. The formulations of the SM-HEP with different Tg values can therefore be back-calculated using Equation (4).

The SM-HEP studied in this paper was mainly intended to reduce the accumulation of permanent deformation in asphalt pavement at high temperatures and thereby improve the rutting resistance of the pavement. Because the SM-HEP can correct its temporary deformed shape and restore its original shape upon an external stimulus, the deformation temperature may be below the Tg of the SM-HEP [14]. When the temperature of the asphalt mixture ranges from 30 to 70 °C, the cracks will start to self-heal [28]. The SM-HEP Tg values used for the asphalt mixture were therefore set to 40, 45, 50, 55, and 60 °C to investigate the improvement effect of the SM-HEP on the rutting resistance of the asphalt pavement. The back-calculated formulations of the SM-HEP with Tg values of 40, 45, 50, 55, and 60 °C are shown in Table 2, and the corresponding measured Tg values tested by DSC are displayed in Figure 5.

According to Table 2 and Figure 5, the preliminarily fitted Tg and the measured Tg were inconsistent, especially for specimen JH-230 0.0274, for which the difference was 4.5 °C. To ensure the accuracy of the controllable preparation result, the SM-HEP data in Tables 1 and 2 were both used to fit the relationship between the JH-230 content and Tg linearly. The second linear fitting results are shown in Figure 6. As shown in Figure 6, the R² of the second linear fit is 0.9462, and the second linear fitting equation of Tg versus the JH-230 content is given in Equation (5).
With the same method, the corresponding back-calculated formulations for Tg values of 40, 45, 50, 55, and 60 °C are shown in Table 3, together with the measured Tg values tested by DSC.

Table 3. Specimens with the second back-calculated formulations and their Tg values.

In Table 3, the linearly fitted Tg and the measured Tg are almost the same. A 45-degree contour map was then used to verify that the controllable preparation result was accurate. As can be seen in Figure 7, there is a high consistency between the fitted Tg obtained from the second linear fitting equation and the measured Tg determined by DSC. It was concluded that the second linear fit displays good fitting precision and reproducibility. The Tg is not a fixed value but varies within a certain range; therefore, the preparation results obtained by the linear fit method are reasonable. In conclusion, the formulations of the SM-HEP-AM are given in Table 3.
The Thermal Property of the Shape Memory Hydrogenated Epoxy Resin Used for Asphalt Mixtures (SM-HEP-AM)
The DSC thermographs of the SM-HEP-AM are shown in Figure 8, with the Tg values obtained from the DSC analysis also marked. The results demonstrate that the JH-230 content in the SM-HEP-AM had a significant effect on the Tg: the Tg decreased with increasing JH-230 content. The Tg is the transition temperature between the frozen and free-motion states of the segments in a polymer network. In the SM-HEP-AM network, the increased JH-230 content decreased the cross-link density and resulted in an increased mobility of the segments. Furthermore, the JH-230 molecular weight is 2500 g/mol and its flexibility is high. Therefore, the chain flexibility of the SM-HEP-AM network improved as the JH-230 content increased, leading to a decrease in Tg [29].
The Molecular Structure of the Shape Memory Hydrogenated Epoxy Resin Used for Asphalt Mixtures (SM-HEP-AM)
The chemical structures of the IPDA, JH-230, AL-3040, and SM-HEP-AM were confirmed through the FT-IR spectra in Figure 9. The absorption peak at 909 cm⁻¹ in the spectrum of the AL-3040 is attributed to the presence of epoxy groups. The disappearance of this peak in the spectra of the SM-HEP-AM reflects the complete reaction between the epoxy groups of the AL-3040 and the active hydrogen of the IPDA, indicating that the SM-HEP-AM had been completely cured. The band at 1092 cm⁻¹ in the spectra of the JH-230 and SM-HEP-AM corresponds to the C-O stretching vibration of the raw material (JH-230), and the bands located around 1730 cm⁻¹ in the JH-230 and SM-HEP-AM are assigned to the C=O stretching vibration of the same raw material. These bands also reflect differences among the spectra depending on the JH-230 content: they have a very low intensity in specimen JH-230 0.0353 but appear clearly in the spectra of specimens JH-230 0.0685 and JH-230 0.0796, indicating that the intensity of the bands at 1092 and 1730 cm⁻¹ increased with increasing JH-230 content. This can be explained by the large number of C-O and C=O groups in JH-230; as the JH-230 content increased, the corresponding stretching vibration bands of the SM-HEP-AM also strengthened. The peaks at 2859 and 2932 cm⁻¹ are associated with the -CH₃ symmetric stretching and the saturated C-H stretching vibrations.
The Thermo-Mechanical Property of the Shape Memory Hydrogenated Epoxy Resin Used for Asphalt Mixtures (SM-HEP-AM)
The storage modulus (E′) values of the SM-HEP-AM are shown in Figure 10a, with the changes in E′ below and above the Tg also marked. Generally speaking, a good SMP should show a drop in E′ of more than two to three orders of magnitude between below and above the Tg. Figure 10a shows that the E′ of all the SM-HEP-AM specimens decreased by almost three orders of magnitude. The glass/rubber modulus ratio is defined as the elastic ratio, and higher elastic ratios are beneficial to the shape retention of the SM-HEP-AM: an SM-HEP-AM with a larger glass/rubber modulus ratio below and above Tg favours a higher shape fixity ratio [30]. Figure 10a also reveals that the glass modulus increased with decreasing JH-230 content. The molecular weight of the JH-230 is large (2500 g/mol), and its chains contain many C-O single bonds about which the molecular chain can rotate; the chain is therefore very flexible, which decreased the intermolecular force and increased the mobility of the chain segments, leading to the decrease in the glass modulus of the SM-HEP-AM. The Tg can be determined in several ways according to different standards; in this study, the peak of the loss modulus versus temperature curve was taken as the Tg.
The Tg values obtained from the loss modulus versus temperature curves are shown in Figure 10b. According to this definition, the dynamic mechanical analysis (DMA) Tg gradually increased with decreasing JH-230 content. For a clearer illustration, the Tg values obtained from DSC and DMA are summarized in Figure 11. The Tg values obtained from DSC were close to, though not identical with, those obtained from DMA; the difference can be explained by the frequency effect in the DMA test. It can be concluded that the formulations of the SM-HEP-AM obtained by the preparation method are reasonable.
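Both DMA-derived quantities used here, the glass/rubber (elastic) ratio and the loss-modulus-peak Tg, can be extracted from a raw DMA trace with a few lines of code. The sketch below is illustrative only: the synthetic storage and loss modulus curves are hypothetical stand-ins for the Q800 export.

import numpy as np

# Hypothetical DMA trace: temperature (°C), storage modulus E' (MPa), and
# loss modulus E'' (MPa). Real data would come from the DMA Q800 export.
T = np.linspace(30, 120, 181)
E_storage = 2000.0 / (1.0 + np.exp((T - 55.0) / 4.0)) + 2.0   # ~3-order drop
E_loss = 200.0 * np.exp(-((T - 52.0) / 6.0) ** 2)             # peak near Tg

# Elastic ratio: glassy modulus (well below Tg) over rubbery modulus
# (well above Tg); a good SMP shows a drop of 2-3 orders of magnitude.
elastic_ratio = E_storage[0] / E_storage[-1]

# DMA Tg defined, as in the paper, by the peak of the loss modulus curve.
tg_dma = T[np.argmax(E_loss)]

print(f"elastic ratio E'_glassy / E'_rubbery ≈ {elastic_ratio:.0f}")
print(f"DMA Tg (loss-modulus peak) ≈ {tg_dma:.1f} °C")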
The Shape-Memory Performance of the Shape Memory Hydrogenated Epoxy Resin Used for Asphalt Mixtures (SM-HEP-AM)
To investigate the shape-memory performance of the SM-HEP-AM, the specimens were tested at Tg − 10, Tg, Tg + 10, and Tg + 20 °C using the tensile-recovery shape memory test, with the Tg values obtained from DSC. According to the results, the shape fixity ratio of the SM-HEP-AM reached 98.5%. Full recovery was observed after only several minutes at Tg + 10 and Tg + 20 °C, and the shape recovery ratio of the SM-HEP-AM reached 92%, showing that the SM-HEP-AM displays a good shape-memory performance. Figure 12 shows the relationship between the shape recovery ratio and the recovery time of the JH-230 0.0796, JH-230 0.0685, JH-230 0.0574, JH-230 0.0463, and JH-230 0.0353 specimens at different temperatures. As can be seen in Figure 12, the same specimen required less time to complete the shape recovery process at higher temperatures. The free volume of the SM-HEP-AM increased with increasing temperature, so the frozen segments and the frozen stress were gradually released; therefore, the shape recovery time decreased as the temperature increased. The results also show a relatively low recovery rate at the start and at the terminal stage. At the start, the molecular chain segments of the specimens were still frozen and moved only slowly under the excitation temperature.
The release of the internal stress was accompanied by relatively strong friction between the segments, which decreased the recovery rate. As time went on, the molecular segments constantly adjusted and the friction between them decreased; therefore, during the middle stage, the shape recovery rate was relatively high. At the terminal stage, the slope of the curve flattened, most likely because the stored strain energy had already been largely released during the middle stage, so the recovery rate slowed. The same trend in the shape-recovery rate has been noted for other shape memory epoxies and shape-memory composites [31,32].
The relationships between the shape recovery ratio and the shape recovery time of the five specimens at Tg, Tg + 10, and Tg + 20 °C are shown in Figure 13. At the same temperature, the shape recovery time increased as the JH-230 content increased, and this trend gradually weakened as the temperature increased. The crosslink density decreased with increasing JH-230 content. In the tensile-recovery shape memory test, strain energy is stored in the form of internal stress in the temporary shape; at a higher temperature, the SM-HEP-AM recovers its pre-deformation shape by releasing this energy as a restoring force. From Figure 10a, the storage modulus decreased as the crosslink density decreased, which means that less strain energy was stored during deformation at lower crosslink densities. Thus, as the JH-230 content increased, the recovery force decreased and the shape recovery time was extended.
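The slow-fast-slow behaviour described above is the classic sigmoidal shape of SMP recovery curves, which suggests summarizing each curve by a logistic fit. The sketch below does this for hypothetical recovery-ratio measurements standing in for the Figure 12/13 data; the fitted half-recovery time gives a compact way to compare specimens and temperatures.

import numpy as np
from scipy.optimize import curve_fit

def logistic(t, r_max, k, t_half):
    """Sigmoidal recovery curve: slow start, fast middle stage, slow finish."""
    return r_max / (1.0 + np.exp(-k * (t - t_half)))

# Hypothetical recovery-ratio (%) measurements versus time (s) at Tg + 20 °C.
t = np.array([0, 30, 60, 90, 120, 150, 180, 240, 300], dtype=float)
r = np.array([2, 8, 25, 55, 78, 88, 91, 92, 92], dtype=float)

(popt, _) = curve_fit(logistic, t, r, p0=[90.0, 0.05, 90.0])
r_max, k, t_half = popt
print(f"plateau recovery ratio ≈ {r_max:.1f} %")
print(f"time to half recovery ≈ {t_half:.0f} s; rate constant k ≈ {k:.3f} 1/s")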
The Deformation Recovery Performance of the Asphalt Mixtures with and without the Shape Memory Hydrogenated Epoxy Resin Used for Asphalt Mixtures (SM-HEP-AM)
As can be seen from Figure 14, the asphalt mixture mixed with the SM-HEP-AM has a higher deformation recovery ratio than the matrix asphalt mixture, and the deformation recovery ratio increased with increasing recovery time. The asphalt mixture is a typical viscoelastic-plastic material, and obvious deformation occurs under high temperature and continuous load. The asphalt mixture is a self-healing material, so the deformation can partially recover; however, its self-healing process is very slow, and in practice, for an asphalt mixture subjected to repeated loading, the deformation cannot fully recover. In this paper, the SM-HEP-AM added to the asphalt was specimen JH-230 0.0796, with a corresponding Tg of 40 °C and a recovery temperature of 60 °C. Because the SM-HEP-AM is a thermoset SMP, when the asphalt mixture mixed with the SM-HEP-AM was molded at a certain high temperature, the SM-HEP-AM was shaped and then retained that shape at room temperature; it also deformed partially when the specimen was loaded. When the asphalt mixture mixed with the SM-HEP-AM was kept at 60 °C, the SM-HEP-AM was stimulated and produced a restoring force that drove the surrounding materials, finally improving the deformation recovery performance of the asphalt mixture.
It can be concluded that applying the SM-HEP-AM to an asphalt mixture, at a temperature higher than the Tg of the SM-HEP-AM used, may improve the deformation recovery performance of the asphalt mixture and slow down the accumulation of plastic deformation.
Conclusions
In this paper, a new type of shape-memory hydrogenated epoxy resin (SM-HEP) was prepared from hydrogenated bisphenol A epoxy resin (AL-3040), polypropylene glycol diglycidyl ether (JH-230), and isophorone diamine (IPDA). The preparation of the SM-HEP used for the asphalt mixture (SM-HEP-AM) showed that the fitted glass-transition temperature (Tg) and the measured Tg are almost the same; it was concluded that an SM-HEP specimen with the required Tg can be accurately obtained by this preparation method. Furthermore, with the IPDA content fixed at 0.5 mol, the Tg can be predicted for JH-230 contents ranging from 0.0796 to 0.0353 mol, covering required Tg values from 40 to 60 °C. The DSC results demonstrated that the SM-HEP-AM Tg decreased with increasing JH-230 content, which may be explained by the increase in JH-230 content decreasing the cross-link density and thus increasing segment mobility. The storage modulus of the SM-HEP-AM changed by almost three orders of magnitude below and above the Tg, which means that the SM-HEP-AM is a good SMP. Meanwhile, the glass modulus increased as the JH-230 content decreased, since the chain segment mobility was reduced. The relationship between the JH-230 content and the Tg obtained from the loss modulus versus temperature curves measured by DMA is consistent with the DSC results, and for SM-HEP-AM of the same formulation the Tg values measured by DSC and DMA were nearly identical. The SM-HEP-AM showed a good shape-memory performance, with full recovery observed after only several minutes at Tg + 10 and Tg + 20 °C. The specimens required less time to complete the shape recovery process at higher temperatures than at lower temperatures, and, on account of the influence of the crosslink density, the shape recovery time decreased with decreasing JH-230 content. The deformation recovery performance of the asphalt mixture mixed with the SM-HEP-AM was better than that of the matrix asphalt mixture, which may provide a method to slow down the accumulation of plastic deformation.
Author Contributions: Biao Ma built the overall framework. Xueyan Zhou designed the trial protocol, carried out the experiments, analyzed the experimental data, and studied the results. Kun Wei discussed the test results. Yanzhen Bo carried out the experiments. Zhanping You revised the whole paper, correcting grammar, spelling mistakes, and vague descriptions. All authors discussed and contributed to the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
A neutrino mass matrix model with a bilinear form $M_\nu = k_\nu (M_D M_R^{-1} M_D^T)^2$ is proposed within the framework of the so-called yukawaon model, which has been proposed for the purpose of a unified description of the lepton mixing matrix $U_{PMNS}$ and the quark mixing matrix $V_{CKM}$. The model has only two adjustable parameters for the PMNS mixing and neutrino mass atios. (Other parameters are fixed from the observed quark and charged lepton mass ratios and the CKM mixing.) The model gives reasonable values $\sin^2 2\theta_{12} \simeq 0.85$ and $\sin^2 2\theta_{23} \sim 1$ and $\sin^2 2\theta_{13} \sim 0.09$ together with $R_\nu \equiv \Delta m^2_{21}/\Delta m^2_{32} \sim 0.03$. Our prediction of the effective neutrino mass $$ in the neutrinoless double beta decay takes a sizable value $\simeq 0.0034$ eV.
Introduction
Many particle physicists have searched for models which provide a unified description of the mass spectra and mixing patterns of quarks and leptons, the Cabibbo-Kobayashi-Maskawa mixing matrix V CKM [1] and the Pontecorvo-Maki-Nakagawa-Sakata mixing matrix U P M N S [2]. As one of such models, the so-called "yukawaon" model [3,4,5,6] has been proposed. The model is a kind of "flavon" model [7].
In this model, Yukawa coupling constants Y f (f = u, d, e, · · · ) in the standard model are understood as vacuum expectation values (VEVs) of scalars ("yukawaon") with 3 × 3 components, i.e. by y f Y f /Λ, where Λ is an energy scale of the effective theory. Our policy in building the yukawaon model is as follows: (i) We consider that the hierarchical structures of the effective Yukawa coupling constants can be understood only based on the charged lepton masses. For the moment, we do not ask for the origin of the charged lepton mass spectrum. (For an attempt to understand the origin of the charged lepton mass spectrum, for example, see Ref. [8].) (ii) We assume a U(3) (or O(3)) family symmetry and R charge conservation. Structures of yukawaon VEVs Y f are obtained from SUSY vacuum conditions for a given superpotential, so that the VEV matrices are related to other yukawaon VEVs. (As stated in (i), the charged lepton mass values are inputs for the moment, we do not discuss a mechanism which gives the observed charged lepton masses.) The first task in the yukawaon model is to search a superpotential form which gives reasonable mass spectra and mixings (in other words, to search for fields with suitable representations of U(3) and R charges. (iii) Effect of SUSY breaking depends on a SUSY breaking scenario. For the moment, we do not consider the SUSY breaking effects for yukawaon sector. We assume that the SUSY breaking in the quark and lepton sectors is induced by gauge mediation (this "gauge" means the conventional SU(3) c ×SU(2) L ×U(1) Y symmetries). (iv) At present, our aim is to search for a mass matrix model which can give a reasonable fit to whole of quark and lepton mass ratios and V CKM and U P M N S mixing matrices with parameters as few as possible. At present, our concern is in the construction of phenomenological mass matrix relations, not of a field theoretical model, i.e. neither in economizing of the yukawaon fields nor in making the superpotential compact. It is our next step to search for a model with more economical fields and with concise structure of superpotential.
The yukawaon model is in the process of research and development at present. In the yukawaon model, there are, in principle, no family-number-dependent parameters except for the charged lepton mass matrix M e . Regrettably at present, we need a phase matrix P u (or P d ) with two phase parameters in order to obtain reasonable values of quark mixing matrix V CKM [5,6]. However, the final goal of our model is to remove such family dependent parameters.
The yukawaon model is constructed by using fundamental VEV matrices of scalar fields. In earlier yukawaon models [3], the mass matrices are directly related to a fundamental VEV matrix matrix Φ e ≡ diag( √ m e , √ m µ , √ m τ ), while in recent yukawaon models, even the charged lepton mass matrix M e is given by a more fundamental VEV matrix Φ 0 . Here, we define VEV matrices which are associated with the mass matrix for up, down quarks, and charged leptons by a common form Here, for convenience, we have dropped the notations " " and " " on the VEV matrices. We will assign Φ 0 to (3 * , 3) of U(3)×U(3) ′ in the next section, so that we will denote Φ 0 asΦ 0 . In the present section in which we discuss the VEV matrices, for simplicity, we do not distinguish between Φ 0 andΦ 0 ( and also between Y f andȲ f , and so on). X 3 and 1 are also VEV matrices of other scalar fields. The matrices Φ 0 , X 3 and 1 are defined by Here, we have assumed that there is a basis in which the VEV matrix Φ 0 takes a diagonal form and the VEV matrix X 3 takes a democratic form. Our mass matrix model is described on the premise that there can be such the flavor basis. The values of (x 1 , x 2 , x 3 ) with x 2 1 +x 2 2 +x 2 3 = 1 are fixed by the observed charged lepton mass values under the given value of a e . The form (1+a e X 3 ) is due to a family symmetry breaking U(3)→ S 3 [6] as we discuss later. The coefficients a f play an essential role in obtaining the mass ratios and mixings, while the family-number independent coefficients k f do not.
In this paper we propose a new model which improves the neutrino mass matrix. As far as mass matrices M e , M d and M u of the charged leptons and down-and up-quarks are concerned, we assume the same VEV structures as those in the previous yukawaon model [4,5,6]: ). Here and hereafter, we omit family-number independent coefficients (k f in Eq.(1.1) and so on), because we are interested only in family structures of 3 × 3 matrices. What is new in the present model is in the neutrino mass matrix M ν : we assume that M ν takes the following form (1.5) Here we take which will be discussed in Section 2.
Let us stress the difference in the form of the neutrino mass matrix between the present model and the previous one. In the previous yukawaon model [4,5,6], the neutrino mass matrix $M_\nu$ was given by the form of Eq. (1.9), in which the $\xi_\nu$-term was an additional term introduced in order to fit the neutrino mixing parameters $\sin^2\theta_{23}$ and $\sin^2\theta_{12}$. However, that model could not give a reasonable fit for $\sin^2\theta_{13}$. On the other hand, the mass matrix (1.4) with (1.7) in the new model has no such $\xi_\nu$-term. Nevertheless, we can fit all the observed mixing values $\sin^2\theta_{23}$, $\sin^2\theta_{12}$ and $\sin^2\theta_{13}$, together with the ratio of neutrino mass-squared differences $R_\nu = \Delta m^2_{21}/\Delta m^2_{32}$, by using (1.5), as stated in Section 3. (The big drawback of the previous yukawaon models was that they could not give the observed large value [10] of $\sin^2 2\theta_{13} \sim 0.09$.) In Sec. 2, we give the VEV matrix relations in the new model. In Sec. 3, we discuss the parameter fitting of the observed values only for the PMNS mixing and neutrino mass ratios, because we revised the model only in the neutrino sector. The parameter values in the down-quark sector are effectively unchanged, so that we obtain the same predictions for the down-quark mass ratios and CKM matrix parameters, without changing the successful results of the previous paper [9].
VEV matrix relations
We assume that a would-be Yukawa interaction is given by Eq. (2.1), where $\ell = (\nu_L, e_L)$ and $q = (u_L, d_L)$ are SU(2)$_L$ doublets. The assignments of these fields to the family symmetries U(3)$\times$U(3)$'$ are given in Table 1. We denote the yukawaons with $(6^*, 1)$ and $(6, 1)$ as $\bar Y$ and $Y$, respectively.

Table 1. Assignments of the fields to the family symmetries U(3)$\times$U(3)$'$.

Here and hereafter, we sometimes denote $\bar Y_\ell$ and $Y_q$ as $Y_f$ for simplicity. In order to distinguish each yukawaon from the others, we assume that the $Y_f$ have R charges different from each other, under the requirement of R charge conservation. (Of course, the R charge conservation is broken at the energy scale $\Lambda$.) We obtain the VEV matrix relations from the superpotential, which is invariant under the family symmetries U(3)$\times$U(3)$'$ and conserves R charge. In the yukawaon model, the VEV matrix relations are phenomenological ones, and they depend on the R charge assignments. The derivations of the VEV matrix relations are essentially similar to those in the previous papers [3,4,5,6,9], although the U(3)$\times$U(3)$'$ assignments and R charges are different. Besides, we must consider a complicated superpotential form in order to derive the desirable mass matrix relations. The purpose of the present paper is not to derive those mass matrix relations uniquely, but to investigate, from the phenomenological point of view, the possibility that the neutrino mass matrix $M_\nu$ is given by such a form. Therefore, in this section, we present only the results of the mass matrix relations, Eqs. (2.2)-(2.9), whose derivation is discussed in the Appendix. Here, the fields $\bar\Phi_0^{i\alpha}$ and $X^{\alpha i}$ are assigned to $(3^*, 3^*)$ and $(3, 3)$ of U(3)$\times$U(3)$'$, respectively. The field $X$ was introduced phenomenologically in the previous model [9]; its VEV has the form (2.10). The form (2.10) leads to the relations used below, together with $X X = X$, where $X_3$ and $X_2$ are defined by Eqs. (1.2) and (1.8), respectively.
Here, for simplicity, we have put $v_X = 1$, because we are interested only in the relative ratios among the family components. At present, there is no idea for the origin of the form (2.10). We may speculate that this form is related to a breaking pattern of U(3)$\times$U(3)$'$ (for example, to the discrete symmetries U(3)$\times$U(3)$' \to$ S$_2\times$S$_3$). In the present paper, the form (2.10) is only an ad hoc assumption. However, as seen later, it allows us to obtain a good fit for the neutrino mixing angle $\sin^2 2\theta_{13}$.
Parameter fitting
We again summarize our mass matrix model in Eqs. (3.1)-(3.8), where, for convenience, we have dropped the brackets $\langle$ and $\rangle$. In numerical calculations, we use the dimensionless expressions $\bar\Phi_0 = {\rm diag}(x_1, x_2, x_3)$ (with $x_1^2 + x_2^2 + x_3^2 = 1$) and $\bar P_d = {\rm diag}(e^{-i\phi_1}, e^{-i\phi_2}, 1)$. In Eqs. (3.7) and (3.4), we have denoted $a_u$ and $a_D$ as $a_u e^{i\alpha_u}$ and $a_D e^{i\alpha_D}$, respectively, since we assume that the parameters $a_e$ and $a_d$ are real, while $a_u$ and $a_D$ are complex, in our $M_D \leftrightarrow M_u$ and $M_e \leftrightarrow M_d$ correspondence scheme.
In this model, we have two parameters $(a_D, \alpha_D)$ for the neutrino sector, four parameters $a_d$, $\xi_{d0}$ and $(\phi_1, \phi_2)$ for the down-quark mass ratios and $V_{CKM}$, and three parameters $a_e$ and $(a_u, \alpha_u)$ for the charged lepton and up-quark mass ratios, as shown in Table 2. It is especially worth noticing that the neutrino mass ratios and $U_{PMNS}$ are functions of only two parameters once $a_e$ and $(a_u, \alpha_u)$ have been fixed from the observed CKM mixing and up-quark mass ratios. There is effectively no change in the mass matrix structures except for $Y_\nu$ compared with the previous paper [9], so we can use the same parameter values for $a_e$ and $(a_u, \alpha_u)$ as in the previous study [9], namely $a_e = 7.5$ and $(a_u, \alpha_u) = (-1.35, 7.6^\circ)$ [Eq. (3.9)]. Therefore, as far as the PMNS mixing and neutrino mass ratios are concerned, we have only two free parameters $(a_D, \alpha_D)$ in the present neutrino mass matrix model.

Table 2. Process for parameter fitting. Since the parameters listed in each step can slightly affect the predictions listed in the other steps, fine tuning is needed after the 5th step. The new parameter fitting in the present paper starts from the 5th step.
Since the parameters $(a_D, \alpha_D)$ are sensitive to the observables $\sin^2 2\theta_{12}^{\rm obs}$ and $R_\nu^{\rm obs}$, we use the observed values of $\sin^2 2\theta_{12}$ and $R_\nu$ in order to fix our parameter values $(a_D, \alpha_D)$. In Fig. 1, we illustrate the allowed parameter region of $(a_D, \alpha_D)$ obtained from the observed values of $\sin^2 2\theta_{12}^{\rm obs}$ and $R_\nu^{\rm obs}$. As seen in Fig. 1, the observed values uniquely fix the parameter values as $(a_D, \alpha_D) = (8.7, 12^\circ)$ [Eq. (3.12)]. It is worth noticing that these parameter values uniquely give the prediction $\sin^2 2\theta_{13} \simeq 0.09$. For reference, in Fig. 2, we illustrate the behaviors of $\sin^2 2\theta_{12}$ and $R_\nu$ versus $\alpha_D$ in the case of $a_D = 8.7$. We find that the choice $\alpha_D = 12^\circ$ gives excellent fits to the observed values of $\sin^2 2\theta_{12}$ and $R_\nu$ simultaneously. We then obtain our predictions for $\sin^2 2\theta_{23}$ and $\sin^2 2\theta_{13}$ using (3.12), where $\delta^\ell_{CP}$ is the CP violating phase in the standard expression and $J_\ell$ is the rephasing invariant [12]. We can also predict the neutrino masses, $m_{\nu 1} = 0.00061$ eV, $m_{\nu 2} = 0.00899$ eV, and $m_{\nu 3} = 0.05011$ eV [Eq. (3.16)], by using the input value [13] $\Delta m^2_{32} = 0.00243$ eV$^2$. (Note that, in the present model, we cannot obtain an inverted neutrino mass hierarchy, because the hierarchies of the mass matrices are tied to the charged lepton mass hierarchy, i.e., to the VEV matrix $\Phi_0$.) We also predict the effective Majorana neutrino mass [14] $\langle m \rangle$ in the neutrinoless double beta decay; this predicted value is considerably larger than those of other models with a normal hierarchy [15]. Finally, we list the predicted values of the CKM mixing parameters and down-quark mass ratios, although they are essentially the same as those in the previous model [9].
Concluding remarks
In conclusion, we have proposed a new neutrino mass matrix form within the framework of the yukawaon model, in which we have only two adjustable parameters, $(a_D, \alpha_D)$, for the PMNS mixing and neutrino mass ratios. We have been able to remove the unnatural term [the $\xi_\nu$ term in Eq. (1.9)] of the previous model. Nevertheless, we obtain reasonable results for the PMNS mixing and neutrino mass ratios, as shown in Eqs. (3.13)-(3.17), for the parameter values $(a_D, \alpha_D) = (8.7, 12^\circ)$. As seen in Fig. 2, it is worth noticing that only when we choose a reasonable value of $R_\nu \simeq 0.033$ do we obtain a reasonable value of $\sin^2 2\theta_{13} \simeq 0.09$. Also, note that our prediction gives a sizable value of $\langle m \rangle \simeq 0.0034$ eV among normal mass hierarchy models. Of course, we have also obtained reasonable results for the CKM mixing and quark mass ratios, the same as those in the previous paper [9].
This phenomenological success is essentially based on the following assumptions: (i) We have assumed that only $Y_D$ takes the mass matrix form with $X_2$ (not $X_3$), while the other $Y_f$ ($\Phi_f$) take the form with $X_3$, as given in Eq. (1.1). In Ref. [6], the form $X_3$ was understood in terms of a symmetry breakdown U(3)$\times$U(3)$' \to$ U(3)$\times$S$_3$; for the form $X_2$, however, the model is still at a phenomenological level. (ii) We have the bilinear form of the neutrino mass matrix, $M_\nu = \Phi_\nu \Phi_\nu$, as well as of the up-quark mass matrix, $M_u = \Phi_u \Phi_u$. From the theoretical point of view, there is no reason for the bilinear forms; we merely assigned R charges so that bilinear forms are realized for $M_u$ and $M_\nu$.
In spite of this phenomenological success, the model still leaves some basic problems: (i) The model is not economical. At present, we need many flavons in order to set up reasonable VEV matrix relations. Since the purpose of the present paper is to investigate phenomenological relations among the mass matrices, the structure of the superpotential given in the Appendix is a temporary one; the superpotential will be improved in our future work. (ii) We have not discussed the scales of the yukawaons. The present model is based on an effective theory with an energy scale $\Lambda$. The scale $\Lambda$ must be at least larger than $10^3$ TeV from the observed $K^0$-$\bar K^0$ mixing (and also $D^0$-$\bar D^0$ mixing) [11]. In an earlier version of the yukawaon model, it was considered to be $\Lambda \sim 10^{15}$ GeV. However, the VEVs of the individual yukawaons depend on the parameters in the superpotential ($\mu_f$ in the mass terms and the couplings $\lambda_f$). We do not fix those scales in the present paper, although we expect that the effects of those flavons are visible. (iii) We did not discuss SUSY breaking effects. As stated in Section 1, for the time being we assume that SUSY breaking effects do not affect the yukawaon sector. (iv) Our goal is to understand the hierarchical structures of all quark and lepton mass matrices on the basis of only the observed charged lepton masses. However, in the present model, we are still obliged to introduce the flavon $\bar P_d$, whose VEV matrix includes the flavor-dependent parameters $\phi_1$ and $\phi_2$, as seen in (A.11).
Generally speaking, the yukawaon model suggests that our direction toward a unified understanding of the flavor problems is not wrong, although many problems remain in the model. Leaving the settlement of these problems to future work, the yukawaon model will be improved step by step.
The VEV matrix relations (2.2)-(2.9) are obtained from the SUSY vacuum conditions $\partial W / \partial \Theta_A = 0$ ($A = e, \nu, \cdots$). Since we assume that all $\Theta$ fields take $\langle \Theta \rangle = 0$, the SUSY vacuum conditions with respect to the other fields do not lead to meaningful relations, because such conditions always contain at least one $\langle \Theta \rangle$. In Eqs. (A.5) and (A.6), we have introduced the fields $E''_u$, $E''_d$, $\bar{E}_u$ and $\bar{E}_d$, in addition to $E''$ and $\bar{E}$, in order to distinguish the R charges of $\Theta_u$ and $\Theta_d$ from that of $\Theta_e$. All VEV matrices $\langle E \rangle$ are given by the form $\langle E \rangle \propto \mathbf{1}$, as seen in (A.10). The VEV matrix relations (2.2)-(2.9) have already been presented with $\langle E \rangle$ replaced by $\mathbf{1}$.
We list the ${\rm SU}(2)_L \times {\rm SU}(3)_c \times {\rm U}(3) \times {\rm U}(3)'$ assignments and R charges for the additional fields in Table 3. The R charges are assigned so that the total R charge of each superpotential term is $R(W) = 2$. We have 17 constraints on the R charges of the fields from Eqs. (2.1) and (A.1)-(A.6), while we have 34 fields even apart from the $\Theta$ fields in Tables 1 and 3. Therefore, we cannot uniquely fix the R charge assignments of those fields. Here, let us give only typical constraints: from Eq. (A.7), we obtain $r'' + \bar{r}'' = r_E + \bar{r}_E$. When we take $R(E'') + R(\bar{E}'') = R(E) + R(\bar{E}) = R(P_d) + R(\bar{P}_d) = 1$, we can introduce a superpotential from which we obtain the relations $\langle E \rangle \langle \bar{E} \rangle \propto \mathbf{1}$ and $\langle P_d \rangle \langle \bar{P}_d \rangle \propto \mathbf{1}$. We assume specific solutions of those relations as the explicit forms of $\langle E \rangle$, $\langle \bar{E} \rangle$ and $\langle P_d \rangle$. We assume similar superpotential forms for $(E, \bar{E})$, $(E_u, \bar{E}_u)$, $(E_d, \bar{E}_d)$, $(E'', \bar{E}'')$, $(E''_u, \bar{E}''_u)$, $(E''_d, \bar{E}''_d)$ and $(E', \bar{E}')$. The term $\mu_d E_d$ in Eq. (A.6) has been introduced in order to adjust the down-quark mass ratio $m_d/m_s$, as seen in Sec. 3. Additional terms like $\mu_d E_d$ in the lepton and up-quark sectors do not appear, because we take $R(E) = R(E_d)$ and $R(E_u) = R(E_d)$. | 2013-04-08T08:34:39.000Z | 2013-01-18T00:00:00.000 | {
"year": 2013,
"sha1": "b2e08d40c8d79d1a6ec4e8e8fe91cd47f7544157",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1301.4312",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b2e08d40c8d79d1a6ec4e8e8fe91cd47f7544157",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
144835726 | pes2o/s2orc | v3-fos-license | Response to Prof Thilo Marauhn's Opening Address on ’Land Tenure and Good Governance from the Perspective of International Law’
Professor Marauhn's address, in which he considered land tenure from the perspective of international law, is published here as an oratio, followed by a response by Gerrit Ferreira of Potchefstroom. Together they clearly demonstrate the growing importance, but also the need for further development, of the notion of good governance in international environmental law, and show that land use is a key concern in this regard.
Introduction
I have not had the privilege of reading Prof Marauhn's contribution before my response this morning, but I will nevertheless focus my discussion on what I believe to be important prerequisites for good governance at the international level.
My contribution will therefore concentrate on what I would like to term good global governance; that is, good governance on the global or, if you so wish, the international level. Of course, the few remarks I wish to make will of necessity in certain respects also apply to the position in individual states.
Definition
As a point of departure I would like to refer to the definition of good governance as formulated by the United Nations Economic and Social Commission for Asia and the Pacific. That body describes good governance as follows: It is participatory, consensus orientated, accountable, transparent, responsive, effective and efficient, equitable and inclusive and follows the rule of law. It assures that corruption is minimized, the views of minorities are taken into account and that the voices of the most vulnerable in society are heard in decision-making. It is also responsive to the present and future needs of society.
It is implicit in this definition that good governance on the global or the international level rests on a number of underlying notions. Due to time constraints it will not be possible to discuss these notions extensively; a brief overview will have to suffice.
International legal personality
Good governance requires the inputs of a variety of role players including states, international and regional organisations, NGOs, multi-national corporations, minority groups and even individuals. Of these only states and international and regional organisations are recognised as subjects in public international law. In order to allow the others to play their full role in ensuring good governance, the question arises as to whether they should be granted some form of international legal personality. This would allow for proper control by the United Nations, for example, over the activities of these institutions on the international level.
An international rule of law
Good governance on the international level presupposes an international rule of law.
Although many arguments can be advanced against the recognition and effective application of an international rule of law as opposed to a domestic or national rule of law, it is nevertheless a fact that the restructuring of international society through globalisation increasingly requires that international relations be governed by the rule of law. One of the most difficult issues in this regard relates to reaching agreement on the core values underpinning an international rule of law, and to the creation of an international court with compulsory jurisdiction to enforce the values agreed upon. A particularly controversial area in which an international rule of law as part of good governance would have to operate is the promotion and protection of international human rights. In view especially of the prevailing cultural and religious differences from nation to nation, the debate on universalism and cultural relativism in the context of the application of international human rights will have to be revisited, as it is imperative that consensus be reached between states on at least a minimum number of human rights to be protected.
Democracy
It is generally accepted that good governance can flourish only in a democratic environment. On the international level the United Nations as an institution promotes democracy and good governance at every opportunity, but unfortunately the Security Council itself suffers from a serious democratic deficit. The United Nations will have to be restructured if it wishes to function as a legitimate forum of democratic global governance. The lack of democratic structures on the international level is further exacerbated by the customary international law-making process as opposed to the international treaty-making process. Norms of customary international law and ius cogens are created by state practice without any formal democratic participation of role players such as minority groups and the international community of states at large.
Sovereignty
Although the nature of sovereignty has undergone many changes over the past centuries, it can still be described as the cornerstone of the modern interstate system. Good global governance can be established only if states are prepared to accept far-reaching limitations on their sovereignty. The current debate on the legitimacy of humanitarian intervention by one state in the territory of another state is a case in point. At the same time the encroachment upon the sovereignty of states cannot be pushed too far, as it may lead to the eventual demise of the nation-state as we know it today. It is clear, however, that article 2(4) of the Charter of the United Nations can no longer be interpreted so narrowly that almost anything a state does within its own territory can be classified as falling within the domestic jurisdiction of such a state.
Requirements of statehood
As the binding nature of international law is based to a large extent on the effectiveness of the applicable norms, one of the requirements of statehood has always been an effective government. In view of the changing emphasis in international law from the effectiveness of norms to the legality of norms, however, and the heavy emphasis on good governance at both the national and the international level, one may ask whether the requirement of an effective government should not be replaced by a requirement of good governance. Such a change would imply, amongst other things, that states that do not practise good governance should be assisted by the international community (possibly through a reinstatement of the United Nations Trusteeship System) to rectify any deficiencies in order to function properly as fully-fledged members of the international community of states.
Conclusion
From this very brief overview it should be clear that, as global governance and good governance have become key concepts in public international law, no state can afford simply to ignore the political and legal consequences these notions might have for its own position within the international community of states. At the same time, the full deployment of these notions in public international law depends on agreement being reached within the international community of states on a number of issues. To reiterate, states will have to attain clarity and certainty on: whether or not to extend international legal personality to organisations and institutions other than the current international organisations and states; the contents of an international rule of law, together with at least a core of human rights that should bind all states irrespective of cultural differences; how to proceed with the democratisation process in undemocratic states and international organisations; the extent to which the sovereignty of states could and should be limited; and the elevation of the concept of good governance to an independent requirement of statehood.
This is surely easier said than done. But the reality of ongoing globalisation and the ever-increasing interdependence of states resulting from this process leave the international community of states with no option but to consider these issues seriously, without unreasonably hiding behind their own selfish interests. | 2017-09-06T05:19:03.086Z | 2011-06-20T00:00:00.000 | {
"year": 2011,
"sha1": "b49f27c9fabef902d0a78d666744f1849d6b367c",
"oa_license": "CCBY",
"oa_url": "https://journals.assaf.org.za/index.php/per/article/download/2611/2354",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b49f27c9fabef902d0a78d666744f1849d6b367c",
"s2fieldsofstudy": [
"Law",
"Political Science",
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
2242461 | pes2o/s2orc | v3-fos-license | Common Value Auctions with Voluntary and Qualified Entry
We study auctions under different entry rules. In the field, individuals self-select into auctions and regulations often require them to meet specific qualifications. In this experiment we assess the role of voluntary entry and financial requirements on the incidence of severe overbidding and bankruptcies, which are widespread in common value auctions. We show that voluntary entry amplifies overbidding and increases bankruptcy rates. Qualified entry has only modest impacts on overbidding. This study adds new insights to existing experiments where all subjects are usually placed exogenously into auctions.
INTRODUCTION
In common value auctions, bidders compete for an item that has the same value for everyone. Typically the item value is uncertain and bidders base their decisions on estimates of the true value, which is generally observed only after the auction is over. Canonical examples of this type of auction include procurement of public construction and public works projects, and leases and sales of government assets such as mineral extraction rights and the radio spectrum. Reverse auctions for private procurement of inputs or services also have a strong common value component. Persistent overbidding is a robust empirical finding for these types of auctions, both in naturally-occurring data and in data from controlled experiments (Wilson [1992]; Kagel and Levin [2002]). The winning bidder often incurs systematic losses, a phenomenon known as the "Winner's Curse." This study investigates the implications of sampling biases due to self-selection and restricted entry on the winner's curse using laboratory common value auctions. Our main research question is whether selecting bidders in different ways leads to an improvement in auction performance and bidding behavior. This could occur, for example, through additional opportunities for individual learning, since previous experimental research on the winner's curse shows that bidding performance improves over time. To vary learning opportunities, we consider a benchmark situation where bidders are randomly assigned to a given auction and compare it with two other situations where entry into auctions occurs either through the self-selection of bidders or through qualification, in which only the better-earning bidders can bid in the more risky and higher-stake auctions. One key measure of performance is bankruptcy rates, as bankruptcy generally implies a lack of completion of the transaction or task, generating a cost for society and the termination of (possibly long-term) supply relationships. We also compare behavior with the theoretical predictions for equilibrium bidding in these treatments, and seek to identify the characteristics of bidders who self-select into bidding in common value auctions. This study joins a wave of experimental investigations of specific auction rules that are of interest for field applications (Armantier, Holt and Plott [2013]; Merlob, Plott and Zhang [2012]).
This paper focuses on a comparison of different mechanisms through which the sample of participants is selected into bidding. In the field, auction participants self-select into bidding and are not a random sample of the population. We know little about the bidding behavior of those who seek to enter these auctions compared to the population at large. Moreover, the pool of potential bidders presents an additional selection bias because entry in many auctions is restricted by the auctioneer. This is particularly relevant for public procurement auctions, although it is present also in auctions with indicative bidding. 1 Almost all auction experiments, by contrast, do not disclose the nature of the experiment when recruiting subjects and require everyone to participate in auction bidding. 2 The only selection that takes place in the laboratory is generally through an eventual bankruptcy, which leads the subject to drop out of the auction, or low earnings that discourage subjects from returning to "experienced" sessions (Casari, Ham and Kagel [2007]). Moreover, very few auction experiments have considered endogenous entry. Nearly all of those consider the independent private values setting (Ivanova-Stenzel and Salmon [2004]; Palfrey and Pevnitskaya [2008]; Ertaç, Hortaçsu and Roberts [2011]), where the winner's curse does not occur since bidders know their own value with certainty. In these experiments bidders tend to enter the auction too often. Cox, Dinkin and Swarthout [2001] is the only previous experiment that studies endogenous entry in common value auctions. Subjects' alternative to auction bidding was the collection of a known "safe haven" payment. Cox, Dinkin and Swarthout [2001] study market size given that entry is endogenous. In our paper market size is fixed and we study the selection of bidders into markets. We allow subjects to choose between different bidding activities, and also compare different selection procedures for entry, not only voluntary self-selection.
It is important to study self-selection or "qualification" requirements to enter common value auctions because they could affect the extent and origins of the winner's curse, which is a severe departure from the predictions of the risk neutral Nash equilibrium whose source is still unknown. One leading interpretation is that bidders' reasoning fails to account for the adverse selection implicit in the winning event. Even if ex-ante estimates were unbiased for everyone, the winner is expected to have the highest estimate among all bidders. Hence, when conditioning on the event of winning, the winner's estimate will be (ex-post) biased upward (Charness and Levin [2009]). Regardless of the source of the winner's curse, substantial evidence exists that it fades away only very slowly, and only when bidders are allowed enough exposure to the task. Such convergence toward the equilibrium predictions is achieved through a combination of individual learning and harsh selection through the survival of the smartest (Casari, Ham and Kagel [2007]). Our design allows participants to gain some experience through low-stake tasks with a similar underlying logic, which sets up a more favorable situation for learning and sorting the most able bidders into the high-stake task. Moreover, we include an alternative task that is simpler than an auction, in which expectations about others' information or rationality levels play no role. It is possible that selection at entry also reduces or eliminates the winner's curse in common value auctions. In particular, there may be important welfare implications depending on whether the adjustment takes place through learning, survival, or selection at entry.
Policy measures that prevent bankruptcies, such as in public procurement, are intended to improve social welfare. As noted above, government auctions for public works and reverse auctions for private procurement have a strong common value component. Participation in such auctions is often highly regulated and limited to qualified bidders in order to prevent the bankruptcy of winning contractors. Such bankruptcies are in practice very costly due to the social cost of the consequent delay in the completion of public infrastructure, delivery of needed inputs or services, and the disappearance of the organizational capital embedded in a firm (goodwill). Our experimental design manipulates access to the auction markets in some ways analogous to these qualification regulations and assesses the impact on bankruptcy.

2. Theoretical models considering endogenous entry include Harstad [1990], Hausch and Li [1993], Levin and Smith [1994], and McAfee and McMillan [1987].
We report two main results. First, voluntary entry into common value auctions does not reduce the winner's curse, as the fraction of overbidders is higher when participants self-select into auctions than in the case of random assignment. Second, a simple version of the qualification procedure based on cumulative earnings does not eliminate winner's curse bidding but only marginally reduces it. Thus, voluntary entry does not improve auction performance or reduce the winner's curse in our experiment, both when compared to random assignment and to a simple qualified entry mechanism.
SELECTION IN FIELD AUCTIONS
A well-known example of the costly impact of overbidding and bankruptcies is the 1996 FCC auction for the C-block radio spectrum, which received winning bids of $10.2 billion. The FCC established that auction receipts would be collected through an installment plan that permitted the winning bidders to pay their debt obligations over a ten-year period. At the time, the C-block auction was viewed as a huge success. Several licensees later declared bankruptcy, however, and many others returned the bandwidth originally assigned to them. As a result, less than 10% of spectrum issued in the C-block auction was allocated as bid, with the remainder either tied up in lengthy bankruptcy court proceedings or returned to the FCC for re-auction (Committee on Commerce [1998]; Plott [2000]). This case shows the welfare cost of bankruptcy, both in terms of lost organizational capital for the winning entities and in terms of unused assets.
In order to avoid the socially undesirable outcome of bankruptcy by the selected contractor, it is common to require that bidders pre-qualify. For example, European law restricts participation in public work auctions through a certification system. To be certified a firm must meet several criteria regarding financial soundness and technical capabilities. In the European Union there are criteria concerning the current ability to successfully complete the project and others about recent experience in projects of similar type and amount (Directives 93/37/EEC, 97/52/EC, and 2001/78/EC).
In particular, a contractor can be excluded as unsuitable in accordance with criteria of economic and financial standing and of technical knowledge or ability. First, a bidder must not have already asked for bankruptcy protection. Second, each bidder is required to supply proof of good financial and economic standing in terms of guarantees by banks, balance sheets, or in other forms. 3 This provision helps ensure that the bidder will be able to absorb an eventual loss originating from a miscalculated bid without going bankrupt. In our experimental design, we adopt similar criteria to restrict participation in the auctions. 4 Third, a bidder must possess the technical capability to complete the project, including evidence about management's skills, equipment availability and current workforce. Fourth, the firms must have already had substantial experience in carrying out projects of the same type and scale. This experience refers to the proper completion of projects according to the rules of the trade in the last five years.

Interestingly, Dyer and Kagel [1996] mention specialization as a voluntary strategy of contractors in order to avoid the worst effects of the winner's curse. Within an economic model of auction bidding, specialization could provide a restriction in the support of the distribution of the private estimate of the object, or a reduction in its variance, which would reduce the common value component of the object. This aspect does not play a direct role in our experimental design.

3. National legislations that implemented the European Directives provide additional details. For instance, the Italian legislation (Law 109/94 and Ordinance DPR 34/2000) requires a deposit in a locked bank account. Higher discounts on the baseline budget of the procurement auction require a greater deposit. Deposit requirements range from a minimum of 2% for no bid discount, to 12% for a 20% bid discount, to 32% for a 30% bid discount. Legislation about qualifying bidding is common also in other nations; for instance, see the registration requirements for the Singapore Building Construction Authority [2014].
Some national legislation requires bidders to have experience in handling projects of a size comparable to the one currently bid. When a firm is on the official list of recognized contractors, it is generally authorized to bid in government auctions within a maximum baseline budget. 5 A newly established firm will have to acquire experience in small projects before being able to bid on large projects. This regulated progression from small to large value auctions could provide an effective solution to the high rate of bankruptcies of inexperienced bidders. Some auctioneers may not care about bidder bankruptcies, of course, and could view the overbidding and bidder losses as profitable outcomes to encourage. In private procurement settings, for example, buyers could benefit when suppliers suffer from the winner's curse. In other cases the auctioneer has long-term objectives and may have concerns for bidders' long-term viability. Consider for instance auction houses that may want to discourage overbidding as a way to maintain a good reputation among the public, or buyers of services or material inputs that will be delivered frequently over a long-term contract or are completed with a long horizon. The entry restrictions in public procurement and privatization settings indicate that some auctioneers place a value on avoiding bankruptcies.
4. According to canonical theory, the rationale for the second criterion may be to avoid the problem of rational "overbidding" when there is little to lose, which increases the risk of bankruptcy. Moreover, even in equilibrium, ex-post profits of the winner may be negative. The criterion on technical capability is not relevant in the abstract experimental design.
5. Italian law lists 47 distinct types of projects (art. 18 DPR 34/2000), where experience in a different type of project is irrelevant for pre-qualification. For instance, experience in building power plants does not help to qualify for maintenance works on the power grid, and experience in providing lighted road signs for highways does not help to qualify for bidding to supply non-lighted road signs. Moreover, a firm is placed in one of eight budget categories, each one characterized by a maximum budget ranging from a quarter of a million euros to fifteen million euros and above. In the recent past the firm must have successfully completed at least one project of the same type with a budget of at least forty percent of the maximum ceiling of the category (art. 3 DPR 34/2000). This condition could be met either with projects for the government or for the private sector.
THEORETICAL CONSIDERATIONS
All subjects placed bids in three activities: a high-stake, a medium-stake and a low-stake activity, which differed in the level of equilibrium earnings as well as in the level and type of risk.
The high-stake and medium-stake activities were common value auctions with identical rules, except for the level of the equilibrium earnings in dollars. In each period the item value $x_0$ was randomly drawn from a uniform distribution with upper and lower bounds [50, 950]. In each auction each bidder received a private information estimate, $x$, drawn from a uniform distribution on an interval centered on the actual item value, $[x_0 - 15, x_0 + 15]$. The instructions illustrate this situation with the following example: value of the item $x_0 = 328$; lower limit 313;
upper limit 343.
The private estimate $x$ may be anywhere in this interval. We implemented a first-price sealed-bid auction procedure: the high bidder paid her bid amount $b_1$ and earned profits equal to $x_0 - b_1$. For risk neutral bidders the symmetric risk neutral Nash equilibrium (RNNE) bid function $f(x)$ is given by Kagel and Richard [2001] as

$f(x) = x - \varepsilon + h(x),$  (1)

where

$h(x) = \frac{2\varepsilon}{n+1} \exp\!\left[-\frac{n}{2\varepsilon}\left(x - \underline{x} - \varepsilon\right)\right],$  (2)

with $\varepsilon = 15$ the half-width of the estimate interval, $\underline{x} = 50$ the lower bound of the value distribution, and $n$ the number of active bidders in the auction. This equilibrium bid function combines strategic considerations similar to those involved in first-price private value auctions, and item valuation considerations resulting from the bias in the estimate value conditional on the event of winning. We deal with the latter first.
In common value auctions bidders usually win the item when they have the highest, or one of the highest, estimates of value. Define $E[x_0 \mid X = x_{1n}]$ to be the expected value of the item conditional on having $x_{1n}$, the highest among $n$ estimate values; then

$E[x_0 \mid X = x_{1n}] = x - \varepsilon\,\frac{n-1}{n+1}.$  (3)

6. The Nash equilibrium solution and other theoretical aspects of common value auctions will be discussed only in reference to estimates in the interval $65 \le x \le 935$ (called region 2), where by design about 97% of the observations lie (Wilson [1977]; Milgrom and Weber [1982]). Within region 2, bidders have no end point information to help in calculating the expected value of the item.

This provides a convenient measure of the extent to which bidders suffer from the winner's curse since, in auctions in which the high estimate holder always wins the item, bidding above $E[x_0 \mid X = x_{1n}]$ results in negative expected profit. 7 In each activity there were $n = 5$ subjects, which previous studies suggest is sufficient for the winner's curse to emerge. With $n = 5$ the bid factor (defined as the signal minus the bid) that generates zero expected profits is 10.00, or approximately 67% of the total bid factor in the RNNE. 8

The low-stake activity was a company takeover game in which a buyer and a seller moved sequentially (e.g., Samuelson [1984]; Casari, Zhang and Jackson [2016]). In this activity there is no competition with other subjects and no strategic risk. We used this auction environment to provide a bidding activity for bankrupt subjects and those who wished to avoid bidding in the interactive common value auctions. Similar to the common value auction, subjects who fail to condition on the event of winning may suffer from the winner's curse. This bidding activity thus allows us to assess subjects' general and initial propensities to overbid, but in a simplified environment that eliminates strategic uncertainty and has smaller opportunities to gain and lower risk of losing money. In this game the buyer made a take-it-or-leave-it offer $b \in [0, 36]$ to a computer seller whose company's value was $s$. The seller either rejected or accepted the bid. The payoff for the seller was $s$ if she rejected and $b$ if she accepted. The payoffs for the buyer were 0 if the seller rejected and $(1.5s - b)$ if she accepted. The company could take any value $s$ between 6 and 24. When making a decision, the seller had private information about $s$, while the buyer only knew that each realization of $s$ had equal probability. The computer seller accepted all bids greater than or equal to the seller's company value.
Hence, the task was a bilateral bargaining problem against a computer with asymmetric information and valuations. The informational disadvantage of the buyer was offset by the assumption that the buyer's value was 1.5 times the seller value, $s$. A rational buyer had the following objective function (Holt and Sherman [1994]):

Rational objective: $\max_b \; \frac{b-6}{18}\left(1.5\,\frac{6+b}{2} - b\right).$

A bid of 12 is optimal for the risk-neutral rational buyer who accounts for the selection effect arising from the fact that sellers only accept bids that exceed their valuation $s$. This bid yields an expected profit of 0.5.

7. This design mostly followed Casari, Ham and Kagel [2007]. Even with zero correlation between bids and estimate values, if everyone else bids above $E[x_0 \mid X = x_{1n}]$, bidding above $E[x_0 \mid X = x_{1n}]$ results in negative expected profit as well. As such, if the high estimate holder frequently wins the auction, or a reasonably large number of rivals are bidding above $E[x_0 \mid X = x_{1n}]$, bidding above $E[x_0 \mid X = x_{1n}]$ is likely to earn negative expected profit.
8. This approximation is based on the fact that within region 2 the RNNE bid function is essentially $f(x) = x - 15$, because the negative exponential term $h(x)$ in equation (1) approaches zero rapidly as $x$ moves beyond 65.
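To make these benchmarks concrete, the sketch below evaluates the RNNE bid function (1)-(2), the winner's-curse threshold (3), and the takeover-game objective for the parameters used here. The code is our illustration of the standard formulas as reconstructed above (cf. Kagel and Levin [2002]), not software from the experiment:

```python
import math

EPS, N, X_LOW = 15.0, 5, 50.0  # estimate half-width, bidders, value lower bound

def rnne_bid(x, n=N, eps=EPS, x_low=X_LOW):
    """Region-2 RNNE bid: f(x) = x - eps + h(x), with h vanishing above x_low + eps."""
    h = (2 * eps / (n + 1)) * math.exp(-(n / (2 * eps)) * (x - x_low - eps))
    return x - eps + h

def curse_threshold(x, n=N, eps=EPS):
    """E[x0 | X = x_1n] = x - eps*(n-1)/(n+1); bids above this are winner's-curse bids."""
    return x - eps * (n - 1) / (n + 1)

x = 328.0                    # the estimate from the instructions' example
print(rnne_bid(x))           # ~313.0: essentially x - 15 deep in region 2
print(curse_threshold(x))    # 318.0: the zero-expected-profit bid factor of 10

# Takeover game: s ~ U[6, 24], buyer value 1.5*s, computer seller accepts b >= s.
def takeover_profit(b):
    p_accept = (b - 6) / 18          # Pr(s <= b) on the interior [6, 24]
    mean_s_accepted = (6 + b) / 2    # E[s | s <= b]
    return p_accept * (1.5 * mean_s_accepted - b)

bids = [b / 100 for b in range(600, 2401)]  # grid search over [6, 24]
best = max(bids, key=takeover_profit)
print(best, takeover_profit(best))          # 12.0 and 0.5, as stated in the text
```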
EXPERIMENTAL PROCEDURES

Overview
The activities in each session are outlined in Table 1. Each session had 15 subjects who underwent an investment part, a training part, and a main part. Each session opened with a simple task to measure subjects' preferences toward risk, along the lines of Gneezy and Potters [1997]. Everyone chose an amount up to $5 to place into a risky investment that yielded 0 or three times the invested amount with equal probability. The outcome of this risky investment decision was determined at the end of the session.
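As an aside on the incentives in this risk task, the expected return is increasing in the amount invested, so a risk-neutral subject would invest the full $5; the investment amount therefore serves as a simple index of risk tolerance. A minimal sketch of the payoff arithmetic (the framing is ours, not code from the experiment):

```python
def expected_payoff(invest, stake=5.0):
    """Gneezy-Potters task: kept cash plus a 50/50 gamble paying 3x or nothing."""
    kept = stake - invest
    return kept + 0.5 * (3 * invest)  # = stake + 0.5 * invest

for amount in (0, 2.5, 5):
    print(amount, expected_payoff(amount))  # EV rises from 5.0 to 7.5 with investment
```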
The training part was identical across treatments and aimed at familiarizing subjects with the various activities: low-stake, medium-stake and high-stake auctions. For each activity there was a sequence of one dry run (unpaid) period followed by three periods for profit. After each sequence, the participants were randomly divided into three independent markets with five bidders each. Subjects received full feedback at the end of each auction. 9 The starting balance was $10. In the low- and medium-stake activities, additional earnings in points were converted into dollars at a rate of $1 for every 4 points. In the high-stake activity, the conversion rate was $1 for every 2 points and each subject also received $0.25 in every period. As a result, the high-stake activity yielded equilibrium earnings more than five times as large as the low-stake activity (Figure 1). 10

9. In the medium- and high-stake auctions an admissible bid was any number between 0.00 and $x + 22.50$. This upper restriction on allowable bids was intended to prevent bankruptcies resulting from typing errors, while still permitting substantial overbidding. Bids could be specified in up to two decimal places. The instructions informed the subjects about the underlying distribution of $s$, $x_0$ and $x$. A copy of the instructions is included in the appendix.
10. This participation bonus does not change the optimal bidding strategy. It was necessary to make these auctions financially more attractive, since overbidding (documented below and throughout the common value auction literature) typically led to negative trading profits for the winning bidder (Table 2). Before each activity of the training phase, an experimenter read aloud the instructions while subjects followed along on their own copy. At the conclusion of these initial instructions subjects answered five computerized quiz questions to test their instruction comprehension for that activity, and were paid $1 for each correct answer. Besides providing incentives for subjects to consider the instructions carefully, this quiz also provided explanations for any wrong answers.
Treatments
The experiment involved three treatments, which differed in the way subjects were allocated into activities: random assignment, qualified entry, and voluntary entry. Five subjects bid in the low-stake activity, five in the medium-stake auction, and five in the high-stake auction. The allocation of participants into activities remained fixed for a block of five periods. At the start of every block subjects observed the list of all individual U.S. dollar profits earned in the previous block, sorted by activity and without identities. 11 The rule to allocate subjects to activities varied by treatment and was explained after the training phase.
Under the Random Assignment treatment, subjects were reassigned to activities through independent random draws at the start of every five-period block. 12

Under the Qualified Entry treatment, we assigned subjects to an activity according to a noisy measure of bidder ability based on past performance. All subjects in a session were ranked according to their accumulated point earnings at the start of every five-period block. This earnings ranking excluded the extra points assigned for merely participating in the common value auctions. The top five earners entered the high-stake auction, the bottom five earners entered the low-stake auction, and subjects ranked 6 to 10 were placed in the medium-stake auction. Any ties were broken randomly. The selection procedure was intended to place the more successful bidders (based on past performance) in the high-stake common value auction, similar to the good financial standing and successful bidding experience included in the qualification procedure for auctions in the field discussed in the section on selection in field auctions. Of course, the technical capability required for qualification is not a criterion relevant for a laboratory experiment. Past performance is partly due to luck, namely the randomness of the signals received. Past performance is also due in part to skill, such as the ability to avoid negative earnings and the winner's curse.

11. In one of the 12 sessions, more than five bidders were bankrupt during six of the final periods. In that case, we reduced the market size of the medium-stake auction to four bidders, with the number of bidders always posted on subjects' computer screens.

12. In principle it was possible for a subject to be assigned to the same activity in all periods. In practice this never occurred, except for one subject who was bankrupt in all periods due to large losses during the training periods. This subject thus always bid in the low-stake auction.

Figure 1. Actual earnings versus equilibrium earnings, by activity.
Under the Voluntary Entry treatment, subjects chose the activity in which they wanted to bid for the upcoming five-period block. They stated their first, second and third choice, and the allocation algorithm provided subjects with the incentive to truthfully reveal their preferences over activities without interference from strategic considerations about over- or under-subscription of activities. The algorithm first placed five subjects into the high-stake auction. Subjects obtained their first choice whenever possible. Since the capacity was five bidders in each auction activity, sometimes an activity was over-subscribed. In such cases the assignment to the high-demand activities was randomly determined among those who ranked that activity highest. When an activity was under-subscribed, we next allocated those subjects who ranked that activity as second choice. Subjects who did not get their first choice were placed into their second choice whenever possible. If there were still slots available, we then considered also those who ranked it third choice. The algorithm then placed five subjects into the medium-stake auction following the same rules as above. 13
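The two non-random allocation rules can be summarized in a few lines of code. This is a simplified sketch of the procedures just described; the function names and tie-breaking details are our own illustrative choices, and the voluntary-entry version compresses the full algorithm (see footnote 13) into a single loop:

```python
import random

ACTIVITIES = ["high", "medium", "low"]  # filled in this order, five slots each

def qualified_entry(earnings):
    """Qualified Entry: rank by cumulative point earnings; top 5 -> high,
    next 5 -> medium, bottom 5 -> low, with random tie-breaking."""
    order = sorted(earnings, key=lambda s: (-earnings[s], random.random()))
    return {s: ACTIVITIES[i // 5] for i, s in enumerate(order)}

def voluntary_entry(rankings, capacity=5):
    """Voluntary Entry: rankings maps subject -> [1st, 2nd, 3rd] choice.
    Fill high first, then medium, then low; within an activity, serve
    first-choicers before second-choicers, rationing randomly if needed."""
    assignment = {}
    slots = {a: capacity for a in ACTIVITIES}
    for activity in ACTIVITIES:
        for choice in range(3):
            pool = [s for s in rankings
                    if s not in assignment and rankings[s][choice] == activity]
            random.shuffle(pool)          # random rationing when over-subscribed
            taken = pool[:slots[activity]]
            for s in taken:
                assignment[s] = activity
            slots[activity] -= len(taken)
    return assignment
```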
Details
In all treatments, the number of bidders per high- and medium-stake auction was held constant at $n = 5$ to preserve comparability of results. 14 We provide full feedback each period about the activity outcome. In the low-stake activity, after every period a subject observed the realized company value for the buyer, their period earnings in points, and their cumulative balance in dollars. In the medium- and high-stake activities, each auction involved new random draws for the true item value ($x_0$) and for the private estimates ($x$). All bids were posted from highest to lowest along with the corresponding estimate values, all individual profits (or losses) (bidder identification numbers were suppressed), and the value of $x_0$.
13. Before proceeding to assign subjects to the medium-stake auction, the algorithm removed their preferences for the high-stake auction from their rankings, since this auction was already filled. Bidders were placed into a common value auction that they least preferred only twice (out of 261 non-bankrupt activity rankings). If a subject who could not get their first choice specified a common value auction as their second choice, then they were assigned this second choice when space was available. If a subject who could not receive their first choice instead indicated that the low-stake auction was their second choice, then they were placed in the low-stake auction unless the common value auctions did not yet have five bidders each.
14. For brief periods following bidder bankruptcies the bidder numbers fell below five. A subject is bankrupt if she had a negative US dollar cumulative balance. Because they are no longer liable for losses, bankrupt bidders may engage in irresponsibly high bidding. For this reason they were automatically assigned to the low-stake auction in all treatments. Cox, Dinkin and Swarthout [2001] find no evidence that limited liability increases the winner's curse. Since in our experiment some markets occasionally had fewer than five bidders, the number of bidders in the subject's market was always posted at the top of bidders' computer screens.
We recruited 180 subjects by email using ORSEE (Greiner [2015]), drawn from the diverse student population at Purdue University. Each treatment involved 60 subjects divided into four sessions. The experiment was programmed and conducted with the software z-Tree (Fischbacher [2007]). No eye contact was possible among subjects during the experiment due to visual dividers between computer stations. Average earnings were $20.08 per subject (standard deviation $7.20). Sessions lasted less than two hours, including instruction reading, quizzes, and a post-experiment questionnaire.
RESULTS
Here we report five main results. The focus is on voluntary and qualified entry into auctions with different stakes; the random assignment serves as a baseline that replicates the standard procedure in auction experiments. This section is organized into subsections that focus on the main treatment effects (Results 1-3), the impact of self-selection (Result 4), and the types of individual bidders (Result 5). Before presenting the main results, it is useful to comment on the patterns of bidder turn-over across the auctions. If the group composition is determined largely by stable preferences for a specific auction, then there may be little turn-over. A similar outcome may occur if the group composition depends on individual cumulative earnings, given that the high-stake auction generates higher potential profit. In the experiment, instead, the turn-over rates were generally high. In the Qualified Entry treatment only 20% of the bidders remained in the high-stake auction for the entire session, which is an average of one out of five bidders. This fraction was just 8% in the Voluntary Entry and 0% in the Random Allocation treatment. Bidder selection was more effective in keeping subjects out of the high-stake auction. About 53% never entered it in the Qualified Entry treatment, 32% in Voluntary Entry, and 15% in the Random Allocation treatment.
Main treatment effects
Self-selecting into their preferred activity is generally credited with improving agents' welfare. Our first result indicates, however, that assigning people to activities according to their revealed preferences made them worse off on average, despite preferences being elicited in an incentive-compatible way.
Result 1: The pooled profits from all activities were lower when subjects could voluntarily choose where to bid than in the other treatments.
Support: Table 2 reports the total profits earned by bidders. Subjects in the Voluntary Entry treatment on average earned over 20% lower profit compared to the other two treatments, and cross-sectional regressions shown in the appendix (with robust variance estimates clustering to account for intra-session correlation) indicate that these profits are significantly lower than in the Qualified Entry treatment (p-value = 0.039) and marginally significantly lower than in the Random Assignment control treatment (p-value = 0.075). Many participants randomly assigned to a common value auction often placed bids with negative expected profits. The data are thus consistent with the literature documenting the winner's curse (Kagel and Levin [2002]). Our novel finding is that the frequency of these winner's curse bids increased in the Voluntary Entry treatment.
Result 2: When bidders voluntarily entered the common value auctions, they suffered from the winner's curse more frequently than in the other treatments.
Support: Table 3 and Figure 2 provide support for Result 2. Table 3 summarizes the profits and frequency of winner's curse bids in the common value auctions for the three treatments (columns 1 and 2, and 4 and 5, respectively) based on the 25 periods following the initial training periods. Note: $b$ = bid, $x_0$ = item value, $x_{1n}$ = highest private estimate, RNNE = Risk Neutral Nash Equilibrium. Only periods with five bidders, pooling medium-stake and high-stake auctions, and value draws in region 2 are included. Column (1) reports the average period profits of the winner in each treatment (including relevant participation points), with standard errors in parentheses; (2) displays the average profits that would be earned at the RNNE for the realized value and estimate draws, with standard errors in parentheses; (3) indicates the percentage of auctions in which the bidder with the highest estimate won the auction; (4) and (5) show the percentage of bids that are winner's curse bids, which are defined as bids that exceed the item's expected value conditional on being the highest estimate. The winner's curse frequency in the Voluntary Entry treatment is nearly one-half of the bids, compared to about one-third of the bids in the other two treatments (Table 3). In order to compare the bidding performance of the auctions across treatments, we focus on the propensity to submit winner's curse bids using standard panel data econometrics. In particular, we estimate Probit models to compare overbidding across treatments, using robust variance estimates that allow for intra-subject and intra-session correlation. Treatment differences are assessed through dummy variables. These estimates are reported in the appendix, and they indicate that the winner's curse frequency is marginally significantly higher in the Voluntary Entry treatment compared to the Random Assignment control treatment (p-value = 0.056) and compared to the Qualified Entry treatment (p-value = 0.081). 15 The above comparisons refer to all bidders, but of course the directly payoff-relevant bids in a given period are the highest, winning bids. Here we consider winning bidders who bid above the conditional expected value and therefore suffered from the winner's curse (col. 5 of Table 3). Based on panel regression estimates shown in the appendix (with robust variance estimates for session clustering), we conclude that in the Qualified Entry treatment the 55 percent rate is significantly lower (p-value < 0.001) than the 76% rate in the Voluntary Entry treatment.
With experience, subjects learn in part to avoid the winner's curse, but learning appears delayed in the Voluntary Entry treatment. Figure 2 shows that the frequency of winner's curse bids starts at approximately one-half and then declines over time in all treatments. The decline starts earlier in the Qualified Entry and Random Assignment treatments, compared to the Voluntary Entry treatment where this frequency fluctuates upward in many early periods and remains near or above one-half of the bids until the final third of the session.

15. Small differences across treatments exist in the training periods, but these bids occur before any treatment manipulations are introduced. The same statistical tests applied to these training periods never reveal any significant differences across treatments. This indicates that the random assignment of subjects to treatments worked properly.
We now turn to performance under Qualified Entry.
Result 3: In the Qualified Entry treatment, pooled profits were indistinguishable from Random Assignment. Only with respect to severe overbidding were bidders in the Qualified Entry treatment marginally better.
Support: Tables 2, 3, and 4 provide support for Result 3. The difference in pooled profits between the Qualified Entry and Random Assignment treatments was not significant (p-value = 0.367). In terms of overbidding frequency, the Qualified Entry treatment is marginally significantly lower than the 67% rate in the Random Assignment treatment (p-value = 0.084, Table 3). In addition to profits and overbidding, another measure of performance is the rate of bankruptcies. By the end of the session, 5% of bidders had gone bankrupt in the Qualified Entry treatment, which was significantly lower than the 13% in the Random Assignment treatment (p-value = 0.005) and the 18% in the Voluntary Entry treatment (p-value = 0.003). 16
SELF-SELECTION: WHO CHOOSES TO ENTER THE HIGH-STAKE AUCTIONS?
Recall that in the Voluntary Entry treatment, every five periods the non-bankrupt subjects ranked the three activities and entered their most preferred activity whenever possible. When ranking activities, subjects' decision screens displayed the historical profit performance of individual bidders (shown anonymously) in each activity during the preceding block of periods. This information revealed that the high-stake auction exhibited the lowest average profit and the highest (variance) risk (Table 2), and therefore a subject who believes he would achieve typical earnings should avoid it. Nevertheless, the high-stake auction was the first choice in 40% of bidders' rankings, the medium-stake auction was the first choice in 32%, and the low-stake auction was the first choice in the remaining 27%. This suggests that subjects focused on factors other than the mean and variance of returns of the alternative bidding activities when choosing which auction to enter.
There exist several reasons to expect better performance of bidders in the Voluntary Entry compared to the Random Assignment treatment, but also reasons to expect worse performance, depending on the type of bidders who voluntarily enter the auctions. For this exploratory study we offer for consideration the following six factors through which entry might affect the frequency of bankruptcies and winner's curse bids. Factors 1, 2, and 3 point toward improved performance and factors 4, 5 and 6 point toward detrimental effects.
First, confused subjects may avoid the high-stake common value auctions. Those subjects who did not understand the rules of the common value auction and are forced to participate in the Random Assignment treatment may opt to stay out in the Voluntary Entry treatment. Second, subjects with no prior auction experience may stay out. In the field, bidders in highly complex auctions are generally professionals who specialize and self-select into that activity. These factors are conjectures based on a notion of ambiguity aversion (e.g., Chen, Katuscák and Ozdenoren [2007]).
Third, subjects who plan to place "passive" bids may enter in greater numbers in common value auctions. A bid is passive when the aim is not to be competitive and win, but instead to obtain the $0.25 participation payment awarded each period to bidders in the common value auction markets. This factor is specific to our experimental design and biases the experiment toward finding a better performance under the Voluntary Entry treatment. The experiment was calibrated to include this participation payment to maintain the attractiveness of the common value auctions in light of the large and systematic winner's curse. With the current design, this provides the opportunity for a small but risk-free payment each period for a bidder willing to bid passively. Thus, the average earnings by other aggressive bidders may be irrelevant for a subject who is considering a passive bidding strategy.
Fourth, subjects with greater tolerance for risk may enter in larger numbers into common value auctions and bid aggressively. This factor is also a conjecture, as there is no theoretical result providing unambiguous impacts of risk attitude on bidding in common value auctions, but in some circumstances more risk-seeking agents place higher bids (Kagel and Richard [2001]). Fifth, subjects who prefer contests and competition the most may enter more frequently into common value auctions. This factor is based on behavioral results that show how subjects' "joy of winning" is a component of the utility function in bidding activities (Cooper and Fang [2008]), even when it leads to negative earnings (Sheremeta [2010]). We posit that its influence is weakest in the low-stake auction because it does not involve a direct competition with other bidders. Sixth, overconfidence may also play a role (Camerer and Lovallo [1999]). It is not the presence of overconfidence per se that can damage the performance of self-selection into activities, but its correlation with abilities. If the degree of overconfidence is negatively correlated with the ability to bid, self-selection into the activities can make session participants worse off than random assignment.
With these factors in mind, we next explore systematically which characteristics influenced subjects' decisions on whether to enter the high-stake auction.
Result 4: Subjects who seek to enter the high-stake auction are more frequently male, have no previous experience in field auctions, have high cumulative earnings, and have avoided losses more frequently in previous common value auctions. Subjects who display a greater tolerance for risk are less likely to enter.

Support: Support for Result 4 comes from Table 5, which presents two Probit models of bidders' choice to rank the high-stake auction as their top choice. Model (1) includes as regressors the frequency of experienced losses and highest private estimates in earlier periods, and model (2) employs instead the subject's accumulated earnings balance up to the period of the entry choice. Since these earnings are endogenous, we use an instrumental variable approach that employs the frequency of receiving the high estimate in previous common value auctions and the period number as instruments for this variable. The results are consistent across both specifications. 17 The increased entry likelihood for male subjects is consistent with research documenting men's greater willingness to enter competitions (e.g., Croson and Gneezy [2009]). The estimates also show that factors 2, 4, and 5 discussed above are significant in influencing voluntary entry, although not always in the expected direction. Consider first the evidence on factors expected to improve the performance of the Voluntary Entry treatment (1, 2, and 3). Confusion does not appear to play a significant role. Table 5 includes variables to capture subject comprehension and confidence, but none of these variables are significantly associated with high-stake auction entry. 18 The high-stake auction does not attract bidders that have more auction experience in the field; in fact, it is more likely to attract naïve bidders (i.e., those who report no auction experience in the field), which may be an important reason for the high rates of the winner's curse and bankruptcy in this Voluntary Entry treatment. Our initial conjecture goes in the opposite direction to the empirical evidence. A possible interpretation is that high risk aversion is associated with low cognitive ability (Dohmen et al. [2010]), which is relevant for bidding in a complex setting such as common value auctions.
Passive bidders exist but are few in number. A risk-free bid for the current parameters is one that is 15 experimental points or more below a subject's value estimate. Such bids are certain to lie at or below the true common value, but they won only 2 of the 300 high-stake auctions after the training periods. Such bids represent only 5.8% of all high-stake auction bids after training. This rate of risk-free bidding was much higher in the Voluntary Entry treatment (12.2%), however, compared to the Qualified Entry (1.8%) and Random Assignment (3.5%) treatments. This provides evidence that bidding in the common value auctions varied depending on how bidders selected into the alternative bidding activities.

17. These models exclude some other factors that are never correlated with auction preference, such as self-reported grade point average, class standing, and major field of study. We also include a dummy for only the final block of periods, since all other period block dummy variables were never statistically significant. Estimates of similar models for preference for the medium-stake auction do not reveal any significant explanatory variables, so we do not report them here.
18. To measure confidence, after subjects read the instructions for the allocation rules once the training periods were over, we asked them "How do you think you will rank in terms of earnings among all participants?" There were five possible options, ranging from being among the three highest earners to being among the three lowest earners out of a group of fifteen. This "confidence" question was not incentivized.

Consider now the evidence on factors expected to make the performance of the Voluntary Entry treatment worse (factors 4, 5, and 6). Subjects who are most willing to take on risk according to our separate risk assessment task, investing at least $4 out of their $5 stake in an attractive but risky investment, are significantly less likely to want to enter the high-stake auction. This is opposite to what was conjectured, but is consistent with the substantially greater frequency of passive and risk-free bids submitted in the Voluntary Entry treatment mentioned above, indicating that these are submitted by the more risk averse bidders who entered this auction. These passive bidders were not the cursed winners who suffered losses and sometimes went bankrupt. 19 The frequency of both winner's curse bids and risk-free bids is higher in the Voluntary Entry than in the other treatments, which suggests that the high-stake auction attracts different types of bidders. Some cautious bidders enter but seek mostly to collect the high-stake auction participation payment rather than bid competitively, while some other aggressive bidders frequently suffer from the winner's curse, perhaps due to lower cognitive ability. On balance the latter group tends to dominate, since aggregate profits are lower and bankruptcies are higher in this treatment.
Types of individual bidders
The overall treatment comparisons reported above obscure substantial variation across individual subjects. Some subjects bid much higher than others and often go bankrupt, some overbid but do not always bid above the conditional expected item value, others bid closer to Nash equilibrium levels and avoid losses (but rarely win auctions), and a few subjects are passive and bid low, effectively withdrawing from the auction. In order to classify subjects into different types, we employ their median bid factor, where the bid factor here equals the bid minus the private estimate. This median is calculated over all post-training common value auction bids submitted by each individual.

Note: Classification based on median bid factors in post-training periods, pooling medium- and high-stake auction bids. The total number of classified subjects is 167, who submitted common value auction bids in the post-training periods. Of the 180 subjects who participated in the experiment, 13 always bid in the low-stake auction, often because they were already bankrupt during the training periods. Category b includes 21 subjects who fit the definition plus a 22nd subject who had a median bid factor of -17.9. This individual could have also been included in the withdrawal group, and this reclassification would have no influence on the conclusions drawn here.
Based on median bid factors, we classified 167 subjects into five categories, which are shown in Table 6. A small group of bidders had median bid factors of less than -28 and thus effectively withdrew from bidding. The risk-neutral Nash equilibrium bid factor was around -15, except for the infrequent cases of item values near the boundary of the value domain (outside of "region 2"). Bidders in category b had median bid factors within one unit of this level. The vast majority of bidders overbid compared to this benchmark, and our classification procedure divides them into those with a bid factor that implies typical expected winner's curse bids (bid factor > -10; category d) and those who overbid by a smaller amount (bid factor ≤ -10; category c).20 A small number of subjects (category e) had positive median bid factors, indicating bids that often exceeded their estimate. The last two classes, d and e, contain the subjects whose median overbid was large enough to exceed the conditional expected value, so they can be considered winner's curse bidders. The next result states that Voluntary Entry leads to more winner's curse bidders in the common value auction than does Qualified Entry.
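The typing rule lends itself to a few lines of code. The sketch below reproduces the classification logic described above; the exact boundaries separating categories b, c, and d are our reading of the text (the paper's own cutoffs may differ slightly), so treat them as illustrative.

```python
import statistics

def classify_bidder(bid_factors):
    """Assign a bidder type from the median bid factor (bid minus private estimate).

    Thresholds follow the description in the text and are illustrative,
    not the authors' exact cutoffs.
    """
    m = statistics.median(bid_factors)  # median over all post-training bids
    if m < -28:
        return "a: withdrawal"  # effectively stopped competing
    if abs(m + 15) <= 1:
        return "b: near the risk-neutral Nash bid factor of about -15"
    if m > 0:
        return "e: bids above own estimate"
    if m > -10:
        return "d: typical winner's curse overbidding"
    return "c: moderate overbidding (bid factor <= -10)"

# A bidder whose median overbid sits 5 points below the estimate
print(classify_bidder([-4, -6, -5, -7, -3]))  # -> d: typical winner's curse overbidding
```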
Result 5: Bidders prone to winner's curse bids participate more frequently in common value auctions in the Voluntary Entry treatment than in the Qualified Entry treatment.
Support: In the Voluntary Entry treatment about 52.6% of subjects are winner's curse bidders (row (f) in Table 6), which is not significantly different from the Random Assignment baseline (p-value = 0.285),21 and is on the threshold of marginal significance compared to the 35.3% in the Qualified Entry treatment (p-value = 0.106). In the Voluntary Entry treatment nearly one-half of the bids submitted in the common value auctions were placed by individuals who were classified as winner's curse bidders (row (g) in Table 6); by contrast, only about 30% of the common value auction bids were submitted by such bidders in the other two treatments. The difference in these frequencies between Voluntary Entry and Random Assignment is not quite statistically significant (p-value = 0.158), but the difference between Voluntary Entry and Qualified Entry is significant (p-value = 0.034).
CONCLUSIONS
Bankruptcies and the winner's curse are widespread and robust phenomena in common value auction experiments. Implications of these results for the field can be questioned because in naturally-occurring settings firms and individuals voluntarily decide whether to bid in auctions, and in many cases the auctioneer screens potential bidders in order to have only qualified participants. Qualifying bidders is especially important in procurement auctions, such as in public works projects. We report the first laboratory experiment on common value auctions that incorporates simplified versions of these entry mechanisms and study their impact on bidding behavior, profits, the winner's curse, and bankruptcies.

20. Individual median bid factors are highly correlated between the training periods and the post-training treatment periods (correlation coefficient = 0.85). Consequently, the bidder classification is similar in the training and the post-training periods. Individual subjects typically either remain in their same class or improve by one class, due to the general reduction in overbidding over time (illustrated in Figure 2).

21. The statistical tests reported in this paragraph are all based on Probit models shown in the appendix that employ robust variance estimates that allow for intra-session correlation.
To date, there is no shared theoretical explanation for the observed winner's curse phenomenon, although it is likely linked to cognitive limitations in statistical reasoning (Kagel and Levin [2002]; Casari, Ham and Kagel [2007]). Here we study whether self-selection and individual learning are behavioral mechanisms that can lessen the winner's curse and the bankruptcy rate. The design is motivated by the rules observed in the field, not by theory.
We report two main findings. First, letting auction participants self-select into the activities without barriers to entry has null or negative consequences on performance. Voluntary entry actually increases the fraction of overbidders in common value auctions compared to the benchmark of random allocation of subjects to auctions, and does not lower bankruptcy rates. This result is not due to more people entering the auction, as we kept market size constant. Thus, voluntary entry does not improve auction performance over random allocation of bidders.
Second, qualifying entry, using simplified criteria similar to those that restrict participation in large field auctions, reduces winner's curse bidding only marginally in comparison to random assignment of subjects to auctions. This small behavioral difference arises even though some of the past performance in winning and profits earned is due to luck (i.e., the particular signal draws) rather than just skill at avoiding overbidding and the winner's curse. Qualification also reduces the frequency of bankruptcies, as expected, but without fully eliminating them.
Some general considerations are in order. Previous experiments under random assignment report the importance of aggregate improvements in bidding over time. Such improvements may originate from a combination of individual learning and survival of the smartest through avoided bankruptcies. One main conclusion of this study is that individual learning does not substantially differ under the three different entry rules. Allowing participants to learn the logic of common value auctions with low stakes and then eventually opting for a high-stake task does not seem to reduce the winner's curse. We also find that entry rules impact bankruptcy rates but not pooled profits. While qualified entry almost mechanically reduces bankruptcies, the level of "ecological" rationality of the market does not improve once a degree of freedom is added in terms of voluntary entry. Ex ante, one could postulate some arguments to expect an improvement in performance and others to expect a deterioration. This study provides empirical evidence showing a net detrimental effect of self-selection in common value auctions. A larger sample of participants might have identified more precisely the type of self-selection at work. These arguments and explanations, though, remain exploratory, since the topic of entry rules has largely been neglected by the theoretical literature on auctions.
In the field, qualifications for entry are both financial (as in the experiment) and technical.This study shows that purely financial criteria help in reducing bankruptcy rates but fail to select the most competent bidders.Whenever there is a common value component in auctions, regulations about technical and experiential requirements should also be a key element in restricting entry to bidders.
Experiments can complement field data in the study of common value auctions because they overcome the unobservability of the individual private estimate and, to a lesser extent, of the true value of the object for each bidder. Consider, for instance, that most field auctions are hybrids with both private and common value components. The ideal field dataset to study the questions in this paper would be a pure common value good that is auctioned through a mechanism that undergoes an exogenous and unanticipated change in entry rules, while bidders remain constant in number and the underlying process generating estimates is unaffected. Although difficult to find, such a setting would help strengthen the external validity of this study.
Figure 1. Equilibrium and actual earnings of the three possible activities.

Table 1. Session chart. Note: Group composition could change after every block. Every line of the table is a block. Groups A + B + C = 15 participants, with 5 in each group.

Table 2. Mean period earnings (US dollars).

Table 3. Summary statistics for common value auction by treatment.

Table 4. Accumulated profits and bankruptcy rates. Note: Standard error of the mean shown in parentheses. Subjects began the session with a $10 endowment.

Table 5. Probit models of preference for high-stake auction (Voluntary Entry treatment only).

Table 6. Classification of bidders into types.
"year": 2016,
"sha1": "d0fa94f015288c4202055b965bacd26201bf9a3c",
"oa_license": "CCBYNC",
"oa_url": "https://cris.unibo.it/bitstream/11585/588249/6/Casari_auction_selection%20POst.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "0a0edf54ec89b9f07b126cb8208219e3d355415e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
The Replication Hypothesis along the Take-Off Run and a System of Equilibrium Equations at the Lift-Off of a Protobird
An extant bird resorts to flapping and running along its take-off run to generate lift and thrust in order to reach the minimum wing velocity required for lift-off. This paper introduces the replication hypothesis, which posits that the variation of lift relative to the thrust generated by the flapping wings of an extant bird, along its take-off run, replicates the variation of lift relative to the thrust by the flapping wings of a protobird as it evolves towards sustained flight. The replication hypothesis combines experimental data from extant birds with evidence from the paleontological record of protobirds to arrive at a physics-based model of the protobird's evolution towards sustained flight while scaling down the time span from millions of years to a few seconds. A second hypothesis states that the vertical and horizontal forces acting on a protobird when it first encounters lift-off are in equilibrium as the protobird exerts its maximum available power for flapping, equaling its lift with its weight and its thrust with its drag.
Introduction
Lift is often considered the primordial force leading the primitive, evolving bird, referred to here as a protobird, towards sustained flight, a pervasive concept in the field of flight biomechanics. A protobird is a non-flying, non-descript animal capable of running while generating thrust (and some "residual" lift) by flapping its wings. The limited flapping kinematics of a protobird is assumed to involve a low level of specific kinetic energy (per unit mass) available at its wings, which increases along evolution until reaching a critical level. Further increase in its kinetic energy allows it to reach level flight (L = W) without the possibility of ascending flight, at which point the flyer is referred to as a bird.
The use of the word "lift" may mislead one to the apparently foregone (and reasonable) conclusion that lift invariably lifts the protobird up during lift-off, resulting in spontaneous ascending flight. As a result, lift is frequently referred to as "fighting" gravity along the evolution to sustained flight [1,2]. Moreover, the mention of the words "flapping wing" brings to mind the word "lift", but rarely the concept of "thrust". This paper presents: (i) the replication hypothesis, applicable along the take-off run of a protobird, and (ii) a hypothesis that proposes the protobird to be in equilibrium when first encountering lift-off, a condition defined by two simultaneous equations.
Both of these hypotheses make use of the normalized lift, ηL, the normalized thrust, ηT, and the normalized drag, ηD (counterparts to the lift coefficient, CL, the thrust coefficient, CT, and the drag coefficient, CD), nondimensional numbers that have a physical meaning and can be applied directly.
The scalar scenario: The kinetic energy scenario is a more practical approach to calculating the average aerodynamic forces by flapping wings involving its translation velocity, v∞, and angular velocity, ω. The two relevant kinetic energies available at a flapping wing are shown in Figure 2 [4]. The lift L of a fixed translating wing of an aircraft during its take-off run or during cruise flight and the flapping wing of a flyer is, in both cases, perpendicular to the translation (or forward) velocity, v ∞ . The main difference between these two wings is that the wing of an aircraft, which has a fixed angle of incidence with respect to the fuselage, is able to generate only lift L, not thrust T. By contrast, the flapping wing has the ability to supinate (during the upstroke) and pronate (during the downstroke), allowing the airfoil to align itself to the incoming flow and is able to generate thrust T, Figure 1.
The wing velocity, vw, is the vector summation of the other two velocities:

vw(t, y) = v∞(t, y) + vf(t, y). (1)
It is seen that each of the above velocities varies with time t and wing span station y, so a double integral is in order when vw (actually vw²) is used for the calculation of the aerodynamic forces [2]. The "objective" (there are no objectives per se along evolution) behind a bird's take-off run is to equal (in a protobird) or exceed (in a bird) the minimum wing velocity, vw min, at which point lift L equals (or exceeds) weight W; the thrust T acts as the primordial force that allows the flyer to reach this all-important velocity. In this vector scenario, the wing velocity vw (which takes the role of the translation velocity, v∞, in the lift equation for fixed wings) is calculated by applying the labor-intensive blade element method.
The scalar scenario: The kinetic energy scenario is a more practical approach to calculating the average aerodynamic forces of flapping wings, involving the wing's translation velocity, v∞, and angular velocity, ω. The two relevant kinetic energies available at a flapping wing are shown in Figure 2 [4]. Note that the scalar scenario describes the relevant physics available at the complete wing, not at a given instant and spanwise station.
Borrowing a parameter from the scalar scenario, the legacy dynamic pressure, q∞, introduced by Prandtl circa 1921 [5], can be used to quantify the specific kinetic energy due to the translational speed, multiplied by the density of the medium:

q∞ = ½·ρ·v∞². (2)

An expansion of this dynamic pressure allows for the algebraic addition of the kinetic energy due to flapping, ek flap, a function of the average angular velocity due to flapping, ω. This expansion is referred to as the kinetic pressure, Q∞ (from kinetic energy-based pressure):

Q∞ = ½·ρ·[v∞² + (I/m)·ω²]. (3)

The angular velocity ω in the above equation describes the kinetics involved in the continuous rotation of propeller blades and lift rotors [3], as well as the cyclic wing flapping of birds, bats, and insects [2], as used in this paper and defined numerically by Equation (9) below.
The equation for lift L, thrust T and drag D by flapping wings can now be written as an aerodynamic force F that equals the product of the kinetic pressure, Q∞, a normalized force ηF (a counterpart to the lift, thrust and drag coefficients CL, CT, and CD), and the corresponding reference area, Sref [3]:

F = Q∞·ηF·Sref = ρ·(Σ ek i)·ηF·Sref. (4)

The lift L of flapping wings of wing planform area, Sp, as Sref, translating at a velocity, v∞, and flapping at an average angular velocity ω is

L = ½·ρ·[v∞² + (I/m)·ω²]·ηL·Sp. (5)

The above equation is written as

L = ½·ρ·vw²·ηL·Sp, (6)

where, according to Equation (1), the wing velocity, vw, equals

vw = [v∞² + (I/m)·ω²]^½. (7)

The specific moment of inertia I/m is the ratio of the wing's moment of inertia I, due to its mass distribution along its wing semispan r, and the mass m of the "lifting system", the protobird's. Assuming the wing has a mass distribution along its wing length r similar to that of a cylindrical rod about its end, its specific moment of inertia I/m is [5]

I/m = ⅓·r². (8)

The angular velocity ω of a flapping wing is a function of the flapping frequency f of the wing (the frequency of the upstroke and downstroke during flapping), in Hz (f is equal to half the stroke frequency, f st), and the amplitude or stroke angle Φ in radians, the flapping angle of the wing over one flapping stroke [2]:

ω = 2·f·Φ. (9)

The wing velocity, vw, Equation (7), is rewritten next by replacing the specific moment of inertia, I/m, by (⅓·r²) from Equation (8), and the angular velocity ω by (2·f·Φ) from Equation (9):

vw = [v∞² + ⅓·r²·(2·f·Φ)²]^½. (10)

Reorganizing terms, we obtain

vw = [v∞² + ⅓·(2·f·Φ·r)²]^½. (11)

The product Φ·r is the arc A subtended by the tip of the wing of half span r as it flaps at an amplitude angle Φ. This length multiplied by (2·f) results in the average tangential velocity vtt at the wing tip (the subscript tt stands for tip, tangential) during a flapping stroke. The average velocity of the wing, vw, is

vw = [v∞² + ⅓·vtt²]^½. (12)

The wing velocity vw is next written as a function of the Strouhal number, St = vtt/v∞, the ratio of the tangential velocity at the tip of the flapping wing, vtt, and its forward speed, v∞:

vw = v∞·[1 + ⅓·St²]^½. (13)

This definition of the wing velocity, vw, derived using the scalar scenario, varies slightly from Lentink and Dickinson's definition of its characteristic speed, U, derived from the vector scenario [6].
The kinetic pressure Q∞ in Equation (3) is written as

Q∞ = ½·ρ·v∞²·(1 + ⅓·St²) = q∞·(1 + ⅓·St²). (14)

The equation of lift L of flapping wings is

L = ½·ρ·v∞²·(1 + ⅓·St²)·ηL·Sp. (15)

For a Strouhal number, St = 0, the above equation equals the legacy lift equation of a translating, non-flapping wing of a gliding bird or an aircraft, where the normalized lift, ηL, equals the lift coefficient, CL, and the reference area, Sp, its wing planform area. The same format quantifies the thrust T of flapping wings:

T = ½·ρ·v∞²·(1 + ⅓·St²)·ηT·Sp. (16)

Likewise, the drag equation can also be written as a function of the Strouhal number, St, but for simplicity, this paper assumes the drag D of a flapping flyer to be similar to its non-flapping counterpart (ω = 0 ⇒ St = 0), of frontal area, Sf:

D = ½·ρ·v∞²·ηD·Sf. (17)

This equation is similar to the legacy drag equation, where the normalized drag, ηD, equals the drag coefficient, CD.
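To make the scalar-scenario algebra concrete, the following sketch evaluates Equations (2) and (13)-(17) for arbitrary inputs. It is a minimal illustration, not code from the paper; the function name and the numerical values of the normalized forces and frontal area in the example call are placeholders.

```python
import math

RHO = 1.225  # air density (kg/m^3), the sea-level value used in the case study

def flapping_forces(v_inf, f, phi, r, eta_L, eta_T, eta_D, S_p, S_f):
    """Evaluate the scalar-scenario quantities of Equations (2) and (13)-(17).

    v_inf: forward speed (m/s); f: flapping frequency (Hz);
    phi: stroke amplitude (rad); r: wing semispan (m);
    eta_*: normalized lift/thrust/drag; S_p, S_f: planform and frontal areas (m^2).
    """
    v_tt = 2.0 * f * phi * r                    # wing-tip tangential speed
    St = v_tt / v_inf                           # Strouhal number
    v_w = v_inf * math.sqrt(1.0 + St**2 / 3.0)  # wing velocity, Eq. (13)
    q_inf = 0.5 * RHO * v_inf**2                # dynamic pressure, Eq. (2)
    Q_inf = q_inf * (1.0 + St**2 / 3.0)         # kinetic pressure, Eq. (14)
    return {"St": St,
            "v_w": v_w,
            "L": Q_inf * eta_L * S_p,           # lift, Eq. (15)
            "T": Q_inf * eta_T * S_p,           # thrust, Eq. (16)
            "D": q_inf * eta_D * S_f}           # drag, Eq. (17)

# Case-study kinematics (f = 7 Hz, phi = 60 deg, r = 0.4 m); the eta values and
# S_f are placeholders chosen only to exercise the formulas.
print(flapping_forces(7.0, 7.0, math.radians(60), 0.4,
                      eta_L=0.45, eta_T=0.05, eta_D=0.5, S_p=0.0258, S_f=0.005))
```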
When the aerodynamic force vector (L or D) is positioned (close to) perpendicular to the respective reference area Sref (L ⊥ Sp, D ⊥ Sf), the maximum value of the corresponding dimensionless number (ηL max, ηD max) is expected to be close to 1 (this does not apply to thrust Taero, as it is not found close to perpendicular to the reference area Sp of a flapping wing, Figure 1). Notwithstanding the fact that the values of ηL max and ηD max are dependent on Reynolds numbers, the fact that these nondimensional numbers are close to 1 (i) allows them to be used as figures of merit for the comparison of dissimilar lifting systems (such as comparing flapping wings and rotating cylinders in Magnus effect on the same footing), (ii) allows them to be estimated when their values are unknown, and (iii) allows them to be interpreted as the ratio of specific work, w, and specific kinetic energy, ek [3], that is, ηF = w/ek. The reference areas Sref for normalizing lift L and drag D are thus selected as the wing planform area, Sp, and the frontal area, Sf, respectively. The reference area for normalizing thrust T is the wing planform area, Sp.
A First Hypothesis: The Replication Hypothesis along the Take-Off Run
For discussion purposes, a bird is defined as a flapping flyer capable of ascending (oblique or vertical) flight after reaching lift-off (L > W), and a protobird as a flapping flyer not capable of ascending flight following lift-off due to insufficient available flapping power, a condition referred to as level lift-off (L = W). An analogous protobird has a weight, wing loading, and wing velocity vw similar to those of a bird, as evidenced from its paleontological record. Although both a bird and an analogous protobird are involved in this hypothesis, it is not implied that one evolved from the other.
The replication hypothesis posits that the variation of lift relative to the thrust generated by the flapping wings of a bird along its take-off run closely replicates the variation of the lift relative to the thrust of an analogous protobird along its evolution. This hypothesis scales a process lasting millions of years to one lasting a few seconds, in an analogous way to how a wind tunnel test scales down the large size of an aircraft to a more manageable wind tunnel model. The replication hypothesis does not require any Reynolds number corrections.
The lift and thrust vectors generated by flapping wings along the take-off run and averaged over time are perpendicular to each other, Figure 1, and so the magnitude of the resultant vector R is

R = (L² + T²)^½. (19)

The angle of the resultant R with the horizon is referred to as the lift activity angle, θ:

θ = tan⁻¹(L/T). (20)

The lift activity angle θ along the take-off run may vary from approximately 10° to 90°, and is a measure of the ability of flapping wings to generate a lift L relative to the thrust T. Its progression as it applies to both bird and protobird is reviewed next by segmenting their take-off run from stand-still to lift-off into three stages.
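Equations (19) and (20) amount to a polar decomposition of the force pair (T, L); the snippet below, with made-up force values, shows that equal lift and thrust land exactly on the 45° crossover discussed next.

```python
import math

def resultant_and_activity_angle(L, T):
    """Resultant magnitude, Eq. (19), and lift activity angle in degrees, Eq. (20)."""
    return math.hypot(L, T), math.degrees(math.atan2(L, T))

# Hypothetical forces: equal lift and thrust give theta = 45 degrees
print(resultant_and_activity_angle(L=0.3, T=0.3))  # -> (0.424..., 45.0)
```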
Thrust-predominant Stage I: This stage is characterized by the prevalence of thrust T over lift L, most of it generated by friction between the soles of its feet and the ground or by paddling over water, Trun, and supplemented by the thrust due to flapping, Taero:

T = Trun + Taero. (21)

The aerodynamic thrust Taero is generated by the protobird's mechanism of pitch cycling about the spanwise axis of the wing, a wing's aeroelastic response to flapping in the form of a cyclic pronation and supination of the wing. The low kinetic energy cost of this pitch cycling process may have facilitated the role of thrust T as the primordial force along the protobird's evolution. More on this subject in Section 4.2.
Throughout this first stage, lift L plays a residual role on the flyer, and is illustrated by the flapping and paddling puffin, Figure 3, which finds its lift activity angle θ to be within a range of, say, 10° and 35°.

Thrust & Lift-shared Stage II: This second stage finds the lift activity angle, θ, varying between 35° and 50°, where lift shares similar values with thrust T, with a resulting gradual decrease in the generation of Trun due to the increase in lift that reduces the normal force between the soles of the flyer and the ground, as Trun equals the product of the friction coefficient μ and the normal force:

Trun = μ·(W − L). (22)

At a lift activity angle θ of 45°, the thrust T equals the lift L. This condition, a symbolic milestone referred to as the thrust-to-lift crossover, has no discerning dynamic effects on the bird or protobird.
Lift-predominant Stage III: The lift activity angle, θ, varies from 50° to 90°. Whereas the lift L in Stage I played a "residual" role on the flyer, it now represents a growing, dominant force that allows for the lift-off of a bird (L > W), and the level lift-off of a protobird (L = W). Level lift-off is reached by the protobird while it exerts maximum flapping power, and proper flight is initiated at ground level, without attaining ascending flight. A casual observer witnessing this level lift-off and subsequent flight may not be able to distinguish between "running" (with no contact between the ground and its soles, Trun = 0) and flying at ground level. Such a particular dynamic condition is possibly shared by all individuals of its and upcoming generations until a further gradual increase in flapping power allows for a gain in altitude, measured in centimeters. When capable of ascending flight (L > W), the protobird transitions to a bird.
At level lift-off, L = W, the flapping protobird reaches the minimum wing velocity, vw min, required for flight (at which point it is referred to as a bird), which implies it has also reached a combination of a minimum forward velocity v∞ min and flapping velocity vf min, Equation (7):

vw min = (v∞ min² + vf min²)^½. (23)

According to this equation, and for a given minimum wing velocity required for flight, vw min, a substantial increase of vf min may reduce v∞ min to zero, resulting in

vw min = vf min. (24)

At this condition, treated in Section 7.3, the direction of the resultant R reaches the vertical, and its magnitude, equal to the lift L, may be found to be lower, equal, or higher than the weight W. Figure 4 shows the case where R = L describes the vertical equilibrium during hovering flight.
An Application of the Replication Hypothesis
The replication hypothesis proposes that the progression of the lift relative to the thrust as generated by the flapping wings of a bird along its take-off run is equal or close to the progression of the lift relative to the thrust as generated by an analogous protobird of similar weight W, wing loading W/Sp, and wing velocity vw as it evolves towards sustained flight. The following case study is divided into three parts: (i) the generation of a bird's lift and thrust database, (ii) the calculation of relevant parameters common to the bird and analogous protobird, and (iii) the calculation of the normalized lift and thrust and the resulting graphs (i.e., history polars, as discussed in Section 6) of a protobird.
(i) Generating an experimental lift and thrust database along the take-off run of a bird. The database used here has been theoretically calculated for an ornithopter [9], is not experimental in nature, and does not represent a particular bird. The lift L and total thrust T are given in Table 1 as a function of forward speed along a limited interval of 5 m/s ≤ v∞ ≤ 10 m/s.
The total thrust T in Table 1 has been adapted from the original database by Malik et al. [9] by adding Trun to Taero, using Equation (21).

(ii) Calculation of the physically relevant parameters. The replication hypothesis proposes the lift and thrust database of a bird (preferably obtained experimentally) to also apply to an analogous protobird of similar weight, W, wing loading, W/Sp, and wing velocity, vw, as can be best deconstructed from the paleontological record. Hence, the bird and the analogous protobird are assumed to have a wing planform area, Sp, of 0.0258 m², a tip-to-tip span of 40 cm, a (constant) flapping frequency, f, of 7 Hz, and a flapping amplitude, Φ, of 60° (=1.04 radians), while living in a medium of density, ρ, of 1.225 kg/m³. The weight of both flyers is estimated at ≈0.6 N, which results in a wing loading W/Sp (=0.6 N/0.0258 m²) of 23.25 N/m². This simplified case study assumes all the above-mentioned parameters to be constant along evolution, an unlikely event in a process lasting millions of years. In the event of, say, a substantial increase in the wing semispan, r, and a reduction in the weight W of a more evolved protobird, such changes must be accompanied by a new lift and thrust dataset of an analogous bird that mirrors these variations.
Note that at v∞ = 10 m/s, the total thrust generated by the protobird is a mere 0.07 N and the lift L is 0.48 N, lower than its weight of 0.6 N. The lift L generated at this point is 80% of its weight W (=0.48/0.6), and the remaining 20% of the weight is counteracted by the ground's vertical reaction through its hind limbs. In other words, the protobird has lost 80% of its traction-generation ability for generating thrust Trun. When reaching the condition of zero traction, the protobird is said to have achieved level lift-off, at which point it generates a lift L = W while exerting maximum flapping power, a condition that differs from the lift-off of an accelerating bird. Even though the dynamics of a protobird in equilibrium during level lift-off differs from that of an accelerating bird at lift-off, their lift activity angles θ are expected to be close.
Graphing the lift and thrust in Figure 5 captures the thrust-and-lift crossover at a forward velocity v∞ ≈ 7.55 m/s.
The database in Table 1 can now be expanded to include the relevant morphological and kinematic data, resulting in Table 2. The lift activity angle θ, column 3, is calculated using Equation (20) and plotted as a function of the forward speed, v∞, in Figure 6.
(iii) Calculation of the normalized lift and normalized thrust. The available normalized lift ηL avail, in Table 3 is calculated by normalizing the lift L in Table 1 (that is, solving for ηL in Equation (15)). The required normalized lift ηL req is calculated using the same equation, but normalizing the protobird's weight, W, of 0.6 N, instead of its lift L. The normalized thrust ηT is obtained by solving for it in Equation (16). The flapping velocity, v f , column 5, is found squared in the second term contained in brackets in Equation (11); the wing velocity, v w , column 6, is defined by Equation (11); the Strouhal number, St, equals (2·f ·A/v ∞ ); and the kinetic pressure, Q ∞ , is calculated using Equation (14).
The normalized thrust, ηT, the available normalized lift, ηL avail, and the required normalized lift, ηL req, are plotted in Figure 7. The dashed curve in Figure 7 represents the required normalized lift, ηL req, necessary to achieve flight, which is higher than ηL avail throughout the velocity interval of 5 m/s ≤ v∞ ≤ 10 m/s. Hence, the bird and the analogous protobird are not capable of lift-off in this range of forward speeds.
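As a worked check on step (iii), the lines below normalize the sample point quoted above (v∞ = 10 m/s, L = 0.48 N, T = 0.07 N) with the case-study kinematics. The script is our illustration of Equations (14)-(16) and the required-lift relation, not the paper's own code.

```python
import math

# Case-study constants from the text
RHO, S_P, W = 1.225, 0.0258, 0.6                   # kg/m^3, m^2, N
F_FLAP, PHI, R_SEMI = 7.0, math.radians(60), 0.4   # Hz, rad, m

def normalize(v_inf, L, T):
    """Solve Equations (15) and (16) for the eta values at one table row."""
    St = (2.0 * F_FLAP * PHI * R_SEMI) / v_inf           # Strouhal number
    Q_inf = 0.5 * RHO * v_inf**2 * (1.0 + St**2 / 3.0)   # kinetic pressure, Eq. (14)
    denom = Q_inf * S_P
    return {"eta_L_avail": L / denom,
            "eta_L_req": W / denom,
            "eta_T": T / denom}

# At 10 m/s the ornithopter database gives L = 0.48 N and T = 0.07 N
print(normalize(10.0, L=0.48, T=0.07))
# eta_L_avail ~ 0.27 is below eta_L_req ~ 0.34, confirming L < W (no lift-off)
```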
According to the replication hypothesis, the normalized lift, ηL, and normalized thrust, ηT, at a forward speed, v∞, of, say, 7 m/s, Figure 7, correspond to an instant that finds the bird accelerating towards take-off, and also to a protobird that runs at the same maximum speed, v∞, that is, in horizontal equilibrium. Note that whereas the bird accelerates through the 7 m/s mark (T > D), the protobird is found running at this speed of 7 m/s in horizontal equilibrium (T = D), exerting maximum flapping power. Based on these different conditions encountered by the bird and protobird, a more appropriate graph is discussed in Section 6.
The Cost of the Kinetic Energy of a Wing's Cyclic Pitch
The normalized lift ηL and the thrust ηT are formally calculated by normalizing L and T by the total specific kinetic energy ek available at the flapping wing, as stated by the term Σek i in Equation (4). From a practical perspective, only two sources of kinetic energy have been considered, namely, the kinetic energy due to translation, ek trans, and due to flapping, ek flap. A neglected source is the wing's kinetic energy, ek pitch, due to its cyclic pitch around its spanwise axis as it pronates and supinates at each flapping cycle. The following equation for the normalized lift, ηL, follows more closely its definition, as it uses the total kinetic energy available at the flapping wings for normalizing L:

ηL = L/[ρ·(ek trans + ek flap + ek pitch)·Sp]. (25)

The specific kinetic energy, ek pitch, corresponds to the wing's cyclic pitch along an angle Φpitch of, say, 20° (0.35 radians), a kinematic mechanism that allows the wing to act as a propeller by the cyclic adjustment of its pitch angle to the incoming flow in order to generate thrust, Taero, and is equal to

ek pitch = ½·(I/m)·ωpitch². (26)

According to physics textbooks, the moment of inertia I of a rectangular flat plate representing, say, the left wing with a rectangular planform of root-to-tip span r (= 0.4 m) and chord c (= 0.0645 m) is equal to (1/12·m·c²) [5]. The angular velocity due to the cyclic pitch, ωpitch, Equation (9), equals (2·f·Φpitch). Replacing, we obtain

ek pitch = ½·(c²/12)·(2·f·Φpitch)². (27)

The specific moment of inertia, I/m, in Equation (27) equals 0.00035 m² (=0.0645²/12). The angular velocity due to cyclic pitch, ωpitch, equals 4.88 1/s (=2 × 7 × 20/57.3). Replacing these two values in Equation (26) results in the specific kinetic energy of the wing due to cyclic pitch, ek pitch, of 0.00417 m²/s² (=½ × 0.00035 × 4.88²), a negligible value when compared to the average kinetic energy ek available at the wing due to translation and rotation of 29.6 m²/s² (ek = ½·vw² = ½ × 7.7²), calculated for a wing velocity, vw, of 7.7 m/s. The combination of the important role of the wing's cyclic pitch in the generation of thrust and the accompanying low cost in kinetic energy may have contributed towards thrust being the primordial aerodynamic force along the evolution towards sustained flight.
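The order-of-magnitude claim is easy to verify; the following lines recompute Equations (26) and (27) with the stated wing geometry (variable names are ours):

```python
import math

f, phi_pitch = 7.0, math.radians(20.0)  # flapping frequency (Hz), cyclic pitch angle (rad)
c, v_w = 0.0645, 7.7                    # wing chord (m), wing velocity (m/s)

I_over_m = c**2 / 12.0                        # flat-plate specific moment of inertia
omega_pitch = 2.0 * f * phi_pitch             # Eq. (9) applied to pitch cycling
ek_pitch = 0.5 * I_over_m * omega_pitch**2    # Eq. (26); ~0.004 m^2/s^2
ek_wing = 0.5 * v_w**2                        # translation-plus-flapping value, ~29.6 m^2/s^2

print(f"ek_pitch = {ek_pitch:.5f} m^2/s^2")
print(f"ek_wing  = {ek_wing:.1f} m^2/s^2")
print(f"ratio    = {ek_pitch / ek_wing:.1e}")  # about 1e-4: a negligible cost
```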
A Second Hypothesis: Equilibrium during Lift-Off
Whereas the replication hypothesis uses aerodynamic data along the take-off of a bird and applies it to a protobird along its evolution (L < W), a second hypothesis states that the horizontal and vertical forces acting on a protobird as it reaches level lift-off (L = W, no ascending flight follows) while exerting maximum power during flapping are in equilibrium. This condition, T = D, is expressed by equating Equations (16) and (17):

½·ρ·v∞²·(1 + ⅓·St²)·ηT·Sp = ½·ρ·v∞²·ηD·Sf. (28)

Simplifying, we obtain

(1 + ⅓·St²)·ηT·Sp = ηD·Sf. (29)

This equation is not dependent on the dynamic pressure q∞ (= ½·ρ·v∞²).
When level lift-off is achieved by the protobird during the maximum power exerted for flapping, then L = W, and so

½·ρ·v∞²·(1 + ⅓·St²)·ηL·Sp = W. (30)

At this condition, a casual observer witnessing the lift-off of a protobird would not distinguish between its apparent "running" and its actual flying at ground level, as both actions share outwardly similar kinematics but different dynamics. This condition is mirrored by a non-running, non-flapping, flying paraglider, Figure 8, as in both cases they share the generation of lift equal to their weight and simulate the running action without generating thrust with their soles. With Equations (29) and (30) we have arrived at a system of simultaneous equations describing the dynamics of a protobird at the instant of level lift-off:

(1 + ⅓·St²)·ηT·Sp = ηD·Sf, (31)

q∞·(1 + ⅓·St²)·ηL·Sp = W. (32)
A cursory examination of these equations yields two observations: (i) The overarching aerodynamic ingredient found in both equations is the product of the parenthesis (1 + ⅓·St²) and the wing planform area, Sp. The result of this product is referred to as the expanded wing planform area, Sp':

Sp' = (1 + ⅓·St²)·Sp. (33)

Conceptually, the expanded wing planform Sp' is a fixed wing planform area with the same capability for generating lift and thrust as the flapping wings. As the Strouhal number, St, and/or the wing planform area, Sp, increases, so does the expanded wing planform area. Equations (31) and (32) are rewritten as

ηT·Sp' = ηD·Sf, (34)

q∞·ηL·Sp' = W. (35)

(ii) The second equation, Equation (35), is found "dampened" by the presence of the dynamic pressure, q∞, typically a small value during stages I and II. As this is a lift-related equation, the generation of lift L by flapping wings in their early evolutionary stages may have been adversely affected by its presence. Figure 7 in Section 4(iii) shows the graphs of the normalized lift, ηL, and normalized thrust, ηT, as a function of the forward speed, v∞, of an accelerating bird and an analogous protobird in equilibrium, a simplified correspondence, as the thrust (and accompanying lift) of an accelerating bird must be somewhat larger than the thrust (and lift) of an analogous protobird in equilibrium, both moving forward at the same speed, v∞. This issue is solved by replacing the forward velocity v∞ by the lift activity angle, θ, as the common independent parameter for the bird and analogous protobird.
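Before turning to the polar diagrams, note that the system (31)-(32) can be solved in closed form for the Strouhal number and forward speed at level lift-off. The sketch below does so under assumed values for the normalized forces and the frontal area (placeholders around the case-study scale, not numbers from the paper):

```python
import math

RHO = 1.225  # air density (kg/m^3)

def level_lift_off(eta_T, eta_D, eta_L, S_p, S_f, W):
    """Solve Eqs. (31)-(32) for St and v_inf at level lift-off (T = D, L = W)."""
    expansion = (eta_D * S_f) / (eta_T * S_p)   # equals (1 + St^2/3), from Eq. (31)
    if expansion < 1.0:
        raise ValueError("no solution: thrust-drag balance needs (1 + St^2/3) >= 1")
    St = math.sqrt(3.0 * (expansion - 1.0))
    q_inf = W / (expansion * eta_L * S_p)       # dynamic pressure, from Eq. (32)
    v_inf = math.sqrt(2.0 * q_inf / RHO)
    return St, v_inf

St, v_inf = level_lift_off(eta_T=0.05, eta_D=0.5, eta_L=0.45,
                           S_p=0.0258, S_f=0.005, W=0.6)
print(f"St = {St:.2f}, v_inf = {v_inf:.2f} m/s")  # ~1.68 and ~6.6 m/s for these inputs
```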
Polar Diagrams
A polar diagram documents the history of the progression of the resultant vector R and the lift activity angle, θ, of a bird along its take-off run, and of an analogous protobird along its evolution towards sustained flight, according to the replication hypothesis. The horizontal axis of the polar diagram represents the normalized thrust, ηT, and the vertical axis the normalized lift, ηL. Any point on the curve can be expressed by Euler's formula [11]:

R·e^(iθ) = R·(cos θ + i·sin θ). (36)

The resultant vector R is distributed inside the parentheses on the right-hand side of the equation, resulting in the addition of the thrust T and the lift L, the horizontal and vertical components of the resultant:

R·e^(iθ) = R·cos θ + i·R·sin θ = T + i·L. (37)

Replacing T and L by Equations (16) and (15), and taking the common factors out of the parentheses, we get

R·e^(iθ) = Q∞·Sp·(ηT + i·ηL). (38)

Dividing both sides by Q∞·Sp, we arrive at

ηR·e^(iθ) = ηT + i·ηL. (39)

The left-hand expression corresponds to the polar notation and the right-hand expression to the Cartesian notation (i.e., "x" and "y" coordinates).
The Instantaneous Polar: Characterizing the aerodynamic condition at an "instant" along a bird's take-off run or along the protobird's evolution towards sustained flight can be done with an instantaneous polar, a snapshot that documents the normalized resultant force, ηR, the lift activity angle, θ, the normalized thrust, ηT, and the normalized lift, ηL. Point A in Figure 9 documents the normalized thrust ηT A and normalized lift ηL A, the resultant ηR A and the lift activity angle θA = 60° (=π/3) of a bird and a protobird. Point B describes the same flyers' hovering flight at ground level or vertical ascending flight, as there is an absence of thrust, ηT B = 0, the normalized resultant ηR B equals the normalized lift, ηL B, and the lift activity angle, θB, is 90° (=π/2).
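A point on the polar is simply the complex number of Equation (39); the snippet below recovers (ηR, θ) from (ηT, ηL) for two hypothetical points resembling A and B of Figure 9 (the numeric values are invented for illustration):

```python
import cmath

def instantaneous_polar(eta_T, eta_L):
    """Return (eta_R, theta in degrees) for a polar point, per Eq. (39)."""
    eta_R, theta = cmath.polar(complex(eta_T, eta_L))  # magnitude and phase of eta_T + i*eta_L
    return eta_R, theta * 180.0 / cmath.pi

print(instantaneous_polar(0.25, 0.43))  # point A-like: theta close to 60 degrees
print(instantaneous_polar(0.0, 0.45))   # point B-like: pure lift, theta = 90 degrees
```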
With the information available, it cannot be said whether the lift L at point B is larger than, equal to, or smaller than the weight W; hence, the bird may be standing and flapping, hovering at ground level, or in a vertical climb.
This situation is visited below.
The History Polar: The history polar is a succession of instantaneous polars plotted for increasing values of the lift activity angle θ. A hypothetical history polar, Figure 10, depicted by a dashed line and divided into the three stages, I, II, and III (Section 3), is read from the stand-still of the protobird (point 0 in Figure 10), millions of years ago, counter-clockwise along the curve, evolving towards its intersection with the vertical axis, at which point the bird generates only lift, implying flapping at stand-still, hovering at ground level, or a vertical climb. To define this undefined flight condition, we refer to point 2, Figure 11, where the required normalized lift of the bird, ηL req, from Equation (18) and repeated below, is added to the history polar, coinciding with, say, point 2:

ηL req = W/(Q∞·Sp). (18)

The presence of this horizontal marker helps clarify the fact that, at point 2, the protobird reaches the condition ηL = ηL req, at which point it first encounters level lift-off, L = W, while "running" (actually flying) without the capability of ascending flight. Moving counterclockwise from point 2, the protobird, now referred to as a bird, generates L > W and is capable of ascending flight (i.e., oblique lift-off), reaching altitudes, at first, measured in centimeters.
The hypothetical history polar of a protobird, Figure 12, shows the first four points, 1 to 4, placed within a small range of lift activity angles, 45° < θ < 48°, which may imply its flapping wings may The presence of this horizontal marker helps clarify the fact that, at point 2, the protobird reaches the condition η L = η req , at which point it first encounters level lift-off, L = W, while "running" (actually flying) without the capability of ascending flight. Moving counterclockwise from point 2, the protobird, now referred to as a bird, generates L > W and is capable of ascending flight (i.e., oblique lift-off), reaching altitudes, at first, measured in centimeters.
Discussed next are (i) the possible presence of an initial polar slope and (ii) the thrust inflection point at the polar's "3 o'clock position", point 7, Figure 12. The thrust inflection point, represented by point 7, Figure 12, corresponds to the same point 7 in Figure 13, a Cartesian version of Figure 12, where the point number (1 to 10) is inscribed along the "x" axis, and the normalized lift, ηL, and normalized thrust, ηT, are found along the "y" axis. The thrust inflection point, point 7, is thus the point of maximum thrust generated along evolution of the protobird (in the case of a polar that intersects the "y" axis). The hypothetical history polar of a protobird, Figure 12, shows the first four points, 1 to 4, placed within a small range of lift activity angles, 45 • < θ < 48 • , which may imply its flapping wings may have had very low or no camber (i.e., acting as flat plates), and have been equally effective generating thrust in their upstroke and downstroke, and, as a result, also generating as much thrust as lift (η L ≈ η T ) along stage I. Under this assumption, there is no lift-to-thrust crossover to be expected. Admittedly, detecting the presence of this feature may be difficult to capture by documenting the aerodynamic forces along the take-off run of an extant (cambered winged) bird. This topic may be addressed instead by combining the analysis of the generation of lift and thrust by the numerical modeling of low and uncambered wing airfoils in ground effect using computational fluid dynamics, and any fossil evidence for the presence of low or uncambered wings (a significant challenging task).
The thrust inflection point, represented by point 7, Figure 12, corresponds to the same point 7 in Figure 13, a Cartesian version of Figure 12, where the point number (1 to 10) is inscribed along the "x" axis, and the normalized lift, η L , and normalized thrust, η T , are found along the "y" axis. The thrust inflection point, point 7, is thus the point of maximum thrust generated along evolution of the protobird (in the case of a polar that intersects the "y" axis).
The thrust inflection point, represented by point 7, Figure 12, corresponds to the same point 7 in Figure 13, a Cartesian version of Figure 12, where the point number (1 to 10) is inscribed along the "x" axis, and the normalized lift, ηL, and normalized thrust, ηT, are found along the "y" axis. The thrust inflection point, point 7, is thus the point of maximum thrust generated along evolution of the protobird (in the case of a polar that intersects the "y" axis). For a history polar to present a thrust inflection point, the bird may have to undergo substantial a decrease in weight, W, and/or wing loading W/Sp, and/or increase in wing velocity vw (as discussed in Section 7.3). Although these parameters have been assumed constant throughout the history polar, the equations presented in this paper allow for their variations during the construction of the history polar.
Case Studies Involving Level Lift-Off of a Protobird
This section presents three studies related to the level lift-off of a protobird using the same parameters as in Section 4. The wings operate at a normalized lift, η L of ≈ 0.45.
Vertical Forces during Level Lift-Off
We calculate the wing velocity vw min required for the generation of a lift L equal to the protobird's weight of 0.6 N, using the second of the two simultaneous equations, Equation (18). This results in a wing velocity, vw min, of 9.158 m/s that will generate a lift equal to the weight of the protobird, making possible its level lift-off, without the ability for ascending flight.
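As a quick numerical check (a sketch only: the wing planform area Sp comes from the Section 4 parameters, which are not reproduced here, so the value Sp ≈ 0.026 m² used below is an assumption chosen to be consistent with the result above; sea-level air density is also assumed; the lift balance is written as L = ηL · ½ρ vw² · Sp = W, treating the normalized lift as a coefficient referenced to the dynamic pressure and the planform area):

```python
import math

rho   = 1.225   # air density, kg/m^3 (assumed sea-level value)
W     = 0.6     # protobird weight, N (from the text)
eta_L = 0.45    # normalized lift at which the wings operate
S_p   = 0.026   # wing planform area, m^2 (assumed Section 4 value)

# L = eta_L * (0.5 * rho * v_w**2) * S_p = W  ->  solve for v_w
v_w_min = math.sqrt(2.0 * W / (rho * S_p * eta_L))
print(f"v_w_min = {v_w_min:.3f} m/s")   # ~9.15 m/s, close to the 9.158 m/s above
```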
Horizontal Forces in Equilibrium at Near Take-Off
At level lift-off, a flapping protobird translates at its maximum forward velocity v∞ min while exerting maximum flapping power, and attains a wing velocity vw min fly in order to generate L = W while flying at ground level. At this condition, its flapping wings generate a thrust T that equals its drag D, Equation (31). This equation calculates the normalized thrust ηT at which the wing is operating during the generation of thrust. The two unknowns are the frontal area, Sf, and the normalized drag, ηD.
The frontal area Sf is calculated using the allometric equation suggested by Pennycuick et al. for passerine species [12]. For a mass m of 0.0612 kg, we obtain Sf ≈ 0.0025 m². This value assumes a streamlined body aligned to the flow and hindlimbs retracted. In contrast, the translating protobird may have a more erect body attitude and have its hindlimbs exposed (causing high drag, as they involve small Reynolds numbers). The frontal area, Sf, is thus doubled, estimated to be ≈ 0.005 m². The normalized drag ηD in Equation (17) equals the drag coefficient CD, using the frontal area Sf as Sref, and is calculated using the following regression equation [13]: ηD = CD = 0.82 − (7.5 × 10⁻⁶)·Re = 0.82 − (7.5 × 10⁻⁶) × 40,000 = 0.52 (45). Using the same rationale, the protobird with a more erect body and exposed hindlimbs is estimated to have a higher normalized drag ηD, assumed to be ≈ 0.65. The right-hand side of Equation (44) (= ηD·Sf) equals 0.00325. Solving for the normalized thrust results in ηT ≈ 0.121.
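A sketch of this horizontal balance follows, under two assumptions: the normalized forces act like coefficients referenced to the dynamic pressure and their respective reference areas (Sp for thrust, Sf for drag), so that T = D reduces to ηT·Sp = ηD·Sf with the dynamic pressure cancelling; and the planform area Sp ≈ 0.0269 m² is a stand-in for the Section 4 value, chosen to reproduce the ηT ≈ 0.121 quoted above.

```python
Re = 40_000                               # Reynolds number used in Eq. (45)
eta_D_streamlined = 0.82 - 7.5e-6 * Re    # regression of [13] -> ~0.52
eta_D = 0.65    # assumed for an erect body with exposed hindlimbs
S_f   = 0.005   # frontal area, m^2 (doubled allometric estimate)
S_p   = 0.0269  # wing planform area, m^2 (assumed Section 4 value)

# T = D  ->  eta_T * q * S_p = eta_D * q * S_f, and q cancels out
eta_T = eta_D * S_f / S_p
print(f"eta_D (streamlined) = {eta_D_streamlined:.2f}")   # 0.52
print(f"eta_T = {eta_T:.3f}")                              # ~0.121
```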
Effect of Flapping Frequency fmin on the Forward Speed v∞ min Required for Lift-Off
The minimum wing velocity, vw min, required for generating a lift equal to the weight of the protobird for initiating level lift-off, Equation (10), is rewritten below. Level lift-off is reached when a combination of minimum forward speed, v∞ min, flapping frequency, fmin, and minimum amplitude, Φmin, for a given wing length r results in the minimum wing velocity vw min required for generating a lift equal to its weight.
This section calculates the required increase of the minimum wing flapping frequency, fmin, as a hypothetical protobird evolves from a level lift-off (L = W) at a translation speed, v∞ min, flapping at fmin = 7 Hz (point B in Figure 15), towards a "hummingbird-type" flyer capable of hovering at L = W (without lifting off vertically), point C in Figure 15, at v∞ min fly = 0. For simplicity, we assume a constant wing velocity, vw min, of 9.158 m/s, as calculated in Section 7.1. The subscript "min" indicates that the parameters have the minimum value that is required for generating L = W. From Equation (46), we solve for fmin, where vw min = 9.158 m/s.
Keeping the wing velocity vw min constant (not realistic in evolutionary terms), the translation velocity, v∞ min, is varied from 9 m/s to zero, resulting in an increase in the flapping frequency fmin, as shown by the dashed arrow in Figure 15.
The initial point A in Figure 15 shows the protobird taking off with its fixed wings, in "airplane mode", a condition that does not concern us. Point B is the initial condition of this case study, which finds the early protobird flapping its wings at a frequency fmin of 7 Hz while translating at a velocity, v∞ min, of 9 m/s (Table 2). Evolving towards point C, v∞ min → 0, and the required flapping frequency, fmin, increases to 37.87 Hz. At this point, L = W and the (now) bird is capable of a hummingbird-like hover at ground level. If fmin > 37.87 Hz, the bird is capable of vertical ascending flight. As a reference, current hummingbirds hover by flapping their wings at frequencies between 12 and 80 Hz [14].
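The sweep from point B to point C can be reproduced with a short sketch. The kinematic composition used here, vw² = v∞² + (2·Φ·f·r)², treats the mean flapping-tip speed 2·Φ·f·r as the second component of the wing velocity; this form, together with the wing length r ≈ 0.1155 m, is an assumption chosen to be consistent with Equation (46) (not reproduced in full here) and with the Table 2 values quoted above (f = 7 Hz at v∞ = 9 m/s, f ≈ 37.9 Hz at v∞ = 0).

```python
import math

v_w_min = 9.158             # constant wing velocity, m/s (Section 7.1)
phi     = math.radians(60)  # flapping amplitude, rad (60 deg, as in the paper)
r       = 0.1155            # wing length, m (assumed; fits the Table 2 values)

def f_min(v_inf: float) -> float:
    """Flapping frequency needed so that the wing velocity stays at v_w_min.

    Assumes v_w^2 = v_inf^2 + (2 * phi * f * r)^2, i.e. the mean flapping
    tip speed 2*phi*f*r composes with the forward speed.
    """
    return math.sqrt(v_w_min**2 - v_inf**2) / (2.0 * phi * r)

for v_inf in (9.0, 6.0, 3.0, 0.0):
    print(f"v_inf = {v_inf:4.1f} m/s  ->  f_min = {f_min(v_inf):5.2f} Hz")
# v_inf = 9 m/s -> ~7.0 Hz (point B); v_inf = 0 -> ~37.9 Hz (point C)
```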
The evolution towards vertical lift-off is a process lasting vast periods of time and, in all likelihood, is accompanied by one or more (or all) of the following changes: a substantial reduction in weight, a reduction in wing loading, an increase in the length of the wing (towards a higher aspect ratio, 2·r/c), a substantial increase in flapping amplitude (to possibly double the amplitude of 60°, as considered in this paper) and, most importantly, an increase in the power available, allowing for the high flapping frequencies that the results in this section may indicate.
If the fossil record shows evidence that one or more of these changes occurred, a detailed history polar of a protobird reflecting these changes must rely on an appropriate (experimental) lift and thrust database from corresponding birds.
Conclusions
The common theme throughout the paper is that flapping wings contribute to the key survival skill of a predator or a prey, namely, forward speed. In other words, the "objective" during evolution (there is no objective per se during evolution) may be maximizing forward speed, not lift, as thrust is the common thread along the evolution towards sustained flight. Ironically, the generation of thrust disappears when the bird reaches the epitome of flight: vertical lift-off and hover. Relatively small flapping wings may contribute thrust to run faster (e.g., the chicken). Larger wings may contribute to lift-off after a lengthy take-off run, which increases the forward velocity. Smaller wings flapping at high frequency may end up enabling the bird to lift off vertically, an operating condition that has no need for the generation of thrust. A bird that possesses the ability to lift off vertically and hover can relinquish the need for thrust.
All these aerodynamic conditions can be evaluated experimentally and numerically by applying the replication hypothesis and the simultaneous set of equilibrium equations at the moment of level lift-off.
Reaching a consensus on an idea, which is usually expected to occur over time, seems particularly challenging when delving into the subject of the origin of sustained flight. A reason for this may be the lack of a numerical approach; as a result, jumping, leaping, climbing, and falling have appeared to be credible steps by flapping wings in their early evolution towards sustained flight. Little can be done to counter these arguments without a physics-based (that is, numerical) approach to the subject.
The author applauds the experimental approach (i.e., WAIR, [15]) used by Professor Dial and coauthors, which shows the importance of flapping in the generation of thrust by an evolving protobird. I dare argue that the WAIR experiments are close in spirit to the first stage of the replication hypothesis, as they help generate the experimental lift and thrust database that helps construct the analogous (early) protobird's history polar.
Any approach towards promoting a hypothesis that explains the origin of sustained flight is, by its nature, conjectural and, as such, unlikely ever to be tested. On the other hand, Sir Humphry Davy stated that the only use of a hypothesis (or two) is to lead us to experiments that guide us to facts. With these thoughts in mind, this paper presents two falsifiable hypotheses, the replication hypothesis and the equilibrium hypothesis. The first hypothesis applies (preferably experimental) lift and thrust data obtained from a bird along its take-off run to an analogous protobird, as deconstructed from the paleontological fossil evidence. The aerodynamic basis of the evolutionary process (basically the ratio of lift to thrust) towards sustained flight of flapping wings, lasting millions of years, is thereby scaled down to a matter of seconds. As the maximum forward velocity of the "running" protobird gradually increases and finally reaches level lift-off, the second hypothesis suggests it is subjected to vertical and horizontal forces in equilibrium.
This multidisciplinary approach, with contributions from physics (Newton's 1st and 2nd laws applied throughout this paper), aerodynamics (Prandtl's q ∞ ), paleontology (Darwin) and mathematics (Euler's equation), allows for the numerical and graphical fine-tuning of the protobird's model along its evolution to account for possible variations in weight, morphology, kinematics, and dynamics over time, according to the latest findings in the fossil record. | 2019-06-13T13:20:57.658Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "629d592060ba6f102143fc6e4b92110b8fa22b33",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2226-4310/6/2/21/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "db0d313c821653b11a70799d749a9757833ef8b4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
12636536 | pes2o/s2orc | v3-fos-license | Canine Uterine Bacterial Infection Induces Upregulation of Proteolysis-Related Genes and Downregulation of Homeobox and Zinc Finger Factors
Background Bacterial infection with the severe complication of sepsis is a frequent and serious condition, being a major cause of death worldwide. To cope with the plethora of occurring bacterial infections there is therefore an urgent need to identify molecular mechanisms operating during the host response, in order both to identify potential targets for therapeutic intervention and to identify biomarkers for disease. Here we addressed this issue by studying global gene expression in uteri from female dogs suffering from spontaneously occurring uterine bacterial infection. Principal Findings The analysis showed that almost 800 genes were significantly (p<0.05) upregulated (>2-fold) in the uteri of diseased animals. Among these were numerous chemokine and cytokine genes, as well as genes associated with inflammatory cell extravasation, anti-bacterial action, the complement system and innate immune responses, as well as proteoglycan-associated genes. There was also a striking representation of genes associated with proteolysis. Robust upregulation of immunoglobulin components and genes involved in antigen presentation was also evident, indicating elaboration of a strong adaptive immune response. The bacterial infection was also associated with a significant downregulation of almost 700 genes, of which various homeobox and zinc finger transcription factors were highly represented. Conclusions/Significance Together, these finding outline the molecular patterns involved in bacterial infection of the uterus. The study identified altered expression of numerous genes not previously implicated in bacterial disease, and several of these may be evaluated for potential as biomarkers of disease or as therapeutic targets. Importantly, since humans and dogs show genetic similarity and develop diseases that share many characteristics, the molecular events identified here are likely to reflect the corresponding situation in humans afflicted by similar disease.
Introduction
Bacterial infection with the severe complication of a systemic inflammatory host response (sepsis) is a serious condition and the most common cause of death in intensive care units at hospitals, with a global incidence that continues to rise [1,2]. Despite this, our knowledge of the complex pathophysiology of sepsis is still incomplete. Diagnosis of sepsis in critically ill patients is demanding because of unspecific clinical signs and imprecise traditional markers [3]. To improve current diagnostic methods for sepsis, it is therefore central to identify clinically useful biomarkers that may facilitate early and precise diagnosis [4,5,6]. Biomarkers may also constitute potential targets for novel treatments of bacterial infections, severe inflammation and sepsis [7].
Dogs are commonly used in experimental studies of sepsis as well as in safety assessment studies of pharmaceuticals since their inflammatory response is similar to humans [8,9]. It is also important to stress that, following the sequencing of the canine genome [10], dogs are currently emerging as attractive models for studying the genetic background for diseases. Bacterial uterine infection (pyometra) is a common disease that develops in 25% of all intact female dogs [11]. The disease is characterized by mainly Gram-negative infection in combination with severe local and systemic inflammation [12]. Pyometra is lethal if left untreated and patients may develop endotoxemia, sepsis or septic shock [13,14].
The most effective treatment is acute surgical removal of the uterus and ovaries (ovariohysterectomy).
Bacterial uterine infection in dogs has many similarities with severe bacterial infections in humans. For example, infection in both species is associated with induction of local and systemic inflammation, cytokine production, an acute phase reaction, endotoxemia and induction of subsequent sepsis. Therefore, an examination of disease mechanisms involved in pyometra may provide important insights into the mechanisms operating during human bacterial infection and sepsis [15,16]. Here we used Affymetrix microarray technology to investigate the mechanisms involved in pyometra. We report that pyometra causes dramatic effects on the uterine gene expression pattern. A large number of genes associated with both innate and adaptive immune responses were upregulated, and there was also a striking upregulation of a wide array of proteases and protease inhibitors. Moreover, the uterine disease was clearly associated with downregulation of a panel of transcription factors of homeobox and zinc-finger type.
Animals
This research study was conducted according to national regulations (The Animal Welfare Act and Ordinance, The Swedish Ministry of Agriculture) and international guidelines (the European Convention and the European Commission's Directive 86/609/EEC on protection of animals used for experimental and other scientific purposes). The study was covered by an application approved by the Uppsala Animal Ethics Committee, Uppsala, Sweden. The dogs were privately owned patients admitted and treated according to the routines at the University Animal Hospital, Swedish University of Agricultural Sciences, Uppsala, Sweden. Written owner consent was obtained before any dog was included.
Fifteen female dogs admitted to the University Animal Hospital, Swedish University of Agricultural Sciences, for diagnosis and subsequent surgical treatment (ovariohysterectomy, OHE) of pyometra were included in the study. The control group consisted of 6 healthy female dogs admitted for elective spay (OHE). Case history and physical examination data were noted by the veterinarian in charge on a specific form at admittance, and continued daily during the hospital stay.
Blood-and Tissue Sampling
Blood samples for analysis of haematological and serum biochemical parameters were collected from all dogs before surgery from the distal cephalic vein into either non-additive, EDTA-containing or heparinized collection tubes (Becton-Dickinson, Stockholm, Sweden), chilled on ice and centrifuged. Plasma and serum were stored at −80°C until analysis. The removed uterus was cut open and a fibre swab (Culturette; Becton Dickinson AG) was used to sample the uterine contents for bacterial culturing. Tissue biopsies were snap-frozen in liquid nitrogen and stored at −80°C. The remaining uterine tissue and the ovaries were formaldehyde-fixed and used for histopathological examination.
Histopathological Examinations
Pyometra diagnosis was performed by gross and histopathological examinations of haematoxylin-eosin-stained sections of uteri and ovaries. Diagnostic criteria for pyometra (with or without concomitant macro- and/or microscopically visible cystic dilatation and epithelial hyperplasia of endometrial glands) were uterine distension of varying degree, with macroscopically visible opaque, yellowish-to-brownish exudate in the uterine lumen, and microscopically visible purulent inflammatory changes in the endometrium and cystic glands.
Microarray Expression Analysis and Data Analysis
Total RNA was prepared from cross-sections of frozen uterine tissue samples, using Nucleospin RNA II (Macherey-Nagel, Düren, Germany). Affymetrix gene chip microarray analysis was performed using the Canine Genome 2.0 Array, as described [18]. The raw data were normalized using the robust multi-array average (RMA) method [19] to obtain background-adjusted, normalized and log-transformed summarized values. An empirical Bayes moderated t-test was applied to search for differentially expressed genes [20]. The p-values were adjusted to avoid the problem of multiple testing [21]. The Genesis software, version 1.7.1 (http://genome.tugraz.at/), was used to perform hierarchical clustering and to visualize differentially expressed genes [22]. All data are MIAME compliant, and the raw data have been deposited in a MIAME-compliant database (accessible via GEO, accession no. GSE17878).
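For illustration, a rough sketch of this per-gene analysis is given below, using an ordinary two-sample t-test with Benjamini-Hochberg adjustment as a simplified stand-in for the empirical Bayes moderated t-test of [20] (which in practice is computed with dedicated tooling such as limma in R). The expression matrix here is randomly generated, purely hypothetical data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_genes = 1000
# Hypothetical RMA-summarized log2 expression: 4 pyometra and 4 control samples
expr = rng.normal(8.0, 1.0, size=(n_genes, 8))
pyo, ctrl = expr[:, :4], expr[:, 4:]

t_stat, p = stats.ttest_ind(pyo, ctrl, axis=1)   # per-gene two-sample t-test
log2_fc = pyo.mean(axis=1) - ctrl.mean(axis=1)   # log2 fold change
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")

# The paper's thresholds: more than 2-fold change and adjusted p < 0.05
up = (log2_fc > 1.0) & (p_adj < 0.05)
down = (log2_fc < -1.0) & (p_adj < 0.05)
print(f"upregulated: {up.sum()}, downregulated: {down.sum()}")
```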
Real-Time PCR
RNA was prepared using NucleoSpin (Macherey-Nagel, Germany). First-strand cDNA was synthesized with SuperScript II Reverse Transcriptase (Invitrogen, Inchinnan, UK) according to the manufacturer's instructions, with 5 µl RNA and a final concentration of 7.5 ng/µl random hexamers (Invitrogen) in a total volume of 20 µl. The cDNA concentration was determined, and the cDNA was diluted to 800 ng/µl. Real-time PCR (qPCR) was performed on an ABI PRISM 7900 HT using iQ SYBR Green Supermix (BioRad, CA, USA) in a total volume of 10 µl, containing 80 ng cDNA and a final primer concentration of 100-300 nM. PCR cycling conditions included a 95°C heating step of 10 min at the beginning of every run. The samples were then cycled 40 times at 95°C for 30 s (denaturation), 58°C for 20 s (annealing) and 72°C for 20 s (extension). A melting curve from 60°C to 90°C was generated at the end of every run. Prior to the experiments, the primer efficiency for each primer pair was determined with three different dilutions of the cDNA. The CT values were plotted against the log concentrations of the dilutions and the primer efficiency was calculated according to the following formula: efficiency = 10^(-1/slope) - 1. The results were calculated by the comparative CT method (User Bulletin #2: ABI PRISM 7700 Sequence Detection System, P/N 4303859), using hypoxanthine guanine phosphoribosyl transferase (Hprt) as housekeeping gene. For primers used and primer efficiency, see Table S1.
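The two calculations above (primer efficiency from the slope of the standard curve, and relative expression by the comparative CT method with Hprt as the housekeeping gene) can be sketched as follows; all CT values in this snippet are hypothetical.

```python
import numpy as np

# --- Primer efficiency from a dilution series (hypothetical CT values) ---
log_conc = np.log10([100.0, 10.0, 1.0])   # relative cDNA dilutions
ct = np.array([18.2, 21.6, 25.0])         # measured CT per dilution
slope, _ = np.polyfit(log_conc, ct, 1)    # CT vs log10(concentration)
efficiency = 10 ** (-1.0 / slope) - 1.0   # = 1.0 for perfect doubling per cycle
print(f"slope = {slope:.2f}, efficiency = {efficiency:.2f}")

# --- Comparative CT (delta-delta CT) method, Hprt as housekeeping gene ---
def fold_change(ct_target_case, ct_hprt_case, ct_target_ctrl, ct_hprt_ctrl):
    d_ct_case = ct_target_case - ct_hprt_case   # normalize case to Hprt
    d_ct_ctrl = ct_target_ctrl - ct_hprt_ctrl   # normalize control to Hprt
    return 2.0 ** -(d_ct_case - d_ct_ctrl)

# Hypothetical CT values for a target gene in pyometra vs control tissue
print(f"fold change = {fold_change(22.0, 20.0, 26.5, 20.2):.1f}x")
```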
Clinical Data
Data from analyses of haematological, biochemical, acute phase protein and inflammatory parameters are displayed in Table 1. As shown in Fig. S1, bacterial infection of the uterus was associated with a profound inflammatory reaction, primarily involving infiltration of mononuclear cells, and there was a particular abundance of plasma cells. Neutrophils and eosinophils were rarely present. In contrast, tissue from healthy individuals lacked signs of inflammation or tissue remodeling. Clinical data for the 4 dogs selected for Affymetrix gene chip analysis (see below) are illustrated in Table 2. Escherichia coli was isolated from all 4 uteri selected for microarray analysis, of which 3 strains were haemolytic. Using clinical criteria for assessment of sepsis/systemic inflammatory response syndrome (SIRS) in dogs with the highest sensitivity (97%) and specificity (64%), three of the four selected dogs were determined SIRS-positive [23]. In one of the dogs (case 3), peritonitis with pus in the abdomen was apparent during surgery.
Affymetrix Gene Chip Analysis
Table 1. Haematological, biochemical, acute phase protein and inflammatory parameters in 15 female dogs with bacterial uterine infection (pyometra) and 6 healthy control dogs.

In order to investigate the molecular events associated with the infection, total RNA extracted from uterine tissue of 4 diseased and 4 healthy control animals (blood parameters specified in Table S2) was subjected to Affymetrix gene chip microarray analysis. A hierarchic clustering of the samples revealed that the control and pyometra groups, respectively, clustered well together (Fig. S2).
Using moderated t-statistics we analyzed for differentially expressed genes, using a 2-fold change and adjusted p < 0.05 as thresholds for significance. This analysis revealed that almost 800 genes were significantly upregulated more than 2-fold. In Table 3, the 50 genes showing the largest extent of upregulation are listed, and Table S3 displays the complete list of significantly upregulated genes. We also found that almost 700 genes were significantly (adjusted p < 0.05) downregulated more than 2-fold. The 50 genes showing the largest extent of downregulation are depicted in Table 4, and Table S4 displays the complete list of downregulated genes. An examination of the significantly up- and downregulated genes revealed distinct gene families that were highly represented, and these were selected for further analysis and visualization.
Chemokines and Chemokine Receptors
A large number of chemokine genes were found among the most upregulated genes (Fig. 1; Tables 3 and S3). Not only did we find a number of CCL and CXCL chemokines, we also found a number of chemotactic proteins of the S100 family among the highly upregulated genes. In fact, S100A9 and S100A8 were among the genes showing the highest degree of upregulation of all genes. Out of the CCL and CXCL chemokines, CXCL14/BRAK and CCL2/MCP-1 showed the highest extent of upregulation. Among the chemokine receptors, CCR5 was upregulated to the largest extent.
Cytokines
As displayed in Fig. 2 and Table 3/S3, a number of cytokines were upregulated in the uteri of diseased animals, with IL-8, IL-1 and IL-6 showing the highest extent of induction; there was also a high extent of IL-1 receptor upregulation. IL-33 and IL-18 were also markedly upregulated. Somewhat unexpectedly, no significant upregulation of TNF or of any of the interferon family members was seen. In line with the upregulated cytokine expression, we noted a significant (3.3-fold) downregulation of suppressor of cytokine signaling 6 (SOCS6) (Table S4).
Complement System
The uterine infection caused a marked upregulation of a number of genes related to the complement system (Fig. 3; Tables 3/S3). Notably, both the classical and alternative pathways were represented, as shown by the upregulation of C1 (classical pathway) as well as of Factors D and B and properdin (alternative pathway). Also C3 and C6, i.e. components that are shared by both pathways, were upregulated. Genes involved in the downregulation of the complement system were also upregulated, as shown by the strong upregulation of the C1 inhibitor, serpin peptidase inhibitor (clade G), member 1, and of Factor H. The C5a receptor was also dramatically upregulated.
Proteases/Protease Inhibitors
An examination of the list of upregulated genes reveals a striking representation of genes related to proteolysis. Notably, the gene that showed the highest extent of upregulation among all genes was a protease inhibitor, secretory leukocyte peptidase inhibitor (SLPI; Table 3 and Fig. 4), and its profound upregulation in diseased animals was confirmed by qPCR analysis (Fig. 5A). SLPI is an inhibitor of neutrophil elastase, and strong upregulation (~24-fold) was also seen for an additional elastase inhibitor, SKALP/elafin (Table 3 and Fig. 4). Numerous matrix metalloprotease (MMP) members were upregulated, including the collagenases MMP-1 and MMP-13, as well as the gelatinase MMP-9 and MMP-7/matrilysin (Fig. 4 and Table 3/S3). The robust upregulation of MMP-1 and MMP-9 was confirmed by qPCR analysis (Fig. 5B, C). There was also a striking induction of TIMP-1 and -2, i.e. protease inhibitors with specificity for inhibiting proteases belonging to the MMP family (Fig. 4). Several proteases of the a disintegrin and metalloproteinase (ADAM) and ADAM with thrombospondin type 1-like motifs (ADAMTS) families were also markedly upregulated: ADAMTS2, ADAMTS5, ADAMDEC1 and ADAM28. Significant induction was also seen for various cysteine proteases, including caspase 4, -12 and -8, and several cysteine cathepsins, including cathepsin H, -S, -C and -B. Cathepsin D, an aspartic protease, was also significantly upregulated. Out of the large family of serine protease genes, the urokinase plasminogen activator gene showed the highest extent of upregulation, and this was also reflected by a large extent of upregulation of the corresponding inhibitor, i.e. plasminogen activator inhibitor 1 (PAI-1). An upregulated expression of various mast cell proteases was also evident. In particular, a significant upregulation of mastin, a tetrameric, tryptase-like protease with gelatinase activity [24], was seen in diseased uteri.
Proteoglycans/Anticoagulant Pathways
One of the genes that showed the highest extent of upregulation was the gene coding for the core protein of serglycin proteoglycan (SRGN), as shown both by Affymetrix gene chip analysis (Fig. 6) and by qPCR (Fig. 5D). Significant upregulation was also seen for versican, lumican, syndecan-2, biglycan and syndecan-4 (Fig. 6 and Tables 3/S3). The biological properties of proteoglycans are critically dependent on the nature of the glycosaminoglycan chains (heparan/chondroitin/dermatan/keratan sulfate or heparin) attached to the respective protein cores. Glycosaminoglycan chain synthesis is accomplished through the concerted action of a number of biosynthetic enzymes, and we therefore analyzed the expression of the corresponding genes. As shown in Fig. 6, a strong induction of two heparan sulfate 3-O-sulfotransferase isoforms (HS3ST3A1 and HS3ST3B1) was evident in uteri from infected individuals, and upregulation of the genes coding for dermatan sulfate epimerase, carbohydrate (chondroitin 4) sulfotransferase 11 and chondroitin sulfate N-acetylgalactosaminyltransferase 2 was also apparent (Fig. 6). Heparan sulfate 3-O-sulfotransferase catalyzes the incorporation of GlcNAc-3-O-sulfate into heparan sulfate and mast cell heparin, thereby conferring the respective glycosaminoglycan with potent anticoagulant activity [25]. Hence, the upregulated expression of the corresponding genes suggests that activation of anticoagulant mechanisms is a feature of bacterial infection of the uterus. In agreement with this notion, strong upregulation (48-fold) of another anticoagulant component, tissue factor pathway inhibitor 2, was also evident (Table 3). The role of anticoagulant pathways in the regulation of bacterial disease is also underscored by the established use of activated protein C, an anticoagulant protein, in treatment of sepsis [26].
Prostaglandins
The uterine infection caused a massive (~90-fold) upregulation of the prostaglandin-endoperoxide synthase 2/cyclooxygenase-2 gene (PTGS2) (Table 3). Further, strong upregulation of the prostaglandin D2 receptor (PTGDR) and prostaglandin E synthase (PTGES) genes was evident (Table 3/S3). The upregulation of prostaglandin-related genes is thus in accordance with the increased levels of prostaglandin F2α metabolite in plasma from diseased dogs (Table 1). In contrast, genes related to leukotriene metabolism were not affected to any major extent.
Immunoglobulins, Antigen Presentation
Various immunoglobulin (Ig)-related genes were highly represented (Table 3/S3), for example, Ig lambda chain V region 4A precursor, Ig kappa chain C region and Ig heavy chain V-III region VH26 precursor. In agreement with a strong upregulation of Ig genes, plasma cells were abundant in afflicted tissue (Fig. S1). Several genes associated with antigen presentation were also upregulated, including MHC class II DR alpha chain and MHC class II DLA DRB1 beta chain, CD48 (adhesion molecule involved in the immunological synapse) as well as cathepsin S, the latter being a cysteine protease implicated in antigen processing [27].
Anti-Bacterial Genes
A natural consequence of the uterine infection would be an upregulated expression of various anti-bacterial proteins. Indeed, strong upregulated expression of lysozyme was apparent, and there was also a marked upregulation of acyloxyacyl hydrolase, a lipase that partially deacylates bacterial lipopolysaccharide (LPS) (Table 3/S3). Strong induction was also seen for regenerating islet-derived 3 gamma (REG3G), an anti-bacterial compound that is expressed in an IL-22- and IL-23p19-dependent fashion [28,29], and of bactericidal/permeability-increasing protein (BPI), the latter being an antimicrobial protein with LPS-neutralizing activity [30]. Moreover, clear upregulation of the anti-bacterial chemokines [31] CXCL14 [32] and CCL20 was evident. Somewhat unexpectedly, we did not see a significantly upregulated expression of any of the defensin family members.
Acute-Phase Reactants
Pyometra and other bacterial infections are associated with elevated plasma levels of various acute phase reactants [33,34]. Although the liver is considered to be the primary source of this group of proteins, it is apparent that several acute-phase reactants are also produced within the uterus, as shown by the dramatic (~160-fold) upregulation of serum amyloid A (SAA) (Table 3). Notably, the strong upregulation of the SAA gene is consistent with the high levels of SAA found in serum from diseased animals (Table 1).
Downregulated Genes/Homeobox and Zinc Finger Transcription Factors
The genes that were significantly downregulated in diseased animals included a number of genes with functions associated with signaling pathways (Table 4/S4). In particular, we note a striking presence of numerous members of the homeobox (Fig. 7) and zinc finger (Fig. 8) transcription factor families. The strong downregulation of homeobox genes in dogs afflicted with pyometra was also verified by qPCR analysis, as shown for MSX2 and HOXA6 (Fig. 5E, F). Notably, of all significantly downregulated (more than 2-fold) genes, 16 homeobox genes and 54 zinc finger genes were found. In contrast, homeobox and zinc finger proteins were only minimally represented (altogether 3 genes) among the significantly upregulated genes, being absent from the 100 most upregulated genes.
Discussion
To the best of our knowledge, this is the first study in which the global gene expression pattern in the uterus has been studied following a naturally occurring bacterial infection. Importantly, since a spontaneous disease rather than an experimentally induced infection was used, the findings reflect a clinically relevant situation. Notably, previous attempts to extrapolate findings derived from experimental models of sepsis into a clinical setting have often encountered serious problems [36]. Another advantage of using the canine uterine disease as a model for sepsis is that the surgical treatment of the disease produces tissue samples readily available for studies of the local inflammatory response to bacterial infection. This is in contrast to corresponding human diseases, in which investigations are often limited to the use of blood sampling [37,38,39]. The pattern of upregulated genes clearly reflects an ongoing inflammatory response, as shown by the upregulated expression of several endothelial adhesion molecules, chemotactic proteins and cytokines. Among the cytokines, IL-6 and IL-1 were upregulated to the largest extent, and this was also reflected by a strong induction of the IL-1 receptor. Marked upregulation of IL-18 and IL-33 was also evident. Notably, IL-1, IL-18 and IL-33 are closely related cytokines, sharing structural and functional properties, and, in addition, they are all activated by a caspase-1/inflammasome-dependent pathway [40]. We may thus suggest that activation of the inflammasome constitutes a major pathway for driving the inflammation seen in diseased animals. Given the wide implication of TNF in the host response to a plethora of pathogens, it was expected that pyometra would be associated with robust upregulation of TNF. However, the TNF gene was only marginally (~1.5-fold) upregulated, not even reaching statistical significance. Most likely, this apparent paradox reflects that the samples were taken from animals that had reached a late stage of disease, a stage at which the initial rise in TNF levels during the early phase of infection may have declined. In line with this, targeting of TNF may not constitute the most optimal regimen for therapy and, indeed, clinical trials for sepsis in which TNF was targeted have shown limited success (discussed in [36]).
As judged from the present study, a major feature of uterine infection is the upregulated expression of a large panel of proteases. In particular, there was a profound upregulation of various MMP members implicated in extracellular matrix (ECM) and chemokine [41] turnover, including collagenases (MMP-1, MMP-13), MMP-9 and MMP-7. We also note a robust upregulation of several caspases, a family of cysteine proteases strongly implicated in apoptotic processes, but also in a variety of other settings such as cancer and inflammation [42]. Out of the caspases, the most dramatic upregulation was seen for caspase-4, an "inflammatory" caspase that has been shown to promote nuclear factor kappa B (NF-κB) signaling and production of proinflammatory chemokines [43]. Also caspase-8 was upregulated. Caspase-8 is widely implicated in apoptosis but may also contribute to NF-κB activation through TLR4 [44]. Hence, its robust upregulation during uterine infection is in clear agreement with activation of the NF-κB pathway. Interestingly, strong upregulation was also seen for caspase-12, a protease that was recently shown to downregulate NF-κB signaling, thereby dampening the production of antibacterial peptides [45]. Hence, the uterine infection is associated with caspases capable of both promoting and dampening NF-κB-mediated effects on the immune system.
Uterine infection also caused a strong induction of several cysteine cathepsins. Traditionally, cysteine cathepsins are mostly known as lysosomal enzymes involved in intracellular degradation processes. However, more recent data have revealed a much wider repertoire of functions, extending from roles in apoptosis to roles in cancer progression, wound healing and also in inflammatory disorders [46]. The present report thus indicates that cysteine cathepsin induction is a prominent feature of bacterial uterine infection. The uterine infection was also associated with a marked upregulation of several ADAM and ADAMTS metalloproteases, primarily ADAMTS2, ADAMTS5, ADAMDEC1 and ADAM28. Previous studies have implicated ADAMTS proteases mainly in ECM turnover and in regulation of angiogenesis [47], and the present report thus introduces the possibility that certain members of this protease family participate in bacterial disease. Members of the ADAM family have previously been implicated in a variety of disorders, such as asthma, cancer and autoimmune disease [48], but we are not aware of any previous in vivo evidence suggesting an involvement of any of the ADAM proteases in bacterial disease.
Since uncontrolled activation of proteolytic pathways may be harmful, it is critical that proteolytic activities are in balance with corresponding inhibitors. Indeed, a major finding in this study was the strong upregulation of various protease inhibitors. Strikingly, out of all upregulated genes, the gene coding for SLPI showed the highest extent of upregulation (~340-fold). SLPI is an inhibitor of neutrophil elastase and its dramatic upregulation thus indicates that control of elastase activity is an important feature of the uterine infection. This notion is also supported by the strong upregulation of another elastase inhibitor, SKALP (24-fold). There was also a robust upregulation of the MMP inhibitors TIMP-1 and -2. In addition, a number of serine protease inhibitors of serpin type were induced. Out of these, plasminogen activator inhibitor 1 (PAI-1) showed the largest degree of upregulation and, notably, this was matched by a strong upregulation of the corresponding target, i.e. urokinase-type plasminogen activator. SRGN, i.e. the gene coding for the core protein of serglycin proteoglycan, was one of the genes showing the largest extent of upregulation in diseased uteri. Serglycin has previously been shown to be critical for maintaining storage of secretory granule proteases in such cells [49], and the upregulated SRGN expression in diseased uteri may therefore be in line with the induction of proteolytic activities. Notably, mice lacking serglycin were previously shown to be more susceptible to Klebsiella infection than were wild type animals [50]. The present data thus support a prominent role for serglycin proteoglycan in host defense and also introduce the possibility of utilizing serglycin as a biomarker for infection.
The massive downregulation of a number of homeobox and zinc finger genes during uterine infection is intriguing. Homeobox transcription factors have been widely implicated mainly in embryonal development and in cancer [18,51], and the data presented here thus expand their repertoire of functions by implicating them in bacterial disease. Although we cannot with certainty explain why the homeobox genes are downregulated during disease, we may speculate that homeobox genes, during homeostatic conditions, have a role in suppressing pro-inflammatory pathways, and that downregulated expression of homeobox genes may unleash inflammatory cascades. In agreement with such a scenario, it has been shown that HOXA9 inhibits NF-κB-dependent activation of endothelium [52] and that mice with a reduced expression of Cdx2 are hypersensitive to dextran sodium sulfate-induced acute inflammation [53]. It is also of interest to note that a homeobox gene, TSHZ1 (teashirt), has previously been shown to inhibit caspase-4 gene expression [54]. Thus, the robust upregulation of caspase-4 in diseased animals (Table 3) is clearly compatible with the decreased expression of homeobox factors. There is also evidence suggesting that certain zinc finger proteins may have a homeostatic function by repressing pro-inflammatory responses, including suppression of the NF-κB pathway [55,56]. Moreover, a recent study indicated that genes involved in zinc-related biology were downregulated during pediatric septic shock [39].
An obvious extension of the present work will be to evaluate whether any of the identified upregulated genes can be utilized either as biomarkers for disease or as therapeutic targets. Moreover, it will be important to address whether the respective identified gene product is specifically associated with uterine bacterial infection or if its upregulation is a general consequence of bacterial insult. We believe that the results presented here may provide a basis for numerous future investigations where the usefulness of the candidate genes/gene products identified are evaluated in both canine and corresponding human disease. | 2014-10-01T00:00:00.000Z | 2009-11-26T00:00:00.000 | {
"year": 2009,
"sha1": "0bc1f39a13a549742f5ab6919ccdaca21a3eaea7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0008039",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0bc1f39a13a549742f5ab6919ccdaca21a3eaea7",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
259095575 | pes2o/s2orc | v3-fos-license | Faithful Knowledge Distillation
Knowledge distillation (KD) has received much attention due to its success in compressing networks to allow for their deployment in resource-constrained systems. While the problem of adversarial robustness has been studied before in the KD setting, previous works overlook what we term the relative calibration of the student network with respect to its teacher in terms of soft confidences. In particular, we focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples? These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting. To address these questions, we introduce a faithful imitation framework to discuss the relative calibration of confidences and provide empirical and certified methods to evaluate the relative calibration of a student w.r.t. its teacher. Further, to verifiably align the relative calibration incentives of the student to those of its teacher, we introduce faithful distillation. Our experiments on the MNIST, Fashion-MNIST and CIFAR-10 datasets demonstrate the need for such an analysis and the advantages of the increased verifiability of faithful distillation over alternative adversarial distillation methods.
Introduction
The state-of-the-art performance of deep neural networks in a variety of different application areas has recently been fuelled by significant increases in their capacity (Brown et al., 2020;Bommasani et al., 2021). However, the increase in the size of networks has led to deployment issues for resource-constrained systems (Gou et al., 2021;Mishra & Marr, 2017) such as self-driving cars and small medical devices (Wang et al., 2018). Simply deploying smaller versions of these networks, trained as usual, tends to hurt performance.
Knowledge distillation (KD) helps to deal with this problem by distilling knowledge from a large, highperforming, expensive-to-run neural network into a smaller network (Gou et al., 2021;Hinton et al., 2015;Tung & Mori, 2019). This has been shown to improve the performance of smaller networks compared to standard training (Hinton et al., 2015), bringing the performance of larger networks to smaller networks which < l a t e x i t s h a 1 _ b a s e 6 4 = " X m V P O b K q Z r 8 z 2 1 m 5 m Y d D C q a A X j g = " > A A A C B X i c b V B N S 8 N A E N 3 U r 1 q / o h 7 1 s F g E T y W R o h 4 L e v B Y 0 X 5 A G 8 p m s 2 2 X b j Z h d y K W 0 I s X / 4 o X D 4 p 4 9 T 9 4 8 9 + 4 T X P Q 1 g c D j / d m m J n n x 4 J r c J x v q 7 C 0 v L K 6 V l w v b W x u b e / Y u 3 t N H S W K s g a N R K T a P t F M c M k a w E G w d q w Y C X 3 B W v 7 o c u q 3 7 p n S P J J 3 M I 6 Z F 5 K B 5 H 1 O C R i p Z x 9 2 g T 1 A e g t E B k Q F + M q s 5 E J k 7 q R n l 5 2 K k w E v E j c n Z Z S j 3 r O / u k F E k 5 B J o I J o 3 X G d G L y U K O B U s E m p m 2 g W E z o i A 9 Y x V J K Q a S / N v p j g Y 6 M E u B 8 p U x J w p v 6 e S E m o 9 T j 0 T W d I Y K j n v a n 4 n 9 d J o H / h p V z G C T B J Z 4 v 6 i c A Q 4 W k k O O C K U R B j Q w h V 3 N y K 6 Z A o Q s E E V z I h u P M v L 5 L m a c U 9 q 1 R v q u W a k 8 d R R A f o C J 0 g F 5 2 j G r p G d d R A F D 2 i Z / S K 3 q w n 6 8 V 6 t z 5 m r Q U r n 9 l H f 2 B 9 / g B b x Z k U < / l a t e x i t > Standard Distillation < l a t e x i t s h a 1 _ b a s e 6 4 = " f G 7 C u q D 9 r q C T V r / a p V l r Z O p H 9 H M = " > A A A C B X i c b V D L S s N A F J 3 U V 6 2 v q E t d D B b B V U m k q M u C I i 4 r 2 A e 0 p U y m k 3 b o Z B J m b s Q S u n H j r 7 h x o Y h b / 8 G d f + M k z U J b D w w c z r m X O + d 4 k e A a H O f b K i w t r 6 y u F d d L G 5 t b 2 z v 2 7 l 5 T h 7 G i r E F D E a q 2 R z Q T X L I G c B C s H S l G A k + w l j e + T P 3 W P V O a h / I O J h H r B W Q o u c 8 p A S P 1 7 c M u s A d I r g m H k R 8 L f G V O c i E y d 9 q 3 y 0 7 F y Y A X i Z u T M s p R 7 9 t f 3 U F I 4 4 B J o I J o 3 X G d C H o J U c C p Y N N S N 9 Y s I n R M h q x j q C Q B 0 7 0 k S z H F x 0 Y Z Y D 9 U 5 k n A m f p 7 I y G B 1 p P A M 5 M B g Z G e 9 1 L x P 6 8 T g 3 / R S 7 i M Y m C S z g 6 l W S H E a S V 4 w B W j I C a G E K q 4 + S u m I 6 I I B V N c y Z T g z k d e J M 3 T i n t W q d 5 W y z U n r 6 O I D t A R O k E u O k c 1 d I P q q I E o e k T P 6 B W 9 W U / W i / V u f c x G C 1 a + s 4 / + w P r 8 A V 5 1 m R Y = < / l a t e x i t > Faithful Distillation < l a t e x i t s h a 1 _ b a s e 6 4 = " m 4 u p 1 z c z f P H 9 T g R Y Z f M G 0 F + 1 Q M 8 = " > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 h E 1 I t Q 8 O K x g v 2 A N p b N d t I u 3 W z C 7 k Y p p f / D i w d F v P p f v P l v 3 L Y 5 a O u D g c d 7 M 8 z M C 1 P B t f G 8 b 6 e w s r q 2 v l H c L G 1 t 7 + z u l f c P G j r J F M M 6 S 0 S i W i H V K L j E u u F G Y C t V S O N Q Y D M c 3 k z 9 5 i M q z R N 5 b 0 Y p B j H t S x 5 x R o 2 V H j q Y a i 4 S S a 6 J 5 / r d c s V z v R n I M v F z U o E c t W 7 5 q 9 N L W B a j N E x Q r d u + l 5 p g T J X h T O C k 1 M k 0 p p Q N a R / b l k o a o w 7 G s 6 s n 5 M Q q P R I l y p Y 0 Z K b + n h j T W O t R H N r O m J q B X v S m 4 n 9 e O z P R V T D m M s 0 M S j Z f F G W C m I R M I y A 9 r p A Z M b K E M s X t r Y Q N q K L M 2 K B K N g R / 8 e V l 0 j h z / Q v 3 / O 6 8 U v X y O I p w B M d w C j 5 c Q h V u o Q Z 1 Y K D g G V 7 h z X l y X p 
x 3 5 2 P e W n D y m U P 4 A + f z B 9 T z k V 8 = < / l a t e x i t > ✏ = 0.1 < l a t e x i t s h a 1 _ b a s e 6 4 = " m 4 u p 1 z c z f P H 9 T g R Y Z f M G 0 F + 1 Q M 8 = " > A A A B 9 X i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B U 0 h E 1 I t Q 8 O K x g v 2 A N p b N d t I u 3 W z C 7 k Y p p f / D i w d F v P p f v P l v 3 L Y 5 a O u D g c d 7 M 8 z M C 1 P B t f G 8 b 6 e w s r q 2 v l H c L G 1 t 7 + z u l f c P G j r J F M M 6 S 0 S i W i H V K L j E u u F G Y C t V S O N Q Y D M c 3 k z 9 5 i M q z R N 5 b 0 Y p B j H t S x 5 x R o 2 V H j q Y a i 4 S S a 6 J 5 / r d c s V z v R n I M v F z U o E c t W 7 5 q 9 N L W B a j N E x Q r d u + l 5 p g T J X h T O C k 1 M k 0 p p Q N a R / b l k o a o w 7 G s 6 s n 5 M Q q P R I l y p Y 0 Z K b + n h j T W O t R H N r O m J q B X v S m 4 n 9 e O z P R V T D m M s 0 M S j Z f F G W C m I R M I y A 9 r p A Z M b K E M s X t r Y Q N q K L M 2 K B K N g R / 8 e V l 0 j h z / Q v 3 / O 6 8 U v X y O I p w B M d w C j 5 c Q h V u o Q Z 1 Y K D g G V 7 h z X l y X p x 3 5 2 P e W n D y m U P 4 A + f z B 9 T z k V 8 = < / l a t e x i t > ✏ = 0.1 < l a t e x i t s h a 1 _ b a s e 6 4 = " R U C f 2 e f r T d 9 / k h l E b 8 O 7 7 p r l Q w s = " > A A A C K H i c b V D L S g N B E J z 1 G d e o i R 6 9 L A Y h i I R d 8 X U M 6 M G L G M E k g h t C 7 6 S T j J n Z X W Z m J W H J P 3 j V T / B r v E m u f o m T x 0 E T G x q K q m 6 q u 4 K Y M 6 V d d 2 Q t L a + s r q 1 n N u z N 7 N b 2 T i 6 / W 1 N R I i l W a c Q j + R i A Q s 5 C r G q m O T 7 G E k E E H O t B 7 2 q s 1 1 9 Q K h a F D 3 o Q Y 0 N A J 2 R t R k E b q u Y H I u 0 P m 7 m C W 3 I n 5 S w C b w Y K Z F a V Z t 7 K + q 2 I J g J D T T k o 9 e S 5 s W 6 k I D W j H I e 2 n y i M g f a g g 0 x G N k m P 2 8 + r U V Q O y l 5 5 6 W z + 9 N C u T h L M k P 2 y Q E p E o 9 c k D K 5 I R V S J Z Q 8 k 1 f y R t 6 t D + v T + r J G 0 9 E l a 7 a z R / 6 U 9 f 0 D I X K l 3 g = = < / l a t e x i t > x < l a t e x i t s h a 1 _ b a s e 6 4 = " B v / w / I E 7 I Q i o y I q Q w s C G X G n Y 0 7 s = " > A A A C K n i c b V D L S s N A F J 3 4 r P F Z X b o J F k F E S i K + l g V d u B E V 7 A O a U G 6 m N + 3 Q m S T M T K Q l 9 C f c 6 i f 4 N e 7 E r R / i 9 L H Q 1 g s X D u f c y 7 n 3 h C l n S r v u p 7 W w u L S 8 s l p Y s 9 c 3 N r e 2 d 4 q 7 N Z V k k m K V J j y R j R A U c h Z j V T P N s Z F K B B F y r I e 9 6 5 F e f 0 a p W B I / 6 U G K g Y B O z C J G Q R u q 4 Y c i 7 7 d O h 6 2 d k l t 2 x + X M A 2 8 K S m R a D 6 2 i t e G 3 E 5 o J j D X l o F T T c 1 M d 5 C A 1 o x y H t p 8 p T I H 2 o I N N A 2 M Q q I J 8 f P D Q O T R M 2 4 k S a T r W z p j 9 v Z G D U G o g Q j M p Q H f V r D Y i / 9 V C M e O s o 6 s g Z 3 G a a Y z p x D j K u K M T Z 5 S G 0 2 Y S q e Y D A 4 B K Z m 5 3 a B c k U G 0 y s 2 3 / B s 1 z E u + M 0 X 2 K E n Q i j 3 M f Z E d A f 2 i e 7 f g n I 2 S b / L z Z t O Z B 7 b T s X Z T P H 8 9 K l a N p k g W y T w 7 I E f H I J a m Q W / J A q o Q S T l 7 I K 3 m z 3 q 0 P 6 9 P 6 m o w u W N O d P f K n r O 8 f b j C m g w = = < / l a t e x i t > x 2 < l a t e x i t s h a 1 _ b a s e 6 4 = " e o V v Y S k E l u d p 4 7 r 1 7 k j w B N 3 e C 0 k = " > A A A C K n i c b V D L S s N A F J 3 U d 3 y 1 u n Q T L I K I l E R 8 L Q u 6 c C M q 2 A e Y U m 6 m t + 3 Q m S T M T K Q l 5 C f c 6 i f 4 N e 7 E r R / i p G a h 1 Q s X D u f c y 7 n 3 B D F n S r v u u 1 W a m 1 9 Y X F p e s V f X 1 j c 2 y 5 W t p o o S S b F B I x 7 J d g A K O Q u x o Z n m 2 I 4 l g g g 4 t o L R R a 6 3 H l E q F o X 3 e h J j R 8 A g Z H 1 G Q 
[Figure 1 appears here as a set of heat-map panels; the recoverable embedded labels are x, x_1, x_2, f^(3)_SD(x_2) = 0.683, and f^(3).]
Figure 1: Faithful Distillation: a heat map showing the maximum difference (specifically the ℓ∞-norm) of the soft confidence outputs of a robust teacher model (f_RT) and its two distilled students, a student trained using standard KD (f_SD) and a student trained using faithful distillation (f_FD), within an ℓ∞-ball surrounding an example x from the MNIST dataset. In particular, we observe two problematic scenarios for f_SD: an adversarial disagreement example at x_1, and poor relative calibration in terms of confidences (that is, large differences in confidence outputs of the two networks) at x_2. We can see that for f_FD the heat-map transitions are a lot smoother, indicating smaller differences in confidences with its teacher. This implies that f_FD more closely imitates its teacher than f_SD.
Overall, we observe that faithful distillation improves upon the two problems observed with f_SD.

Distilled student networks tend to be more efficient to run. However, students trained using standard KD are vulnerable to adversarial attacks, that is, to small input perturbations which cause networks to change their correct classification (Guo et al., 2019; Goldblum et al., 2020).
To mitigate this issue, previous works have studied the robustness of teacher and student networks separately, showing in particular that distilling knowledge from a robust teacher improves the robustness of the distilled student (Goldblum et al., 2020). However, an important question arises: if we have a robust teacher, can we find dataset examples where the teacher and student agree, yet disagree under small perturbations? Moreover, consider the case where the confidence of the deployed network will be used to make decisions: is the distilled student as confident as the teacher in and around dataset examples? As fig. 1 shows, small perturbations of an image can cause significant disagreements in classification, which we refer to as adversarial disagreement examples, and major differences in confidences between a teacher and its student. This motivates the need for (i) a framework to evaluate the robustness and confidence of the student with respect to the teacher and (ii) a training method that allows us to obtain better students with respect to that framework.
Our contributions are fourfold:
• we define the concept of a faithful imitator to discuss and bound the difference in confidences of a teacher and its student;
• we introduce empirical and verified methods to investigate and compute these bounds;
• we provide a faithful distillation loss that allows us to train students which are verifiably more aligned with their teacher network in terms of confidences; and
• we demonstrate the capabilities of our framework on the MNIST (LeCun et al., 1998) and Fashion-MNIST (Xiao et al., 2017) datasets.
Background and Related Work
Throughout this paper, we assume that we are dealing with a C-class classification problem and that (x, y) ∼ D is an example from a data distribution D, with x ∈ R^n an input vector and y ∈ {1, …, C} its associated class.
Knowledge Distillation
Knowledge distillation is the process of transferring information and representations from a larger teacher network, f_t : R^n → [0, 1]^C, to a usually significantly smaller student network, f_s : R^n → [0, 1]^C, with the standard aim of improving the performance of the student over regular training. Hinton et al. (2015) popularised a specific distillation process that utilised the following loss:

L_SD = α L_CE(f_s(x; 1), y) + (1 − α) T² L_CE(f_s(x; T), f_t(x; T)),    (1)

where L_CE refers to the standard cross-entropy loss function, α ∈ [0, 1] weights the two terms, and f_t(x; T) and f_s(x; T) refer to the softmax outputs with temperature scaling T > 0 for the teacher and student respectively, evaluated at a point x. We refer to distillation training using this loss as standard distillation (SD). Goldblum et al. (2020) found that starting from a robust teacher network and only distilling knowledge using clean data can empirically result in the student network inheriting robustness from its teacher. However, the authors also noted that this is only sometimes the case and can depend on both the dataset and teacher.
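To make eq. (1) concrete, the following is a minimal PyTorch-style sketch of the SD loss; it is our own illustration, not code from the paper, and the function name, the even weighting α = 0.5 and the temperature T = 4 are assumptions.

    import torch.nn.functional as F

    def sd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        # Hard-label term: ordinary cross-entropy at temperature 1.
        hard = F.cross_entropy(student_logits, labels)
        # Soft-label term: cross-entropy against the teacher's
        # temperature-scaled distribution; the T**2 factor keeps
        # gradient magnitudes comparable across temperatures.
        p_teacher = F.softmax(teacher_logits / T, dim=1)
        log_p_student = F.log_softmax(student_logits / T, dim=1)
        soft = -(p_teacher * log_p_student).sum(dim=1).mean()
        return alpha * hard + (1.0 - alpha) * (T ** 2) * soft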
Adversarial Attacks and Training
Adversarial examples are correctly classified dataset examples that can be deliberately perturbed to cause the network to change its classification (Goodfellow et al., 2015).
Several white-box (Goodfellow et al., 2015; Dong et al., 2018; Madry et al., 2018) and black-box techniques (Narodytska & Kasiviswanathan, 2017; Ilyas et al., 2018; Li et al., 2019) have been developed to generate adversarial examples. One such method, the PGD attack, makes use of projected gradient ascent to maximise the training loss L within a constrained domain S:

x^(t+1) = Π_S(x^(t) + η sgn(∇_x L(x^(t), y))),    (2)

where η denotes the step size of the gradient steps, Π_S is the projection operator onto the set S, and sgn denotes the sign function (Madry et al., 2018). Usually, S is taken to be an ℓ_p-ball surrounding a given example that we wish to attack, typically for p = ∞.
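A minimal sketch of an ℓ∞ PGD attack per eq. (2) follows; this is an illustrative implementation under our own naming, assuming a model that returns logits and inputs normalised to [0, 1].

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps, steps=50, step_size=None):
        # ℓ∞ PGD (eq. (2)): signed-gradient ascent on the loss,
        # projected back into the eps-ball around the clean input.
        if step_size is None:
            step_size = 2.5 * eps / steps  # heuristic from Madry et al. (2018)
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + step_size * grad.sign()
                # Projection Π_S onto the ℓ∞-ball, then the pixel range.
                x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
        return x_adv.detach()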
Several methods to defend against adversarial attacks have been introduced (Song et al., 2017; Samangouei et al., 2018; Guo et al., 2018). Such methods include adversarial training (Goodfellow et al., 2015; Madry et al., 2018; Na et al., 2018), which aims to train a network to be more robust to adversarial attacks. Madry et al. (2018) framed the problem of adversarial training as the following saddle point problem:

min_θ E_{(x,y)∼D} [ max_{δ∈S} L(θ, x + δ, y) ],    (3)

where the inner maximisation may be tackled using the PGD method described in eq. (2).
Adversarial Distillation Methods
Adversarially robust distillation (ARD) is a modification of the SD method discussed above (Goldblum et al., 2020). ARD is implemented so that the distilled student agrees with its teacher's unperturbed confidences even when the student itself is perturbed (Goldblum et al., 2020). The adversarial loss that is used within ARD is given by:

L_ARD = α T² L_KL(f_s(x + δ_0; T), f_t(x; T)) + (1 − α) L_CE(f_s(x; 1), y),    (4)

where δ_0 = arg max_{∥δ∥≤ϵ} L_CE(f_s(x + δ; 1), y) and L_KL denotes the KL divergence (Goldblum et al., 2020).
ARD training is used to distil both knowledge and robustness from a larger, more robust teacher network. The authors found that using ARD can experimentally produce more robust student networks that can even beat the performance in terms of robust accuracy of networks trained adversarially with the same network architecture (Goldblum et al., 2020).
Robust Soft Label Adversarial Distillation (RSLAD) is an adaptation of ARD that replaces the hard labels within eq. (4) with soft labels given by the softmax outputs of the teacher (Zi et al., 2021). Specifically, the loss is given by

L_RSLAD = α L_KL(f_s(x; 1), f_t(x; 1)) + (1 − α) L_KL(f_s(x + δ_0; 1), f_t(x; 1)),    (5)

where δ_0 = arg max_{∥δ∥≤ϵ} L_KL(f_s(x + δ; 1), f_t(x; 1)). Zi et al. (2021) found that RSLAD empirically produced more robust students than ARD with respect to both PGD attacks and even more sophisticated attacks such as AutoAttack (Croce & Hein, 2020).
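The contrast between the ARD and RSLAD objectives can be sketched as follows; this is our own illustrative code, mirroring the reconstructions of eqs. (4) and (5) above. Here x_adv stands for the respective inner maximiser δ_0, which would be computed with a PGD-style search, and the even weighting α = 0.5 is an assumption.

    import torch.nn.functional as F

    def kl_to_teacher(student_logits, teacher_logits, T=1.0):
        # KL(teacher ‖ student) between temperature-scaled distributions.
        p_t = F.softmax(teacher_logits / T, dim=1)
        log_p_s = F.log_softmax(student_logits / T, dim=1)
        return F.kl_div(log_p_s, p_t, reduction="batchmean")

    def ard_loss(student, teacher, x, x_adv, y, T=4.0, alpha=0.5):
        # eq. (4): match the teacher's *clean* soft labels at the
        # student's adversarially perturbed input; keep a hard CE term.
        kl = kl_to_teacher(student(x_adv), teacher(x), T)
        return alpha * (T ** 2) * kl + (1 - alpha) * F.cross_entropy(student(x), y)

    def rslad_loss(student, teacher, x, x_adv, alpha=0.5):
        # eq. (5): teacher soft labels replace hard labels in both terms.
        clean = kl_to_teacher(student(x), teacher(x))
        adv = kl_to_teacher(student(x_adv), teacher(x))
        return alpha * clean + (1 - alpha) * adv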
Motivation
We motivate the need for a method of faithful distillation through the examples illustrated in fig. 1. Given a robust teacher network, we distil a student using the SD loss from section 2.1.
Let us consider an image x from the MNIST dataset (LeCun et al., 1998). Both networks correctly classify x as a 3 with similar, high confidence levels. Let us now investigate perturbations of x within its neighbourhood, specifically in an ℓ∞-ball of radius ϵ = 0.1.
We first find x_1, a perturbation of the image x, where the teacher classifies the perturbed image as a 3 but the student classifies it as an 8, despite the minimal difference between x_1 and the original image x. We will refer to an example where the student and teacher initially agree on a prediction but, under perturbation, disagree with one another as an adversarial disagreement example. Formally, we define it as follows:
Definition 1 (Adversarial Disagreement Example)
For an input x ∈ R^n and teacher and student networks, f_t : R^n → [0, 1]^C and f_s : R^n → [0, 1]^C respectively, we say that x is an adversarial disagreement example for a given ϵ > 0 if

arg max_i f_t(x)_i = arg max_i f_s(x)_i and there exists an x′ ∈ B_ϵ(x) such that arg max_i f_t(x′)_i ≠ arg max_i f_s(x′)_i.    (6)

We can view such examples as perturbations that cause a teacher-student pair that originally agrees to disagree with one another. The quantity of such examples within a test set gives a measure of the robustness of the distillation process itself.
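As an illustration only, definition 1 can be checked approximately as sketched below; the existential over B_ϵ(x) is replaced by a best-effort attack, as is done empirically in section 6, and all names are hypothetical.

    import torch

    def is_adversarial_disagreement(teacher, student, x, eps, attack):
        # Definition 1, approximated: the pair must agree at x, and a
        # best-effort search must find a point in B_eps(x) where their
        # predicted classes differ. `attack` is any perturbation search,
        # e.g. PGD maximising the student's cross-entropy against the
        # teacher's hard predictions (a hypothetical helper).
        t_pred = teacher(x).argmax(dim=1)
        s_pred = student(x).argmax(dim=1)
        if not torch.equal(t_pred, s_pred):
            return False  # no initial agreement, so x cannot be an ADE
        x_adv = attack(student, x, t_pred, eps)
        return bool((teacher(x_adv).argmax(dim=1) != student(x_adv).argmax(dim=1)).any())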
Next, we find x_2, another perturbation of the image x, where the teacher and student both classify x_2 as a 3, but with very different confidences in their predictions. Here, we would say that the student is not relatively well calibrated in terms of confidence to its teacher. This poses significant problems for the confident deployment of student networks in safety-critical environments where confidences are used in decision-making. In such a scenario, not only are correct classifications important, but also the confidence in each classification. Here, we see that even a small perturbation can result in wildly different confidences, which implies that such a deployed student poorly imitates, and is not relatively well-calibrated to, its teacher.
The above motivates the need for a framework in which to empirically investigate and verifiably confirm how relatively well-calibrated a student is to its teacher in terms of their confidences. Moreover, it calls for a form of faithful distillation that addresses the issues discussed above as measured within this new framework. Therefore, we present the faithful imitation framework in section 4 and a loss with the aim of producing verifiably more relatively calibrated students in section 5.
Faithful Imitator For Knowledge Distillation
In this section, we provide an evaluation framework based on the concept of faithful imitation, along with methods to compute lower and upper bounds within it.
Assume we have two processes defined by the functions f : R^n → R^d and f̂ : R^n → R^d, where the goal of f̂ is to imitate the output of f. We define a faithful imitator as follows.
Definition 2 (Faithful Imitator) We say that f̂ : R^n → R^d is a (δ, ϵ)-faithful imitator of f at x_0 ∈ R^n if

d_f(f(x), f̂(x)) ≤ δ for all x with d_x(x, x_0) ≤ ϵ,    (7)

where d_f : R^d × R^d → R≥0 is a chosen metric function in the output space, and d_x : R^n × R^n → R≥0 is a metric function in the input space. We refer to any δ that bounds max_{d_x(x, x_0) ≤ ϵ} d_f(f(x), f̂(x)) as a faithfulness bound for a given ϵ.
Intuitively, this definition holds if, for an ϵ-neighbourhood defined by the function d_x around an input x_0, the outputs of the imitator f̂ and the original process f are similar, up to a difference of δ, with respect to an output metric d_f.
Within the context of knowledge distillation, we have a student network, f_s, that is trying to imitate the output of a teacher network, f_t. We can use the definition of a faithful imitator as a principled way of reasoning about the relative calibration of a student network with respect to its teacher in terms of confidences. By relative calibration, we refer to the similarity in the confidence outputs of the two networks.
For this purpose, we are interested, for a given ϵ, in computing the tightest δ that satisfies definition 2. In the setting of multi-class classification, we follow the robustness literature and define d_x and d_f to be the metrics induced by the ℓ∞-norm (Madry et al., 2018). That is, we seek

δ* = max_{∥x − x_0∥∞ ≤ ϵ} ∥f_t(x) − f_s(x)∥∞.    (8)
The issue with solving eq. (8) to optimality is that it is a general non-linear optimisation problem. In the following sections, we approach the computation of δ* in two different ways: using an empirical, best-effort optimiser that gives us a lower bound on δ*, and by over-approximating and linearising the problem in a similar fashion to Zhang et al. (2018), obtaining an upper bound on δ*. The use of upper and lower bounds on the solution to eq. (8) provides us with a tractable way of evaluating the faithfulness, that is, the degree of imitation in terms of confidences, of a student network to its teacher.
Empirical Lower Bounds on δ *
We can empirically obtain a lower bound on the solution to eq. (8) by using PGD attacks as defined in eq. (2) to maximise ∥f_t(x) − f_s(x)∥∞. By using PGD to maximise the difference in confidences between the student and teacher networks within the ℓ∞-ball around a given dataset point x_0, we obtain a best-effort lower bound on δ*. This is useful for empirically investigating how large the difference in confidences of the teacher and student can be, but gives us no guarantee on the maximum possible difference within an ϵ-ball surrounding the given image x, and therefore cannot be considered a faithfulness bound.
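A sketch of this best-effort computation follows; it is our own code with hypothetical names, and teacher and student are assumed to be callables returning softmax probabilities.

    import torch

    def empirical_lower_bound(teacher, student, x0, eps, steps=50):
        # Best-effort lower bound on δ*: PGD-style ascent on the gap
        # ‖f_t(x) − f_s(x)‖∞ inside the eps-ball around x0.
        step = 2.5 * eps / steps
        x = x0.clone().detach()
        for _ in range(steps):
            x.requires_grad_(True)
            gap = (teacher(x) - student(x)).abs().amax(dim=1).sum()
            grad = torch.autograd.grad(gap, x)[0]
            with torch.no_grad():
                x = torch.clamp(x + step * grad.sign(), x0 - eps, x0 + eps)
                x = x.clamp(0.0, 1.0)
        with torch.no_grad():
            return (teacher(x) - student(x)).abs().amax(dim=1)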
Faithfulness Upper Bounds on δ *
To compute faithfulness bounds as per definition 2, we over-approximate the solution of eq. (8) by relaxing the problem using a linear formulation. We achieve this by (i) relaxing the non-linear activation functions ϕ at each layer using linear lower and upper bounds and (ii) relaxing the softmax σ that yields f t (x) = σ(g t (x)) (and similarly for the student network).
Assuming that z^L ≤ z ≤ z^U holds for the pre-activation values z of a given layer k and an activation function ϕ, we compute the parameters of linear functions that bound ϕ from below and above over [z^L, z^U], i.e. α^L z + β^L ≤ ϕ(z) ≤ α^U z + β^U. For ϕ a ReLU activation, we can use the relaxations provided in Ehlers (2017) and Zhang et al. (2018) to obtain the parameters.
For an input vector z ∈ R^C, the softmax operator outputs a vector whose i-th component is defined as σ(z)_i = exp(z_i) / Σ_j exp(z_j). Note that the softmax can thus be written as σ(z)_i = 1/(Σ_{j≠i} exp(z_j − z_i) + 1). Using this, we bound the i-th component of the softmax activation by first bounding the difference between logits. Specifically, given upper and lower bounds on the logits, z^lb_i ≤ z_i ≤ z^ub_i, we compute the following over this domain:

z^lb_j − z^ub_i ≤ z_j − z_i ≤ z^ub_j − z^lb_i for all j ≠ i.

To obtain bounds on the i-th component of the softmax for each network, we propagate the logit difference bounds through the softmax function:

σ^lb(z)_i = 1/(Σ_{j≠i} exp(z^ub_j − z^lb_i) + 1), σ^ub(z)_i = 1/(Σ_{j≠i} exp(z^lb_j − z^ub_i) + 1).

This gives us σ^lb(z)_i ≤ σ(z)_i ≤ σ^ub(z)_i. We can apply this bounding of the softmax activation to f_t and f_s by using bounds on their logits, z^(L_t)_{t,i} and z^(L_s)_{s,i} respectively, to obtain upper and lower bounds of their soft outputs, σ^ub(f_t(x)) and σ^lb(f_s(x)). We can then compute a faithfulness upper bound on δ* given by:

FaithUB = max_i max( σ^ub(f_t(x))_i − σ^lb(f_s(x))_i, σ^ub(f_s(x))_i − σ^lb(f_t(x))_i ).

For small enough f_t and f_s, using a MILP solver directly, such as Gurobi (Gurobi Optimization, LLC, 2022), yields tighter bounds following the description above within reasonable runtimes compared to alternative bound propagation methods such as CROWN (Zhang et al., 2018).
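The softmax interval propagation described above can be sketched as follows; this is an illustrative implementation under our own naming, assuming per-class logit bound vectors of shape [C]. Since σ(z)_i decreases in every difference z_j − z_i, plugging in the extreme differences yields sound bounds.

    import torch

    def softmax_interval(z_lb, z_ub):
        # Interval bounds on softmax components from logit bounds (shape [C]):
        # the lower bound of σ_i uses the largest competing differences,
        # the upper bound the smallest ones.
        C = z_lb.shape[0]
        lb, ub = torch.empty(C), torch.empty(C)
        for i in range(C):
            mask = torch.arange(C) != i
            lb[i] = 1.0 / (torch.exp(z_ub[mask] - z_lb[i]).sum() + 1.0)
            ub[i] = 1.0 / (torch.exp(z_lb[mask] - z_ub[i]).sum() + 1.0)
        return lb, ub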
Connection to Model Calibration
Our proposed method of producing faithfulness bounds provides an upper bound on the relative calibration, that is the difference in confidences, of teacher and student networks. Therefore, a corollary of the introduced framework above is that if a teacher is robust and moreover is generally well-calibrated on a dataset, a distilled student that is a faithful imitator (i.e. achieves low empirical and faithfulness bounds) of its teacher will also be well-calibrated.
Faithful Distillation
We introduce a new form of distillation that we will refer to as Faithful Distillation (FD), which is defined by the following loss function:

L_FD = α T² L_KL(f_s(x + δ_0; T), f_t(x + δ_0; T)) + (1 − α) L_CE(f_s(x; 1), y),    (9)

where δ_0 = arg max_{∥δ∥≤ϵ} ∥f_s(x + δ; 1) − f_t(x + δ; 1)∥∞. This loss is an adaptation of the ARD loss defined in eq. (4). Here, by minimising the loss, we aim to minimise the maximum difference in confidences between the student and its teacher over a given ϵ-ball surrounding each input x. This should encourage the student network to match its teacher's confidences even under perturbation, leading to a more relatively calibrated or better-imitating student.
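A rough PyTorch-style sketch of FD training, following the reconstruction of eq. (9) above, is given below; this is our own illustration, the inner maximisation uses a PGD-style search on the confidence gap, and the step count, α and T are assumed values.

    import torch
    import torch.nn.functional as F

    def fd_loss(student, teacher, x, y, eps, T=4.0, alpha=0.5, steps=10):
        # Inner maximisation: find the point of largest teacher-student
        # confidence gap in the eps-ball (δ_0 in the reconstruction above).
        step = 2.5 * eps / steps
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            gap = (F.softmax(teacher(x_adv), dim=1)
                   - F.softmax(student(x_adv), dim=1)).abs().amax(dim=1).sum()
            grad = torch.autograd.grad(gap, x_adv)[0]
            with torch.no_grad():
                x_adv = torch.clamp(x_adv + step * grad.sign(), x - eps, x + eps).clamp(0.0, 1.0)
        # Outer loss: match the teacher's soft outputs *at the perturbed
        # point*, plus a hard-label term on the clean input.
        p_t = F.softmax(teacher(x_adv) / T, dim=1)
        log_p_s = F.log_softmax(student(x_adv) / T, dim=1)
        kl = F.kl_div(log_p_s, p_t, reduction="batchmean")
        return alpha * (T ** 2) * kl + (1 - alpha) * F.cross_entropy(student(x), y)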
It is worth noting that both FD and RSLAD should achieve comparable performance, since in the limit they should both produce very similar bounds. This is because f_t(x + δ_0) → f_t(x) as the teacher becomes increasingly robust during adversarial training. However, in section 6, we observe that on MNIST and Fashion-MNIST, FD seems to produce more verifiably relatively calibrated teacher-student pairs, which importantly provides a sound certificate for downstream tasks. Moreover, the increased verifiability is necessary in order to complement less reliable empirical measures of faithfulness such as the EmpLB introduced above. We discuss this topic in more depth in section 6.1.
Experiments
In this section, we apply the idea of faithful knowledge distillation to classification on MNIST (LeCun et al., 1998) and Fashion-MNIST (F-MNIST) (Xiao et al., 2017). We investigate the robustness of individual models and of a given distillation process for ℓ∞ neighbourhoods of radii ϵ ∈ {0.025, 0.05, 0.1, 0.15, 0.2} on MNIST and of radii ϵ ∈ {4/255, 8/255, 12/255, 16/255, 20/255} on F-MNIST. We explore how relatively well-calibrated a given student is to its teacher network by computing upper faithfulness bounds on δ* as described in section 4.2. Furthermore, we compute empirical lower bounds on δ* as described in section 4.1 using PGD attacks of 50 iterations on each image with a step size of 2.5ϵ/50 (following Madry et al. (2018)) and 5 random restarts per image. For the sake of efficiency, we report results on a randomly sampled set of 1000 MNIST and F-MNIST images from the original test sets, which we refer to simply as the test set of each dataset throughout.
We start by training robust teacher networks on each dataset, which we denote by f_RT, as we would like to investigate and produce robust and relatively well-calibrated students distilled from these robust teachers. This is in line with the observation by Goldblum et al. (2020) that robust teacher networks are better able to produce robust student networks. Note that all the students that follow are distilled from these particular teacher networks on their respective datasets. The teacher networks achieved test set accuracies of 97.88% and 88.23% on MNIST and F-MNIST respectively, with table 1 showing that the trained teacher networks are as robust as expected.
From these teacher networks, we train four separate student networks on each dataset according to the distillation methods previously discussed: students using SD (f_SD), ARD (f_ARD), RSLAD (f_RSLAD), and our loss, FD (f_FD). We use an even weighting of loss terms in each distillation loss used for training. These networks achieved clean test set accuracies of 97.41%, 97.20%, 97.12%, and 97.14% on MNIST and 88.29%, 88.18%, 87.47%, and 87.81% on F-MNIST respectively. Further details of network training can be found in appendix A.1 and appendix A.2.
Measuring Robustness and Faithfulness. To compare the performance of the four different students with respect to their robust teacher f_RT on the two datasets, we are interested in measuring robustness and faithfulness.
For robustness, we analyse both the teacher and students alone by computing each model's robust accuracy using 50-step PGD attacks at different ϵ values with a step size of 2.5ϵ/50 (Madry et al., 2018; Goldblum et al., 2020). Further, to understand the additional errors the students make compared to the robust teacher (as per the motivation in section 3 and definition 1), we introduce the concept of distillation agreement, which is simply the percentage of examples from a dataset that do not cause disagreements between teacher and student under perturbation.
Definition 3 (Distillation Agreement)
Let S^{t,s}_{ADE,ϵ} ⊆ D denote the set of adversarial disagreement examples (definition 1) of a teacher-student pair (f_t, f_s) within a dataset D. We then define the distillation agreement, A^t_ϵ(f_s), of the student f_s w.r.t. its teacher f_t on D for a given ϵ > 0 as:

A^t_ϵ(f_s) = 1 − |S^{t,s}_{ADE,ϵ}| / |D|,

where, as usual, |D| denotes the size of the set D.
Since S^{t,s}_{ADE,ϵ} is hard to compute exactly due to the existential quantifier over x′ ∈ B_ϵ(x), we approximate it using 50-step PGD attacks, again with a step size of 2.5ϵ/50, maximising the cross-entropy loss of the student against the hard predictions of its teacher, and report instead the empirical distillation agreement, Ã^t_ϵ(f_s).
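For illustration, the empirical distillation agreement can be computed as sketched below; this is our own code with hypothetical names, where `attack` is any best-effort perturbation search (e.g. the pgd_attack sketch above) run against the teacher's hard predictions.

    def empirical_agreement(teacher, student, loader, eps, attack):
        # Ã^t_ϵ(f_s): fraction of examples that the attack cannot turn
        # into adversarial disagreement examples. Only examples on which
        # the pair agrees before perturbation can become ADEs.
        ade, total = 0, 0
        for x, _ in loader:
            t_hard = teacher(x).argmax(dim=1)
            s_hard = student(x).argmax(dim=1)
            x_adv = attack(student, x, t_hard, eps)
            flipped = teacher(x_adv).argmax(dim=1) != student(x_adv).argmax(dim=1)
            ade += ((t_hard == s_hard) & flipped).sum().item()
            total += x.shape[0]
        return 1.0 - ade / total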
Robustness.
The empirical distillation agreement with respect to the teacher, Ã^RT_ϵ, and the robust accuracies for each of the networks on both the MNIST and F-MNIST datasets are shown in table 1.
We observe on both datasets that incorporating any form of adversarial distillation (ARD, RSLAD, or FD) produces a more robust student, as seen through the robust accuracies, than SD. The robust accuracies of f_FD, f_ARD and f_RSLAD are similar. However, we note that RSLAD achieves the greatest robust accuracy scores across all values of ϵ on both MNIST and F-MNIST. This agrees with the general finding of Zi et al. (2021) that using soft labels over hard labels helps produce more robust student networks.
Moreover, we see that the empirical distillation agreement scores of all adversarially distilled students on both datasets are significantly greater than for f_SD, indicating that adversarial training helps to align the predictions of teacher and student networks under perturbation. On MNIST, we observe that f_FD obtains the greatest empirical distillation agreement for all but the smallest ϵ. However, on the F-MNIST dataset, we observe that f_RSLAD obtained the greatest empirical distillation agreement, with f_FD achieving, on average, a higher agreement score than f_ARD. One potential cause for the difference in results could be the sensitivity of the trained networks to hyperparameter choice. Future work could investigate this or other reasons for the differences in adversarial disagreement scores across datasets.

Faithfulness and Relative Calibration. To understand the faithfulness (and therefore relative calibration of confidences) of the different students w.r.t. the robust teacher networks, we report aggregated values of empirical lower bounds (EmpLB) and verified faithfulness bounds (FaithUB) over the test set, and for each ϵ, for both MNIST and F-MNIST in table 2. To fully capture the distribution of the bounds over the test set, we additionally present the results in fig. 2 for MNIST and in fig. 3 for F-MNIST. Here, lower is better, implying a greater degree of relative calibration between a teacher-student pair.
From table 2, we observe that the empirical attack bounds (EmpLB) on both datasets are on average smaller for f_FD, f_RSLAD and f_ARD than for f_SD. This indicates that the adversarially distilled students are, on average, empirically more relatively well-calibrated w.r.t. the robust teacher than SD students. This is exemplified further in fig. 2, where there is a greater shift in the empirical bound distributions for f_SD than for f_FD, f_RSLAD and f_ARD. In addition, on the MNIST dataset, we see that the empirical bounds are lower for f_FD than for the three other student networks. Moreover, on MNIST, the standard deviations of the bounds for f_FD are smaller across all values of ϵ, showing a more concentrated spread of lower empirical bounds than for the other students, with fewer images creating large differences in the confidences of f_FD and f_RT. This is further highlighted in the upper tails of the EmpLB distributions in fig. 2. On F-MNIST, as in table 1, we again observe a change of results, with f_RSLAD attaining the lowest empirical bounds, followed by f_FD.
Looking at the faithfulness bounds (FaithUB) for the f_FD, f_RSLAD and f_ARD students on both datasets in table 2, we observe that they are significantly lower across all values of ϵ than for f_SD, which confirms the empirical observations discussed above. Moreover, the higher standard deviation of the faithfulness bounds at ϵ values of 0.15 and 0.2 for f_FD, f_SD and f_ARD indicates that these students have less of an accumulation of images with faithfulness bounds of 1, the largest possible difference in confidences between a teacher and its student. This is further shown on MNIST in fig. 2, where we observe empirically and verifiably that the maximum confidence difference between the adversarially distilled students and their teacher is smaller for a greater number of images, with their faithfulness bounds more closely tracking the empirical attack bounds for smaller values of ϵ than for f_SD. This suggests, in particular, that the degree of relative calibration between these adversarially trained students and their teacher is greater in a verifiable, upper-bound sense.
On both datasets, for all values of ϵ apart from ϵ = 4/255, we observe that the faithfulness bounds are, on average, lower for f_FD than for all of the other students. This shows that f_FD is verifiably more relatively calibrated, in an upper-bound sense, to its teacher on both MNIST and F-MNIST. On MNIST, this aligns with and supports the empirical observations that f_FD is more faithful to its teacher in terms of confidences. On F-MNIST, we have a disparity: f_FD is verifiably more relatively calibrated across nearly all values of ϵ, as shown by the lower verified bounds, but is empirically observed to be less relatively well-calibrated to its teacher than f_RSLAD, as shown by f_FD's greater empirical lower bounds and lower empirical distillation agreement.
Finally, we note that the standard deviations of the computed FaithUB are large for larger values of ϵ on both datasets. This is a consequence of the bounds being capped at 1; in cases where the mean is high, a higher variance is therefore desirable. This is observable from the distribution of the MNIST bounds for ϵ = 0.15 and 0.2 plotted in fig. 2, where the higher standard deviation of FD comes from a greater accumulation of bounds in the lower sections of the histograms.
FD as a more verifiable distillation method than RSLAD. As mentioned above, from table 1 and table 2, we see that the empirical measures of distillation robustness that we have introduced, EmpLB and empirical distillation agreement, are worse for f_FD than for f_RSLAD on F-MNIST. However, we note that the general trend of f_FD producing tighter FaithUB holds for both MNIST and F-MNIST.
These empirical measures, however, provide no guarantees on the relative calibration of a student to its teacher. Indeed, such methods are highly dependent on the method of attack used for their computation. In particular, table 3 shows three examples, x_1, x_2, and x_3, from the F-MNIST test set where, for ϵ = 8/255, we compute EmpLBs using PGD attacks with 1, 10, 25, and 50 steps. We observe that f_RSLAD and f_FD flip between giving the lowest, and hence best, EmpLB. This indicates that the aforementioned empirical methods cannot be used on their own for comparisons. Moreover, combined with the difference in results across MNIST and F-MNIST, this highlights the importance of verifiable methods of bounding the maximum difference in confidences between a teacher and student. In particular, FaithUBs do provide certified upper bounds, as they are computed using linear relaxations as a MILP and are therefore independent of, and not subject to, a choice of attack method such as PGD. As a result, FaithUBs are a more robust method of evaluating the relative calibration of a teacher-student pair and, importantly, provide guarantees for downstream applications such as safety-critical devices. It is still worth noting that a more complete picture is given when FaithUB is evaluated alongside EmpLB.
Limitations
For larger values of ϵ, the verified faithfulness bounds for all our student networks are loose compared to the empirically produced bounds. This indicates that our method of computing faithfulness bounds struggles to scale with increasing values of ϵ. Since LP methods will provide tighter bounds than bound propagation methods such as CROWN (Zhang et al., 2018), this proves to be a limitation for verifying the greater degree of relative calibration seen empirically at larger values of ϵ. This trend continues to CIFAR-10 (Krizhevsky et al., 2009), where our experiments showed that the methods introduced in this paper do not scale well. This suggests that a more sophisticated method of producing faithfulness bounds needs to be developed to verify larger networks for larger values of ϵ on more complicated datasets.
Conclusion
The setting of faithful imitators provides a framework for empirically and verifiably reasoning about the relative calibration between a student network and its teacher in terms of confidences in a KD setting. This allows for guarantees on the maximum difference in confidences between the two networks, which is important for the safe deployment of student networks in safety-critical environments. Our experiments on MNIST and Fashion-MNIST suggest that when combined with a robust teacher, ARD, RSLAD, and our FD training can produce empirically and verifiably relatively better-calibrated teacher-student network pairs than standard non-adversarial distillation training. Our FD-trained students were observed to be verifiably more relatively well-calibrated on average to their teacher network than standard non-adversarial distillation, RSLAD, or ARD students on both datasets. However, we observed that the results of empirical methods of bounding the difference in confidences varied across the two datasets for the adversarially trained students, highlighting the need for verifiable guarantees provided by our framework and methods.
Future work could explore the relative calibration between teacher-student pairs on larger datasets and further compare the relative calibration of teacher-student pairs trained using ARD, RSLAD, and FD across datasets. Moreover, more sophisticated and scalable ways of producing faithfulness bounds may be needed to develop this work further. | 2023-06-08T01:15:52.363Z | 2023-06-07T00:00:00.000 | {
"year": 2023,
"sha1": "e6280df35f6f6e2f379a62da31ad6fd786cd1a75",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e6280df35f6f6e2f379a62da31ad6fd786cd1a75",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
238209592 | pes2o/s2orc | v3-fos-license | Control of mildew in vines with cinnamon extract and catalase activity in organic production
Management with synthetic fungicides to control phytopathogens in viticulture can cause environmental pollution and leave residues in grape clusters. The objective of this study was to verify the effect of aqueous cinnamon extract on the in vitro and in vivo control of Plasmopara viticola and on catalase activity in 'Isabel Precoce' vines. The treatments were: aqueous cinnamon extract (ACE) at concentrations of 0.12%, 0.25% and 0.50% plus 0.25% vegetable oil (VO); the standard treatments were VO (0.25%), Bordeaux mixture 1:1:100 (lime:copper sulfate:water) and water only. The germination tests of P. viticola sporangia were carried out with incubation periods of 4 and 24 hours of the pathogen in contact with the treatments. In addition, the area under the disease progress curve (AUDPC) and the activity of the catalase enzyme were estimated in plants grown in the greenhouse. The results indicated that the treatments with 0.12%, 0.25% and 0.5% ACE with VO reduced the germination of P. viticola. Regarding the AUDPC, the 0.25% ACE dose associated with VO reduced it by 65% and 67% on leaf discs and on vines in the greenhouse, respectively. This is related to the induction of CAT activity provided by this dose at 2HBA, 2HAI and 4HAI. Thus, ACE associated with VO can be used to control the downy mildew of the 'Isabel Precoce' vine.
Introduction
Viticulture faces problems caused by phytopathogens that can cause qualitative and quantitative losses, in addition to the costs incurred by control measures (Pons et al. 2018). Among these pathogens is the oomycete Plasmopara viticola [Berk. & Curt] Berl. & de Toni, the causal agent of vine mildew, which reduces productivity by compromising the aerial part of the plant and thereby impairing vegetative development (Gessler et al. 2011).
For the control of grape mildew, synthetic fungicides are used which, despite their efficiency, select for pathogens resistant to the active ingredients, cause environmental pollution and also remain as residues in grape clusters (Peruch et al. 2007).
As a way to eliminate, or at least reduce, these problems, alternative products are sought that are less polluting to the environment and effective in disease control, acting either directly on the pathogen or as elicitors that induce resistance in the vine (Pinto et al. 2013).
Among the alternative products that may provide fungitoxic action is cinnamon (Cinnamomum zeylanicum), which contains volatile compounds, such as eugenol and cinnamaldehyde, that may adversely affect the development of microorganisms (Jham et al. 2005). Flávio et al. (2014) verified that cinnamon extract at a concentration of 30% reduced the infection of sorghum seeds by Penicillium sp., Aspergillus sp., Trichoderma sp., Curvularia sp., Rhizopus sp., Colletotrichum sp., Fusarium sp., Drechslera sp., Alternaria sp. and Chaetomium sp. Additionally, cinnamaldehyde may enhance the activity of the antioxidant enzymes superoxide dismutase, glutathione S-transferase and catalase, which are important in inducing plant resistance to pathogen attack (Brugalli 2003).
To prevent the volatilization of the active ingredients present in cinnamon, allowing the expression of the fungitoxic effect and the activation of defense enzymes such as catalase, and to increase the adhesion of the product to the leaf, vegetable oil can be used as an adjuvant (Bogorni & Venturoso 2003).
In this context, the present work aimed to verify the effect of cinnamon (C. zeylanicum) and vegetable oil (as an adjuvant) on the control of grapevine mildew in vitro and in vivo, and its effect on the activity of the enzyme catalase in 'Isabel Precoce' vines in an organic production system.
Methodology
The experiments were conducted in the Phytopathology laboratory and in a greenhouse, both belonging to the Department of Agronomy of the State University of the West Center (UNICENTRO), Guarapuava-PR.
To prepare the treatments, cinnamon bark (C. zeylanicum) obtained from local trade was immersed in distilled water at 70 ºC to obtain aqueous solutions at concentrations of 0.12%, 0.25% and 0.50% (w/v). Subsequently, the infusions rested for 24 hours in a container. After this time, the preparation was filtered and 0.25% vegetable oil (Natur'l oil® Stoller, 930 mL/L (93% v/v), Cosmópolis, São Paulo) was added.
For the germination of P. viticola sporangia, 100 mL of sterilized distilled water containing Tween 80 was added to leaves with typical symptoms of vine mildew and, with a Drigalski loop, the mycelium was scraped off to release the sporangia.
Aliquots of 40 μL of the sporangia suspension and equal amounts of the solution from each treatment were placed into individual wells of ELISA test plates.
The plates were then maintained in a growth chamber at 25 °C in the dark for the incubation periods of 4 and 24 hours. To stop sporangium germination, 20 μL of lactophenol cotton blue dye was added to each well at the time scheduled for evaluation. Subsequently, the percentage of germinated sporangia (100 sporangia per repetition) was evaluated under an inverted optical microscope. Germinated sporangia were those that showed release of zoospores. The experimental design was completely randomized with six treatments and five replications (Garcia et al. 2018).
To evaluate the severity of mildew (P. viticola) on 'Isabel Precoce' vine leaves, healthy leaf discs 5 cm in diameter were used, surface-disinfested with 2% sodium hypochlorite for 5 seconds and dried at room temperature.
Subsequently, the leaf discs were immersed for 1 minute in containers with the different treatments and then distributed on moistened foam in Gerbox boxes at room temperature.
After 24 hours, a water suspension of 5×10⁴ sporangia mL⁻¹ of P. viticola was inoculated onto the leaf discs, and after 48 hours mildew severity was evaluated for seven days according to the diagrammatic scale proposed by Azevedo (1997).
Severity data were subsequently transformed into the area under the disease progress curve (AUDPC) based on the formula: AUDPC = Σ_{i=1}^{n−1} [(y_i + y_{i+1})/2] × (t_{i+1} − t_i), where n = number of assessments; y = disease severity (%); t = time (days). The experimental design was completely randomized with six treatments and four replicates with six discs each.
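As a worked illustration of the AUDPC formula (a sketch with hypothetical naming, not part of the original methods):

    def audpc(severity, times):
        # Trapezoidal AUDPC: mean severity between consecutive
        # assessments multiplied by the interval between them, summed.
        area = 0.0
        for i in range(len(times) - 1):
            area += (severity[i] + severity[i + 1]) / 2.0 * (times[i + 1] - times[i])
        return area

    # e.g. audpc([5, 12, 30, 55], [0, 2, 4, 6]) = 17 + 42 + 85 = 144 %·days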
In order to evaluate the severity of mildew (P. viticola) on 'Isabel Precoce' vines in the greenhouse, scions grafted onto the 'Paulsen 1103' rootstock were planted in 1 L pots filled with commercial substrate (Plantmax®, composed of 60% Pinus bark, 15% "fine" and 15% "superfine" vermiculite, and 10% humus) and kept in a greenhouse under sprinkler irrigation. After the first leaves of the vines had sprouted, treatments were sprayed every 7 days with a hand sprayer.
On 02/26/2014, inoculation with a suspension of 5×10⁴ sporangia of P. viticola was carried out; the vines were then kept in a humid chamber for 2 days, and after 7 days severity assessments (% of injured area) were initiated, according to the scale of Azevedo (1997), on 4 previously identified leaves on each plant. Severity data were transformed into the area under the disease progress curve (AUDPC) based on the formula: AUDPC = Σ_{i=1}^{n−1} [(y_i + y_{i+1})/2] × (t_{i+1} − t_i), where n = number of evaluations; y = disease severity (%); t = time (days). The experimental design was in randomized blocks with six treatments and four replicates, each pot being an experimental plot.
For catalase enzyme activity in the vines kept under greenhouse conditions, leaf discs of approximately 5 cm in diameter were collected after four treatment applications. These collections were carried out 2 hours before (2HBA) the fourth application of treatments and the inoculation of P. viticola, as well as 2 (2HAI), 4 (4HAI) and 6 hours after inoculation (6HAI).
The leaf samples were protected with foil, cooled on ice and stored in a freezer at −80 °C until the preparation of extracts for analyses of total protein and catalase activity.
Leaf discs were weighed and then macerated with liquid nitrogen in a mortar and mechanically homogenized with 1% (w/w) PVP (polyvinylpyrrolidone) and 4 mL of 50 mM potassium phosphate buffer (pH 7.0) containing 0.1 mM EDTA. The solution was then centrifuged at 13,000 g for 30 min at 4 °C, and the supernatant obtained was considered the enzymatic extract, which was stored at −80 °C until the analyses were performed. From these extracts, the total protein content and catalase activity were determined.
For the determination of protein content according to Bradford (1976), 2.5 mL of Bradford reagent was added, under stirring, to each 50 μL of supernatant. After 5 min, the absorbance was read at 595 nm in a spectrophotometer (Shimadzu, Model UV-1800). The concentration of proteins, expressed in mg per mL of sample (mg protein mL⁻¹), was determined using a standard curve of bovine serum albumin (BSA) at concentrations of 0 to 0.5 mg mL⁻¹, obtained by the Bradford method: y = −0.0456 + 0.733x.
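Assuming, as the standard curve suggests, that y denotes absorbance at 595 nm and x the protein concentration, the sample concentration follows by inverting the regression; the snippet and numbers below are illustrative only, not part of the original protocol.

    def protein_mg_per_ml(abs_595):
        # Invert the standard curve y = -0.0456 + 0.733 * x, where y is
        # taken to be absorbance and x protein in mg mL^-1.
        return (abs_595 + 0.0456) / 0.733

    # e.g. an absorbance of 0.25 gives (0.25 + 0.0456) / 0.733 ≈ 0.40 mg mL^-1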
Catalase (CAT) activity (EC 1.11.1.6) was quantified via the stable complex formed by ammonium molybdate with hydrogen peroxide (Abs 405 nm). The enzyme extract (0.2 mL) was incubated in 1 mL of reaction mixture containing 60 mM hydrogen peroxide in 60 mM potassium phosphate buffer, pH 7.4, at 38 °C for 4 min. After 4 min of incubation, 1 mL of 32.4 mM ammonium molybdate was added to stop the consumption of hydrogen peroxide by the enzyme present in the extract. A blank was prepared for each sample by adding ammonium molybdate to the reaction mixture, omitting the incubation period. The yellow complex of molybdate and hydrogen peroxide was measured at 405 nm. The difference between the blank absorbance and that of the incubated sample indicated the amount of hydrogen peroxide used by the enzyme. The H2O2 concentration was determined using the extinction coefficient ε = 0.0655 mM⁻¹ cm⁻¹.
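As an illustrative calculation (not from the original text), the Beer-Lambert conversion implied by this extinction coefficient, assuming a 1 cm light path, is:

    def h2o2_consumed_mM(abs_blank, abs_sample, epsilon=0.0655, path_cm=1.0):
        # Beer-Lambert: ΔA / (ε·l) converts the absorbance difference at
        # 405 nm into mM of H2O2 consumed by catalase during incubation.
        return (abs_blank - abs_sample) / (epsilon * path_cm)

    # e.g. ΔA = 0.131 over the 4 min incubation gives 0.131 / 0.0655 = 2.0 mM
    # of H2O2 consumed (0.5 mM min^-1).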
The results were submitted to analysis of variance and comparison of means by the Tukey test at the 5% probability level, with the statistical program SISVAR (Ferreira 2011).
Results
For the germination of P. viticola sporangia, at the times evaluated, there was a quadratic effect as a function of the doses of aqueous cinnamon extract with vegetable oil. At 4 and 24 hours, the 0.25% and 0.5% ACE + VO treatments reduced the germination of P. viticola by 60% and 56%, and by 70% and 73.8%, respectively, when compared to the control. Direct contact of VO with the pathogen in these two evaluation periods had a fungitoxic effect, reducing germination by 62.3% and 67.6%; for BM, these values were 34.6% and 52.7%, respectively (Figure 1A and 1B).
For the AUDPC of mildew on vine leaves, a quadratic effect was observed as a function of the doses. The 0.12% and 0.25% ACE doses reduced the AUDPC of grapevine mildew by 97.2% and 65%, respectively, and did not differ statistically from the treatment composed only of VO, which decreased it by 90%, all relative to the control treatment (Figure 2A; differences statistically significant at 5% probability).
Discussion
Cinnamon extract (C. zeylanicum) contains eugenol and cinnamaldehyde, which have a negative effect on the development of microorganisms and can act on hyphal growth structures and the virulence of pathogens (Khan & Ahmad 2011). These compounds are potent antifungal agents that can control the development of Colletotrichum gloeosporioides, Alternaria sp. and Penicillium chrysogenum (Kumar et al. 2009).
This toxic effect of cinnamon extract was confirmed by the results of Venturoso et al. (2011), who verified that a 20% dose reduced the mycelial growth of Phomopsis sp. by 43%.
Regarding VO, Garcia et al. (2015) also emphasize its direct control of P. viticola. The authors note that a 0.80 mL L⁻¹ dose of this product reduces the germination of P. viticola by 77% after 24 hours of incubation when compared to the control, evidence that the longer the contact time of the product with the sporangia, the lower their germination. BM proved effective for the control of grape mildew; however, it should be taken into account that the application of this compound in vineyards increases copper contents in the 20-40 cm soil layer (Casali et al. 2008). Dagostin et al. (2011) emphasize that VO has the same effect as the copper (present in BM) for the control of mildew.
The present study shows that this process was potentiated by the association with ACE, mainly at the 0.25% dose. Possibly, this fact is related to the high adherence of ACE to the leaf surface allowed by VO (Zyl et al. 2010).
The application of plant extracts, such as cinnamon, has the advantage of producing organic plants free of toxic residues, while providing more than one antifungal compound that helps in crop management (Shuping & Eloff 2017). This fungitoxic effect of cinnamon was confirmed by Flávio et al. (2014), who observed that the aqueous extract of this plant at a concentration of 30% reduced the fungal microflora of sorghum seeds by 61%.
The reduction of the AUDPC of mildew on vines treated with the 0.25% ACE dose is related to its direct effect on P. viticola (Figure 1) and also to CAT activity, this enzyme being the main route of H2O2 degradation.
Thus, its activation eliminates the excess of this molecule, which at high concentrations can cause cell damage (Mittler 2017).
This treatment therefore probably promoted H2O2 synthesis prior to the inoculation of P. viticola, activating plant defense mechanisms.
This activation occurs through the action of this molecule as a secondary messenger that activates pathogenesis-related genes, induces phytoalexin synthesis, reinforces the cell wall by increasing the cross-links of hydroxyproline-rich glycoproteins to the polysaccharide matrix, and/or acts as a direct toxicant on the fungal phytopathogen (Quan et al. 2008).
It should be considered that the pathogen also releases proteolytic enzymes that damage the plasma membrane of the cell and consequently can also trigger H2O2 accumulation (Quan et al. 2008). This is observed in the 0.25% ACE treatment at 4HAI, potentiating the activation of defense mechanisms in these grapevine plants and the consequent reduction of the AUDPC of mildew (Figures 2 and 3C).
In general, the 0.25% ACE dose had a direct effect on P. viticola, reducing the germination of the pathogen, and also activated defense mechanisms of the 'Isabel Precoce' vine, reducing the AUDPC of mildew. It is also worth noting that ACE is easy to acquire and low-cost, so new experiments under field conditions are recommended so that the potential of this extract for application in organic commercial vineyards can be demonstrated.
Conclusion
The 0.25% dose of aqueous cinnamon extract (Cinnamomum zeylanicum) associated with vegetable oil had a direct fungitoxic effect on Plasmopara viticola and reduced the AUDPC of mildew on leaf discs and on vines in the greenhouse.
The 0.25% dose of the aqueous cinnamon extract also increased catalase activity, inducing the resistance of these plants to vine mildew.
The treatments with cinnamon extract were efficient in controlling vine mildew, with the advantage of being suitable for organic production of cv. Isabel Precoce.
It is suggested that future work be carried out with these treatments under field conditions. | 2021-09-27T20:02:23.432Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ab441d6ec9cccf9418d7dab95488c84abbf0d15b",
"oa_license": "CCBY",
"oa_url": "https://rsdjournal.org/index.php/rsd/article/download/18885/16756",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "da92d0db9be275646f54697387210610dc09e4ba",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
218469815 | pes2o/s2orc | v3-fos-license | Inhibitors of DNA Glycosylases as Prospective Drugs
DNA glycosylases are enzymes that initiate the base excision repair pathway, a major biochemical process that protects the genomes of all living organisms from intrinsically and environmentally inflicted damage. Recently, base excision repair inhibition proved to be a viable strategy for the therapy of tumors that have lost alternative repair pathways, such as BRCA-deficient cancers sensitive to poly(ADP-ribose)polymerase inhibition. However, drugs targeting DNA glycosylases are still in development and so far have not advanced to clinical trials. In this review, we cover the attempts to validate DNA glycosylases as suitable targets for inhibition in the pharmacological treatment of cancer, neurodegenerative diseases, chronic inflammation, bacterial and viral infections. We discuss the glycosylase inhibitors described so far and survey the advances in the assays for DNA glycosylase reactions that may be used to screen pharmacological libraries for new active compounds.
DNA in living cells is always exposed to many damaging factors of environmental and endogenous origin. These insults produce various modifications of nucleobases and lead to the formation of apurinic/apyrimidinic (AP) sites and DNA strand breaks. Accumulating damage significantly affects genomic stability and may ultimately end in mutations or cell death [1,2].
Mechanisms that correct genomic damage, commonly known as DNA repair systems, exist in all forms of life [1]. The most abundant DNA lesions-deaminated, oxidized, and alkylated bases, AP sites, single-strand breaks-are repaired by the base excision repair (BER) system [1,3]. Initially, the damaged base is recognized by a DNA glycosylase (of which, 11 are known in humans, and 8 in Escherichia coli), which cleaves the N-glycosidic bond between the nucleobase and the C1 deoxyribose atom, forming an AP site. Afterwards, an AP endonuclease hydrolyzes the phosphodiester bond 5 to the AP site. The repair proceeds with the incorporation of the correct dNMP by a DNA polymerase and the ligation of the remaining single-strand break ( Figure 1).
DNA glycosylases belong to several major structural superfamilies and have different, sometimes overlapping, substrate specificities (Table 1). They all, however, have a common reaction mechanism: the C1 of the damaged nucleotide is attacked by a nucleophilic moiety (either an activated water molecule in monofunctional DNA glycosylases or an enzyme's amino group in bifunctional ones), the damaged base departs, and either an AP site is generated (monofunctional glycosylases), or a Schiff base covalent intermediate forms (bifunctional glycosylases) [4][5][6]. The Schiff base undergoes βor β,δ-elimination and is then hydrolyzed, leaving a nick with either an α,β-unsaturated aldehyde or a phosphate as the 3 -terminal group. To access C1 , most glycosylases flip the target nucleotide from the DNA stack into the enzyme's active site, which is equipped with a deep lesion recognition pocket, representing a convenient druggable target [7]. (1); the resulting apurinic/apyrimidinic (AP) site is cut by an AP endonuclease (2); the deoxyribose fragment is removed by a deoxyribophosphate lyase (3); a correct dNMP is incorporated by a DNA polymerase, and the nick is sealed by a DNA ligase (4). Nicked DNA also activates signaling by poly(ADP-ribose)polymerase 1 (PARP1), which initiates poly(ADP-ribosyl)ation of many chromatin proteins to facilitate the access of DNA repair factors to the site of damage. In human cells, BER is tightly regulated at several levels. One of the best-studied players orchestrating the BER process is poly(ADP-ribose)polymerase 1 (PARP1), together with its homologs PARP2 and PARP3, which act as nick sensors and regulate the access of repair factors to the damage sites through modification of acceptor proteins and DNA ends by poly(ADP-ribose) [8,9]. PARPs attracted attention as potential targets for cancer treatment after PARP inhibitors were discovered to be highly toxic for cells with inactivated homologous recombination repair pathway [10,11]. In human tumors, recombination repair deficiency is often associated with inactivating mutations in the BRCA1 and BRCA2 genes, the main driver mutations in hereditary breast and ovarian cancers. BRCA1 and BRCA2 proteins regulate the DNA break response through a pathway that does not overlap with BER [12].
Blocking both these pathways is lethal for the cell, while normal cells with active recombination repair survive PARP1 inhibition. The lethal effect of PARP inhibitors is largely mediated by PARP trapping at nicks [13,14], which mainly originate from ribonucleotides misincorporated during DNA replication [15]. Several PARP inhibitors are presently approved for clinical use, and several hundred clinical trials are ongoing.
Inhibitors of DNA Glycosylases: General Considerations
The example of PARP inhibitors highlights the concept of synthetic lethality, which underlies most of the attempts to develop BER inhibitors into practically useful drugs. Two conditions must be satisfied for such compounds to be effective. First, the target cells must experience genotoxic stress, induced either directly by DNA-damaging factors or indirectly through some kind of metabolic stress (nucleotide pool imbalance, oxidative stress, etc.). Second, if DNA damage caused by this type of stress can be repaired bypassing BER, the bypass must be blocked by mutations or another drug used in combination with a BER inhibitor. The second requirement is often satisfied in cancers, where mutations in DNA repair genes are usually among driver mutations. Genotoxic treatment may also be tuned to produce lesions repaired predominantly by BER (such as uracil accumulated through antimetabolite treatment, uracil analogs used as drugs or prodrugs, or oxidized purines appearing through MTH1 inhibition or induced by photodynamic therapy), in which case BER impairment alone could be sufficient to effect considerable cytotoxicity.
Two considerations are crucial when assessing the cytotoxic potential of DNA glycosylase inhibitors. First, unlike the enzymes underlying common BER steps, such as break signaling by PARPs or AP site cleavage by AP endonucleases, DNA glycosylases are specific for damaged bases, and their inhibition will affect only a subset of BER reactions. This in fact may be advantageous for fine-tuning or selection of concurrently used DNA damaging agents, many of which produce specific primary lesions rather than AP sites of strand breaks [16]. Second, DNA glycosylases are often ambivalent with respect to cell-killing effects of DNA damage (as discussed below in sections about specific types of lesions and their repair): they may either counteract the damage by repairing the induced lesions or potentiate the damage by converting damaged bases to AP sites or strand breaks, which are generally more cytotoxic. Thus, the inhibition of DNA glycosylases is not always warranted for inducing synthetic lethality in cancer cells or bacteria. It is always desirable to validate a particular DNA glycosylase as a drug target by knockout or knockdown approaches in an appropriate cell line or pathogen.
The inhibitors discussed in the remaining parts of this paper are mostly small-molecule compounds. Almost all DNA glycosylases are inhibited to a certain degree by non-specific single- or double-stranded DNA, competing for binding with substrate DNA [17][18][19][20][21], and a number of modified nucleotides that are tightly bound but not cleaved when incorporated into oligonucleotides have been described [22][23][24][25][26][27][28]. Moreover, binding and inhibition of DNA glycosylases by polyanions such as heparin [29-32] likely stems from the ability of these enzymes to bind nucleic acids. Minor-groove ligands of various chemical nature also interfere with DNA glycosylase binding [33,34]. Despite the obvious importance of such interactions for the biological functions of DNA glycosylases, delivery and targeting problems thus far prevent the therapeutic use of oligonucleotides and other macromolecular polyanions as mass action-driven inhibitors of intracellular enzymes. However, one strategy known for a while and recently applied to DNA glycosylases is the use of prodrugs that are metabolized to nucleoside triphosphate analogs and incorporated into DNA [35,36]. For example, 1′-cyano-2′-deoxyuridine triphosphate is a good substrate for DNA polymerases and, when incorporated into DNA, inhibits E. coli uracil-DNA glycosylase (Ung) and human UNG, displaying nanomolar Ki values [37]. Interestingly, some lesions, such as 2-deoxyribonolactone and 5-hydroxy-5-methylhydantoin [27,38,39], demonstrate an intrinsic ability to trap bifunctional DNA glycosylases covalently on DNA, reminiscent of the PARP-trapping potential of cytotoxic PARP inhibitors. Thus, the development of nucleotides that can be incorporated into DNA and trap DNA glycosylases may be an interesting direction in glycosylase inhibitor design.
Uracil in DNA: Synergism of Glycosylase Inhibitors and Antimetabolites
Antimetabolites, the class of drugs interfering with nucleotide metabolism pathways and thereby with DNA or RNA synthesis, are one of the staples of therapeutic intervention against cancer and bacterial and protozoan infections and are especially useful in combination therapy [40,41]. Many clinically used antimetabolites, such as antifolates, interfere with thymine biosynthesis and cause the accumulation of uracil (or its analogs) in genomic DNA [42,43]. The repair of drug-induced genomic uracil is a double-edged sword: while it protects cells from the effects of this non-canonical nucleobase at low levels of substitution, extensive uracil buildup and excision are toxic and may be the primary cause of cell death after exposure to antifolates [44,45]. Therefore, the inhibition of uracil repair may have different consequences depending on the level of DNA modification and, possibly, on the nature of the modification (if different from uracil).
Human cells possess four DNA glycosylases capable of excising uracil from DNA. However, for three of them (TDG, SMUG1, and MBD4), uracil either is not the main substrate or is removed only from specific contexts (for example, methylated CpG dinucleotides). The main enzyme responsible for uracil repair, UNG, is among the most important factors limiting the efficiency of antifolates and fludarabine, whose action is based on the accumulation of uracil in genomic DNA [46][47][48]. UNG knockdown in human prostate cancer cell lines increases their sensitivity to H2O2 and doxorubicin [49]. Non-small cell carcinoma and lung adenocarcinoma cells develop spontaneous resistance to pemetrexed, a folic acid analog inhibiting dihydrofolate reductase, thymidylate synthase, and glycinamide ribonucleotide formyltransferase, due to a significant increase in the level of UNG, and suppression of UNG expression returns the sensitivity to normal [50,51]. Some uracil analogs that accumulate in DNA (such as 5-fluorouracil) are more toxic for cells when SMUG1, rather than UNG, is downregulated [52,53]. However, due to the structural similarity between UNG, TDG, and SMUG1, low-molecular-weight inhibitors will most likely be active against all three enzymes; therefore, the nature of the glycosylase that removes uracil during treatment with antimetabolites is not of primary importance.
UNG inhibitors in combination with genotoxic stress effectively suppress the growth of Plasmodium falciparum [54], Trypanosoma brucei [55,56], and Trypanosoma cruzi [57,58], which makes BER a promising target for drug intervention in protozoan infections. Importantly, some inhibitors preferentially suppress the activity of UNG from infectious agents but have little effect on the human enzyme (Table 2).
Low-molecular-weight UNG inhibitors are still at a preclinical stage. The literature describes three main classes of such inhibitors. All of them are competitive and mechanism-based, mimicking certain features of the transition state of the UNG-catalyzed reaction [6,79]. Free uracil and its analogs inhibit UNG enzymes from various sources with submillimolar to millimolar IC50 values [59,80-85], so successful inhibitors have required extensive modification of the base to ensure tight binding. Historically, the first class of compounds active against human, Plasmodium, and herpes simplex virus type 1 UNGs was the 6-(p-alkylanilino)uracils, of which 6-(p-n-octylanilino)uracil showed the strongest affinity, with an IC50 of 8 µM for the viral enzyme [54,59,86-88] (Table 2). Bipartite inhibitors, structurally similar to the 6-substituted derivatives, consist of a uracil base or its analog linked to a phenolic or benzoic fragment [60-62,89] (Table 2). In the structures of bipartite inhibitors bound to UNG, the uracil part occupies the uracil-binding pocket of the enzyme, while the aromatic fragment lies in the DNA-binding groove [61,62] (Table 3). Finally, triskelion inhibitors contain three functional groups at the ends of a branched linker: either one uracil analog and two aromatic fragments, or two uracil analogs and one aromatic fragment [63] (Table 2). Interestingly, gentamicin, a clinically used aminoglycoside antibiotic, was reported to inhibit E. coli Ung [64,65] (Table 2). Although the reported IC50 value was quite high (0.4-1.5 mM), this effect may reflect interactions between Ung and the sugar part of DNA [21,90] and suggests another possible direction for inhibitor development. (Notes to Table 2: * IC50 for DNA polymerase activity in the presence of the D4/A20 complex; ** Kd or Ki directly measured; *** non-glycosylase member of the GO system (see Sections 4 and 5).)

Thymine-DNA glycosylase (TDG) has recently been validated as a possible drug target in melanoma: its knockdown causes cell cycle arrest, senescence, and cell death in melanoma cell lines but not in normal cells and prevents tumor growth in a xenograft model [91]. Screening of several mid-scale compound libraries yielded about 40 inhibitors with a variety of structures and IC50 > 10 µM [91].
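Throughout this section, inhibitor potencies are compared via IC50 values from dose-response screens. As a minimal sketch of how such a value is typically extracted from screening data (the concentrations, activities, and parameters below are illustrative, not taken from the cited studies), one can fit a Hill-type inhibition curve to fractional activity measurements:

```python
# Minimal sketch: estimating an IC50 from dose-response data.
# All concentrations and activities below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n):
    """Fraction of uninhibited enzyme activity at inhibitor concentration conc (µM)."""
    return 1.0 / (1.0 + (conc / ic50) ** n)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])         # µM
activity = np.array([0.98, 0.95, 0.85, 0.62, 0.35, 0.15, 0.05])  # fractional

(ic50, n), _ = curve_fit(hill, conc, activity, p0=[5.0, 1.0])
print(f"IC50 ~ {ic50:.1f} µM, Hill slope ~ {n:.2f}")
```

Screening readouts differ (base excision, polymerase activity, fluorescence), but they are commonly reduced to an IC50 in essentially this way.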
Uracil-DNA glycosylases from poxviruses (D4 according to vaccinia virus naming convention) provide a unique drug target. While they possess quite efficient uracil-removing activity, the main role of these enzymes is not in DNA repair, but in viral replication. D4 binds the A20 protein to form a processivity factor for poxviral DNA polymerases [92,93]. Deletion of the D4 gene causes a sharp drop in the ability of vaccinia virus to replicate in cells [94][95][96]. Polycyclic aromatic compounds that disrupt the D4/A20 binding interface are considered a promising class of antiviral drugs active against poxviruses [66,97,98] (Table 2).
Finally, a natural Ung inhibitor, the Ugi protein, is produced by PBS1 and PBS2 bacteriophages [99]. Although Ugi is not considered a therapeutically promising candidate, it has recently found an unexpected use in cell technologies involving CRISPR/Cas9 genome editing. A new generation of Cas9-based tools employs base editors, in which a Cas9 targeting module is fused with a cytosine deaminase to generate C→T mutations [100,101]. Repair by UNG counteracts uracil-mediated targeted mutagenesis, so co-expression of Ugi is commonly used to increase the efficiency of this gene editing procedure [100][101][102].
Oxidative Damage Repair: Key to Antibiotic Resistance?
It has been shown that BER is necessary for the survival of certain pathogenic and opportunistic bacteria (Mycobacterium, Neisseria, Pseudomonas, Salmonella) under conditions of genotoxic stress caused either by drugs or by the body's immune response [103][104][105]. Recently, it was discovered that oxidative stress significantly contributes to the death of bacteria exposed to antibiotics of several classes. Topoisomerase inhibitors, β-lactam antibiotics, membrane-permeabilizing agents, and aminoglycosides induce the generation of hydroxyl radicals in several divergent bacterial species through an iron-dependent Fenton reaction, increasing the lethality of these drugs [106][107][108][109], whereas reducing agents such as H2S or NO protect bacteria from a wide range of antibiotics [110,111]. Possible reasons for the enhanced cell death include translation errors due to oxidative RNA damage [112], oxidation of the nucleotide pool followed by massive chromosome breakage at the sites of damaged-nucleotide incorporation [113][114][115], and direct DNA damage by reactive antibiotic molecules, their metabolites, or reactive oxygen species [116,117]. Based on these observations, the systems of antioxidant defense and oxidative damage repair in bacteria are now regarded as promising targets for sensitization towards bactericidal antibiotics, which, if successful, could be a breakthrough in the current antibiotic resistance crisis.
In bacteria, several DNA glycosylases are responsible for oxidative damage repair. In E. coli, the best-studied enzymatic system, termed the "GO system" (for Guanine Oxidation), involves three enzymes: Fpg (MutM), MutY, and MutT, which have complementary functions in countering the mutagenic effect of 8-oxoguanine (oxoG) [118][119][120]. OxoG is an abundant DNA lesion that easily forms stable oxoG(syn):A(anti) Hoogsteen-type pairs, leading to characteristic G:C→T:A transversions [121,122]. Fpg is a DNA glycosylase that excises oxoG from pairs with C; such pairs appear when G is directly oxidized in DNA or when oxodGMP is incorporated opposite C during replication [123,124]. If oxoG remains in DNA and directs dAMP misincorporation, the excision of oxoG by Fpg would lead to a G→T transversion. To safeguard the cell from this mutagenic route, Fpg does not cleave oxoG:A mispairs; these are recognized by MutY, and A is excised instead of oxoG [125]. If repair DNA polymerases then insert the correct dCMP opposite oxoG, the second round of repair is carried out by Fpg; otherwise, dAMP is inserted again, and MutY-initiated repair is reinitiated. The third enzyme of the system, MutT, hydrolyses oxodGTP and oxoGTP to monophosphates to prevent oxoG incorporation from the oxidized nucleotide pool [126,127]. E. coli also possesses a homolog of Fpg, endonuclease VIII (Nei), which is not considered part of the GO system and preferentially excises oxidized pyrimidines with little opposite-base preference, although it has some activity against oxoG in vitro and prevents G:C→T:A mutations in the absence of Fpg [128][129][130][131]. Finally, endonuclease III (Nth) also removes a wide variety of oxidized pyrimidine bases [132,133]. Although the GO system has been extensively characterized in E. coli, little is known about its functions and the properties of its components in pathogenic bacterial species. Fpg proteins from Salmonella enterica [134], Neisseria meningitidis [135], and Corynebacterium pseudotuberculosis [136] have been cloned and subjected to limited biochemical characterization, which showed essentially Fpg-like properties. Several Fpg homologs from Mycobacterium tuberculosis and Mycobacterium smegmatis were characterized and found to have divergent substrate specificities resembling either E. coli Fpg or Nei [137][138][139][140]. For MutY, limited enzyme characterization has been done for proteins from N. meningitidis [141], Helicobacter pylori [142], and C. pseudotuberculosis [143,144]. The presence of a fully functional GO system with its characteristic antimutator pattern has been confirmed in vivo for Pseudomonas aeruginosa [145,146], N. meningitidis [104,135,147], M. smegmatis [137,148,149], and Staphylococcus aureus [150]. Fpg was found to be functional in vivo in S. enterica [134], and MutY in H. pylori [142].
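To make the division of labor in the GO system concrete, the following toy simulation (a sketch with made-up repair probabilities, not measured efficiencies) follows a single oxoG:C lesion through rounds of replication and repair and estimates how often it is fixed as a G:C→T:A transversion with and without Fpg/MutY:

```python
# Toy model of the GO system's antimutator logic (illustrative parameters only).
import random

def lesion_fate(p_fpg, p_muty, p_misinsert=0.5, rounds=50):
    state = "oxoG:C"
    for _ in range(rounds):
        if state == "oxoG:C":
            if random.random() < p_fpg:   # Fpg excises oxoG opposite C
                return "repaired"
            # replication over oxoG: dAMP (mutagenic) or dCMP is inserted
            state = "oxoG:A" if random.random() < p_misinsert else "oxoG:C"
        else:  # state == "oxoG:A"
            if random.random() < p_muty:  # MutY excises the mispaired A;
                state = "oxoG:C"          # resynthesis assumed to restore C
            else:
                return "mutation"         # next replication fixes G:C -> T:A
    return "unresolved"

def mutation_freq(p_fpg, p_muty, trials=20000):
    return sum(lesion_fate(p_fpg, p_muty) == "mutation" for _ in range(trials)) / trials

print("intact GO system :", mutation_freq(0.9, 0.9))
print("fpg/mutY knockout:", mutation_freq(0.0, 0.0))
```

Even with crude parameters, such a model reproduces the qualitative antimutator behavior described above: disabling Fpg and MutY makes fixation of the transversion nearly certain.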
The available information about the relevance of the GO system for bacterial pathogenicity points to a dual role. On the one hand, this line of defense seems to assist successful primary infection. MutY deficiency has been shown to compromise mouse stomach colonization by H. pylori [142]. Successful macrophage infection by Brucella abortus requires intact fpg and mutY genes [151], and M. tuberculosis Fpg and Nei are required for lung colonization in a rhesus macaque model [152]. Hypervirulent Neisseria isolates maintain functional fpg and mutY despite having a general mutator phenotype [147]. On the other hand, the hypermutability associated with GO system inactivation sometimes provides the variation needed for selection of highly virulent or drug-resistant strains [153][154][155][156]. This underscores the importance of a thorough characterization of the GO system in a given pathogen before assessing it as a possible drug combination target.
Human homologs of Fpg and Nei (NEIL1, NEIL2, and NEIL3) differ significantly from the bacterial proteins in sequence and structure, which makes the development of small ligands that selectively target the bacterial enzymes realistic. MutY and Nth, whose human homologs are more similar, appear to be less amenable to selective targeting.
Oxidative Damage Repair: Cancer Sensitization Strategy
OGG1 is a human DNA glycosylase that initiates the repair of oxidized purine bases, mainly oxoG and formamidopyrimidines, making it a functional analog of Fpg. Nevertheless, in sequence and structure, OGG1 is completely different from Fpg (Table 1). Overexpression of OGG1 in fibroblasts, pulmonary epithelial cells, and bone marrow protects them from the toxic effects of thiotepa, carmustine, and mafosfamide, which mainly yield N7-alkylated purines that are further hydrolyzed in the cell to formamidopyrimidine derivatives [163][164][165]. However, it is unclear whether normal OGG1 levels can reduce the toxicity of these drugs in tumor cells. A similar effect of OGG1 has been described for cisplatin and oxaliplatin [166], although the nature of the damage removed in this case is not entirely clear. Among the antitumor agents that produce oxidative DNA damage, OGG1 downregulation or inhibition sensitizes cells to bleomycin [167] and ionizing radiation [168].
As with uracil incorporation and repair, the activity of OGG1 may not only safeguard cells from genomic damage but also potentiate the action of DNA-damaging agents by converting damaged bases to more cytotoxic strand breaks. For instance, OGG1 downregulation protects several cancer cell lines from β-lapachone, an NAD(P)H dehydrogenase (quinone 1)-dependent redox cycling drug that produces copious amounts of intracellular H2O2 [169]. Hence, a strategy alternative to OGG1 inhibition may consist of saturating the BER capacity with oxidative lesions. In human cells, OGG1, together with the mismatched adenine-DNA glycosylase MUTYH and the nucleoside triphosphatase MTH1 (NUDT1), forms an analog of the GO system, which prevents the mutagenic effect of oxoG [170]. Recently, the knockdown or inhibition of MTH1, which hydrolyzes oxoG triphosphates and prevents their incorporation into growing DNA and RNA chains, was shown to be toxic to tumor cells due to the accumulation of oxidized bases and DNA breaks [71,171,172]. Apparently, as in the case of PARP inhibitors, the selective toxicity is due to the suppression of the last remaining pathway for oxidative damage repair in cancer cells. Several low-molecular-weight MTH1 inhibitors were identified, including a clinically approved tyrosine kinase inhibitor, crizotinib [71,171,173] (Table 2). Interestingly, crizotinib possesses a chiral center that gives rise to (R)- and (S)-enantiomers, of which the clinically used (R)-crizotinib inhibits c-MET and ALK protein kinases, whereas the (S)-enantiomer preferentially binds to and inhibits MTH1 [71,174,175]. While it is still debated to what extent the cytotoxic activity of crizotinib and other MTH1 inhibitors depends on MTH1 and oxidative overload [172,176-178], most reports agree that oxidative damage is an important cell-killing factor, although its causes might not be limited to a direct suppression of MTH1 activity (recently reviewed in [179]). As of today, (S)-crizotinib is not used for patient treatment. However, another anti-cancer drug candidate, karonudib, was developed from previously found MTH1 inhibitors [172,180,181]. Presently, two Phase 1 clinical trials of karonudib are registered with the US National Library of Medicine Clinical Studies Database.
Inhibitors of OGG1 have been reported in the literature but have not yet reached clinical trials. Mechanism-based approaches have had only limited success: both the oxoG base and its analogs proved to be weak inhibitors [182,183], while substituted 2,6-diaminopurines performed slightly better [76]. Experimental and computational screening of small-molecule pharmacological libraries produced several hits that were expanded into inhibitors with submicromolar affinity, structurally unrelated to the OGG1 substrate [72][73][74] (Table 2). Combinatorial design based on the identified OGG1 and MTH1 inhibitors was used to obtain a compound with submicromolar affinity for both of these enzymes [75] (Table 2).
Other DNA glycosylases that repair oxidative damage, including NEIL1, NEIL2, NEIL3, MUTYH, and NTHL1, have been targeted less successfully. Purine-analog and general library screening produced several inhibitors of NEIL1, but their affinities were in the micromolar range, and the target selectivity was quite low compared with the inhibition of other glycosylases [76,184] (Table 2). Fumarylacetoacetate was reported to inhibit NEIL1 and NEIL2 and, to a lesser degree, OGG1 and NTHL1 (Table 2), but the structural basis of this effect has not been established [77]. Moreover, experimental support for these enzymes as targets for sensitization to antitumor therapy is limited, although NEIL1 confers some resistance to ionizing radiation and antifolates [185,186].
Oxidative Damage Repair: Unexpected Connections
While DNA damage and its repair are well understood in the cancer paradigm, two unexpected connections of oxidative damage with other human pathologies emerged recently. OxoG and its repair by OGG1 are suspected to play a regulatory role in the inflammatory response. Several lines of evidence support this conclusion. Ogg1-null or -depleted mice show a significantly alleviated inflammatory response to many factors, including bacterial endotoxins, Helicobacter infection, foreign protein response, ragweed pollen grain extract-induced allergy, etc. [187][188][189][190]. Interestingly, however, the inflammation associated with UVB or pulmonary hyperoxia is enhanced rather than reduced in Ogg1 −/− mice [191][192][193], suggesting that OGG1-dependent inflammation requires foreign antigens. After the excision of oxoG, the OGG1·oxoG complex can bind Ras family GTPases and facilitate the GDP-to-GTP exchange [194,195], which triggers the signaling pathway leading to the activation of NF-κB, the key pro-inflammatory transcription factor [196]. Moreover, OGG1 can bind oxidized G-rich promoters of pro-inflammatory genes in an enzymatically non-productive mode and facilitate their expression by attracting NF-κB [197][198][199]. A small-molecule OGG1 inhibitor, TH5487, was developed that competes with oxoG for binding and downregulates the inflammatory response in a mouse model [73] (Table 2). Although TH5487 has a 4-bromobenzimidazolone moiety, which is structurally similar to oxoG, the crystal structure of its complex with OGG1 [73] (Table 3) unexpectedly revealed that the oxoG-binding pocket is occupied by another moiety of TH5487, p-iodophenylacetamide, whereas 4-bromobenzimidazolone resides in a so-called exo-site, which normally binds undamaged G and serves as a transient binding site for oxoG on its way to the active site [200,201]. Thus, TH5487 functionally resembles bipartite inhibitors of UNG, simultaneously engaging two selective binding sites in the enzyme molecule. Such design may be employed to construct new potent inhibitors of OGG1 and other DNA glycosylases.
In addition, inhibition of OGG1 holds promise to prevent or delay the onset of Huntington's disease in risk groups. This hereditary condition, which belongs to the class of "trinucleotide repeat" genetic diseases, is caused by expansion of the (CAG)n repeat run in the HTT gene beyond the critical length of ~35 repeats [202]. Before becoming symptomatic, carriers of a pathogenic allele experience an explosive growth of the (CAG)n run, up to several hundred repeats in the striatum, at the early stage of the disease [203]. This expansion is triggered by the normal repair of oxoG initiated by OGG1 [204,205] and is likely caused by an imbalance of BER enzymes in this part of the brain, which leads to the accumulation of unprocessed repair intermediates [206]. In a Huntington's disease mouse model, Ogg1 knockout suppresses the repeat number growth in the striatum and delays the onset of motor dysfunction [207]. Thus, in carriers of a pathogenic HTT allele, for whom the penetrance is effectively 100%, inhibition of OGG1 may be a reasonable therapeutic strategy.
Alkylation Damage Repair: Dual Consequences
Alkylating antitumor agents produce many damaged bases, including O6-alkylguanine, which is repaired by O6-methylguanine-DNA methyltransferase (MGMT), and ring-alkylated purines, which are repaired predominantly by BER [16,208]. Unlike other DNA glycosylases, which impart resistance to DNA-damaging agents, N-methylpurine-DNA glycosylase (MPG, also known as alkyladenine-DNA glycosylase, AAG, or alkylpurine-DNA N-glycosylase, APNG) may increase the cytotoxicity of alkylating antitumor agents by removing alkylated bases from DNA to form AP sites, which are more dangerous for the cell [208][209][210][211][212][213][214]. A similar sensitization mechanism is also characteristic of the UNG, TDG, and MBD4 DNA glycosylases when they repair C5-halogenated uracil derivatives [215][216][217][218]. On the other hand, inhibition of MPG in carcinoma cells sensitizes them to alkylating agents [219], and Mpg−/− murine cells are hypersensitive to 1,3-bis(2-chloroethyl)-1-nitrosourea and mitomycin C (but not to alkylating nitrogen mustards) [220]. An integrative model of temozolomide-induced DNA damage and DNA repair by MGMT and MPG in glioblastoma predicts that inhibition of both enzymes is the most successful sensitization strategy [221]. For temozolomide-resistant forms of glioblastoma, the combination of inhibition of BER enzymes and PARP-dependent signaling is effective [212,222]. In addition to the detoxification of anticancer drug adducts, MPG and OGG1 have been reported to hydrolyze a human cytomegalovirus replication inhibitor, 2-bromo-5,6-dichloro-1-(β-d-ribofuranosyl)benzimidazole, opening the possibility of antiviral action of drug combinations including DNA glycosylase inhibitors [223]. Bacterial alkA mutants are hypersensitive to methyl methanesulfonate [224,225]; however, alkylating agents are not among clinically used antibacterial drugs, so this vulnerability is hard to exploit.
Alkylbase-removing DNA glycosylases are the least explored group in terms of specific inhibitors. N3-substituted adenine derivatives are competitive inhibitors of bacterial Tag and mammalian MPG [226][227][228][229][230]. Based on this observation, a series of structural analogs has been computationally designed to inhibit TagA from Leptospira interrogans, the causative agent of leptospirosis, although no experimental evidence was provided for their activity against the enzyme or the pathogen [231]. A natural flavonol, morin, inhibits MPG [78] (Table 2).
Assays for DNA Glycosylase Activity
Most basic research on DNA glycosylases has been, and still is, done using radioactively labeled oligonucleotides, with substrates and products analyzed by gel electrophoresis. While this assay offers the highest sensitivity, it is labor-intensive, not easily scalable, and inconvenient for screening pharmacological libraries. In recent years, a number of fluorescence-based assays to follow glycosylase activities have appeared, some of them coupled with signal amplification to increase the sensitivity.
The first attempts to utilize fluorescent labels for detecting DNA glycosylase activities in a homogeneous mode were based on changes in the signal from a fluorophore incorporated next to a lesion upon the eversion or the excision of the damaged base [232,233] (Figure 2A). Although this approach has been used for inhibitor screening [75], it is not very sensitive, and fluorophores adjacent to a lesion may even inhibit the measured activity. Molecular beacon substrates developed later consist of an oligonucleotide hairpin or a duplex bearing a fluorophore and a quencher at its termini (Figure 2B). Such substrates allow measuring DNA glycosylase activities both in vitro and in living cells [76,234-237] and have been used in glycosylase inhibitor library screening [73,76]. In this case, the glycosylase must be bifunctional to nick the substrate, or else an AP endonuclease has to be present in the assay. Several types of arrays or beads with immobilized damaged oligonucleotides have been reported, in which only the damaged strand is labeled, and the cleavage produces short DNA fragments that can be washed off [238][239][240][241][242]. While such assays are not homogeneous, they are well suited for multiplexing and parallel screening. Fluorophores can also be incorporated into double-stranded DNA in situ through base excision, AP site cleavage, and gap filling by DNA polymerase β with a labeled dNTP [243]. An interesting approach was suggested that employs a DNAzyme inactivated by a strategically placed U residue; excision by uracil-DNA glycosylase reactivates the DNAzyme, which then cleaves a fluorescent substrate [244]. However, this cannot be applied to other glycosylases that require double-stranded substrates. In a more advanced version, base excision generates a specifically folded quadruplex, which forms a fluorescent complex with quadruplex-selective ligands [245][246][247].
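As a minimal sketch of how such a fluorescence time course is reduced to an activity value (the data are synthetic and the calibration factor is an assumption, purely for illustration), one can fit the initial linear part of the signal rise:

```python
# Sketch: extracting an initial rate from a molecular-beacon time course.
import numpy as np

t = np.linspace(0, 600, 61)                  # s
rng = np.random.default_rng(1)
f = 1000 + 2.5 * t - 0.0012 * t**2 + rng.normal(0, 10, t.size)  # synthetic RFU

early = t < 120                              # assumed initial-rate window
slope, intercept = np.polyfit(t[early], f[early], 1)

rfu_per_nM = 4.0                             # hypothetical calibration factor
print(f"initial rate ~ {slope / rfu_per_nM:.3f} nM product per second")
```

The same reduction applies whether the signal comes from direct beacon cleavage or from one of the amplified schemes discussed next.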
The most sensitive fluorescent assays rely on the formation of a nick after cleavage by a DNA glycosylase (either bifunctional or coupled with an AP endonuclease), followed by signal amplification. The amplification may be exonuclease-assisted linear isothermal amplification, in which a beacon is annealed and degraded by a duplex-specific exonuclease in repeated cycles [248][249][250] (Figure 2C). Alternatively, the signal may be enhanced by rolling-circle amplification, using any suitable assay to detect the newly synthesized DNA [251][252][253][254]. Finally, nick formation can serve as a starting point for exponential isothermal amplification [255,256] or cascade hybridization [257].
Conclusions
DNA glycosylases, as the enzymes that initiate base excision repair, represent an attractive pharmacological target. Their structures reveal mechanism-based features, such as deep substrate base-binding pockets, indicating potential druggability, and several successful library screening campaigns have produced tantalizing leads that can be explored further to develop drugs for cancer and infectious diseases. Moreover, recent findings implicate DNA glycosylases not only in genome protection but also in regulatory pathways and suggest that they can be targeted in some inflammatory and neurodegenerative processes. A number of rapid and sensitive assays for screening DNA glycosylase activities have been developed in the past few years, which should facilitate the search for their inhibitors.
The most important factor that complicates the targeting of DNA glycosylases in the now well-established framework of synthetic lethality, e.g., in cancer therapy, is their dual function in cell killing. On one hand, glycosylases initiate the repair of genotoxic adducts, and their inhibition should, in theory, potentiate the action of such agents. On the other hand, there are many cases in which the main lethal lesions are not the adducts per se but intermediates of their repair, such as AP sites or DNA breaks. Such intermediates usually accumulate if the activity of downstream BER enzymes is insufficient to fully process the inflicted amount of genomic lesions. In these situations, DNA glycosylase inhibition would protect cells from genome damage rather than sensitize them to it. Optimally, DNA glycosylases should be targeted in some form of precision therapy, based on a general model of the toxicity of various adducts and specific knowledge of the adduct spectra and downstream BER capacity in the affected cells.
Outside of the cancer field, DNA glycosylase inhibition is most likely to find its soonest clinical application in antiviral therapy, since two important groups of human pathogens, poxviruses and herpesviruses, possess their own uracil-DNA glycosylases, a validated target required for replication in host cells, and several promising drug leads are available. Inhibition of OGG1 to prevent somatic trinucleotide repeat expansion in Huntington's disease also has high priority due to the extreme morbidity and mortality of the condition and the lack of other drugs, although lead compounds capable of brain delivery have not been reported so far. The inflammation-modulating action of OGG1 inhibitors, albeit attracting considerable attention, would still require much research and mechanistic insight to produce drugs comparable with more traditional anti-inflammatory agents. Even though more basic research is required to validate DNA glycosylases as targets for antibacterial combination therapy, the payoff in this area may be the largest. The prospects of bringing DNA glycosylases into the circle of drug targets ultimately depend on our understanding of their action in DNA repair and their connections with other cellular pathways.
"year": 2020,
"sha1": "29ca5561bf372d1b0654710d8b56f8d9772aeb4a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/9/3118/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "716fa69a71acf9b9b46058c246bc3ecdd2755351",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Signatures of muonic activation in the Majorana Demonstrator
Experiments searching for very rare processes such as neutrinoless double-beta decay require a detailed understanding of all sources of background. Signals from radioactive impurities present in construction and detector materials can be suppressed using a number of well-understood techniques. Background from in-situ cosmogenic interactions can be reduced by siting an experiment deep underground. However, the next generation of such experiments have unprecedented sensitivity goals of 10 28 years half-life with background rates of 10 −5 cts/(keV kg yr) in the region of interest. To achieve these goals, the remaining cosmogenic background must be well understood. In the work presented here, Majorana Demonstrator data is used to search for decay signatures of meta-stable germanium isotopes. Contributions to the region of interest in energy and time are estimated using simulations, and compared to Demonstrator data. Correlated time-delayed signals are used to identify decay signatures of isotopes produced in the germanium detectors. A good agreement between expected and measured rate is found and different simulation frameworks are used to estimate the uncertainties of the predictions. The simulation campaign is then extended to characterize the background for the LEGEND experiment, a proposed tonne-scale effort searching for neutrinoless double-beta decay in 76 Ge.
I. INTRODUCTION
Interactions with cosmogenic particles are an important source of background for rare event searches such as dark matter [1][2][3][4], neutrino oscillations [5], or neutrinoless double-beta decay (0νββ) [6][7][8]. Therefore, these experiments are usually sited in laboratories deep underground to reduce the cosmic ray flux. However, even after a reduction by orders of magnitude, the remaining flux can be a problem for the next generation of underground experiments. The first few hundred feet of rock overburden will completely absorb many types of cosmic rays, but high-energy muons can penetrate several thousand feet of rock. Muons with kinetic energies up into the TeV range can interact with rock or the experimental apparatus and create large numbers of secondary particles. These particle showers often have an electromagnetic component which includes photons, and can also have a hadronic component which includes protons or neutrons [9][10][11][12][13].
One such deep underground rare event search is the Majorana Demonstrator (MJD) [14][15][16]. This 0νββ experiment is located at the 4850-ft level of the Sanford Underground Research Facility (SURF) [17] in Lead, South Dakota. At such depths, the muon flux is reduced by orders of magnitude relative to the surface. A recent measurement found a total muon flux of (5.31 ± 0.16) × 10 −9 µ cm −2 s −1 [18]. Because of the low-background nature of these experiments, complementary measurements and simulations are necessary in order to understand the contribution of the remaining cosmogenic flux [19][20][21].
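As a quick consistency sketch (the effective horizontal area below is an assumed round number for illustration, not the experiment's geometry), this flux can be converted into the muon rate through the apparatus quoted later in the text:

```python
# Converting the measured deep-underground muon flux into a daily rate.
# The effective area is an illustrative assumption, not the detector geometry.
flux = 5.31e-9                  # muons / (cm^2 s) at the 4850-ft level
area_cm2 = 1.3e4                # ~1.3 m^2 assumed effective footprint
per_day = flux * area_cm2 * 86400
print(f"~{per_day:.1f} muons per day")   # ~6/day, the order quoted in Sec. II D
```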
In germanium, the production of neutron-induced isotopes has been studied with AmBe neutron sources [22] and neutron beams [23]. It has been shown that a number of long-lived isotopes such as 57 Co, 54 Mn, 68 Ge, 65 Zn, and 60 Co are produced [24][25][26][27]. These isotopes, as well as others, are also generated when the germanium detectors are fabricated and transported at the surface. This is a well-known problem [25,28], and special precautions were taken in the production of Majorana detector crystals [29], including the use of a database with detailed tracking of surface exposure [30]. Once underground, the flux of cosmic rays is significantly reduced, but not zero. For double-beta decay searches in 76 Ge, the isotope 68 Ge is often considered one of the major background contributors [23,31]. It is created by spallation reactions on germanium induced by muons or by fast neutrons with energies of several tens of MeV. Its 271-day half-life renders it impossible to correlate the decay signal with the incident cosmogenic shower that produced it. Its radioactive daughter 68 Ga (Q-value 2.9 MeV) has a decay energy spectrum that spans the region of interest (ROI) for 0νββ in 76 Ge (2.039 MeV). A number of other isotopes are produced in spallation reactions with muons, high-energy photons, or fast neutrons interacting with the nuclei. In addition to these, 77 Ge can be produced via neutron capture reactions, which primarily occur at lower neutron energies.

FIG. 1. Production rates of isotopes created inside the germanium crystals in simulations of cosmogenic muons. The colored scale represents isotopes with the potential to contribute background for 0νββ, while the grey-scale isotopes do not contribute to the region of interest (ROI). The germanium isotopes with odd neutron number analyzed in this paper are outlined in cyan.

Figure 1 shows the production rate of isotopes created inside the germanium crystals during simulations of cosmogenic muons interacting with the Demonstrator and the nearby rock. As shown and discussed later in detail, the isotopic composition of the germanium detectors affects the production rates of the various isotopes.
In this paper, we report on the production rates of meta-stable states of the isotopes 71m Ge, 73m Ge, 75m Ge, and 77m/77 Ge and compare them to predictions from simulations. Given the ultra-low radioactive background of the Demonstrator, we can use specific signatures to identify these isomeric decays. Therefore, we analyze the pulse shapes of signal waveforms that occur after incoming muons. Other experiments, such as Borexino [32,33], KamLAND [8], Super-Kamiokande [34,35], and SNO+ [36,37], have used the time between an initial muon interaction and a subsequent decay in a similar way. Incoming muons and their showers interact with these large experiments, and in-situ activation can be an important background. In current-generation experiments, the background from cosmogenic and neutron-induced isotopes is not significant; however, its importance grows with the size and decreasing background goals of future efforts. In the following, we describe the isotope signatures used as well as the search in the Demonstrator data. This section is followed by a comparison to rates from simulations using Geant4 and FLUKA. We conclude by discussing the estimated impact on the tonne-scale effort, the Large Enriched Germanium Experiment for Neutrinoless double-beta Decay (LEGEND) [38].
A. The Majorana Demonstrator
The Majorana Demonstrator contained fifty-eight p-type point contact (PPC) germanium detectors installed in two independent cryostats, totalling 44.1 kg of high-purity germanium. Of these, 29.7 kg are enriched up to 87% in 76 Ge [15,29]; see Table I. Each germanium crystal was assembled into a detector unit and stacked in strings of three, four, or five units. Each cryostat contained 7 strings. The mass, diameter, and height of each crystal ranged from 0.5 to 1 kg, 6 to 8 cm, and 3 to 6.5 cm, respectively. There were several shielding layers around the cryostats. From outside to inside, these were: a 12-inch thick polyethylene wall, a muon veto made of plastic scintillator, a radon exclusion box purged with liquid nitrogen boil-off, an 18-inch thick lead shield, and an innermost 4-inch thick copper shield; see Fig. 2. The innermost cryostats and the inner structural material were made of ultra-pure, underground electroformed copper, which contains extremely low levels of radioactivity from thorium and uranium [39].
Data sets used in this analysis were acquired over the course of almost 4 years, from 2015 until 2019, and correspond to the same data used in Ref. [16], with a similar blinded analysis scheme. All analysis routines were fixed and reviewed on open data before being applied to the full data set after unblinding. The total exposure for this analysis is 9.4 ± 0.2 kg yr and 26.0 ± 0.5 kg yr for the natural and enriched detectors, respectively [16]. The signals from each detector are split into two different amplification channels. The high-gain channels reach from a keV-scale threshold up to about 3 MeV and allow excellent pulse-shape analysis for low-energy physics searches as well as the double-beta decay analysis. The low-gain data span up to 10-11 MeV before saturating, allowing for searches and analyses of high-energy backgrounds. The decay patterns presented here lie in the energy range from tens of keV up to the MeV range. Detector signals include waveforms with a duration of 20 µs followed by a dead time of 62 µs. A portion of the data used multi-sampled waveforms with a duration of 38.2 µs and a dead time of 100 µs, whose extended length allowed better pulse-shape analysis in the 0νββ analysis, see Ref. [16]. The rising edge is located at a timestamp of ∼10 µs from the beginning of the waveform. Given the distinctive waveform structure and the short time-delayed coincidence, the searches for 73m Ge and 77 Ge are almost background-free. By taking advantage of the low count rate and excellent energy resolution of the Demonstrator, the production rates of 71m Ge, 75m Ge, and 77m Ge can also be determined.
B. Search for 73m Ge
One can consider both of the first two excited states in 73 Ge to be isomers, since their half-lives are longer than usual for nuclear states. The second excited state has a half-life T1/2 of about 0.5 seconds and is named 73m Ge within this work. Most β-decays from neighboring isotopes populate this state, as shown in Fig. 3. In addition, de-excitations from higher excited states within 73 Ge can feed this state, due to inelastic scattering of neutrons, photons, or other particles. The half-life of 73m Ge is long enough to apply a time-delayed coincidence method [40,41]. After an energy deposition by an initial decay or de-excitation (first event), a second event can be observed. The second event is the de-excitation of the meta-stable state at 66.7 keV. The analysis aims to identify two events in one detector within a short time window, with the second event possessing a specific energy and structure. The individual detector count rate is about 10 −4 Hz over the entire energy spectrum. The probability for a second event in a 5-second long window (10×T1/2) is less than 0.05% for any two random events. After applying the energy requirement on the second event, the search becomes quasi background-free. The de-excitation of the 66.7-keV state can be identified uniquely since it is a two-step transition, as seen in Fig. 4. First, an energy of 53.4 keV is released upon relaxation to the first excited state. This is followed by a 13.3-keV transition from the first excited state, which has a half-life of 2.95 µs. This is short enough to be observed within a single waveform and gives a distinctive pattern.
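The quoted accidental-coincidence probability follows directly from Poisson statistics; a minimal sketch using the per-detector rate and window from the text:

```python
# Probability of any random second event within the 5 s search window.
import math
rate = 1e-4      # Hz, per-detector count rate over the full spectrum
window = 5.0     # s, ten half-lives of 73mGe
p = 1.0 - math.exp(-rate * window)
print(f"P = {p:.2e}")   # ~5e-4, i.e. about 0.05%, before any energy cut
```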
The data is first scanned with a simple energy acceptance window using the Majorana standard energy calibration [16]. When the two transitions (53 and 13 keV) are well separated in time, the energy of the event is flagged in the data as the energy of the first transition, around 53 keV. If the two transitions are very close in time and look like a single waveform, the energy appears as the sum of the two steps. Potential backgrounds such as in-detector Compton scattering could also show such a very short step structure and are suppressed by the later requirements. Including the energy resolution of about 0.5 keV at these energies, this first algorithm creates a selection of candidates between 48 and 72 keV with negligible efficiency loss. For each of these second event candidates, the preceding five seconds of data are scanned for a possible first event. All events above the general analysis threshold of 5 keV are accepted, and only clearly identified noise bursts [44] are rejected. Only delayed coincidence combinations that fulfill these basic conditions are fed into the detailed analysis searching for the two-step pattern, since this part of the analysis is computationally intense.
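In outline, the two-pass selection described above reduces to the following sketch. The event container, field names, and thresholds are illustrative stand-ins, not the Demonstrator's actual analysis code; the detailed two-step pulse-shape test is applied afterwards to the surviving pairs.

```python
# Sketch of the delayed-coincidence scan for 73mGe candidates.
# `events` is assumed to be a time-ordered list for a single detector.
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # timestamp in seconds
    energy: float   # calibrated energy in keV

def find_pairs(events, e_lo=48.0, e_hi=72.0, window=5.0, threshold=5.0):
    """Return (first, second) candidate pairs for the 73mGe search."""
    pairs = []
    for i, second in enumerate(events):
        if not (e_lo <= second.energy <= e_hi):
            continue                      # not a 66.7 keV de-excitation candidate
        for first in reversed(events[:i]):
            if second.t - first.t > window:
                break                     # outside the 5 s delayed-coincidence window
            if first.energy >= threshold:
                pairs.append((first, second))
    return pairs
```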
For the 73m Ge decay search, a special pulse-shape analysis is applied to identify the short time-delayed-coincidence waveforms. As shown in Fig. 4, a clear two-peak pattern in the first derivative of the waveform can be found. The amplitude ratio of the two peaks is roughly equivalent to the energy ratio of the two transitions (53/13 ≈ 4). The delay between the two peaks is comparable to the lifetime of the first excited state (∼3 µs). Noise and slow waveforms [45] are rejected by requiring narrow peaks. To estimate the background of the analysis, we removed the requirement of a first event and repeated the analysis. Over the whole data set, three pile-up events were found within the same energy window and with the correct ratio between the two signals, but outside the delayed-coincidence time window. These can be interpreted as random coincidences with a rate of 0.18 cts/kg/yr. Combining this rate with the overall detector rate of 10 −4 Hz, we consider this background negligible for the further analysis. Since two-step waveforms of the appropriate energy and peak ratios are rare, the analysis efficiencies were estimated using simulated waveforms generated in germanium crystals by mj siggen [46]. A two-step waveform can be formed by combining one 53-keV waveform and one 13-keV waveform with a short time delay drawn in accordance with the 2.95 µs half-life. The acceptance windows of the simulation analysis parameters were set conservatively in a ±3σ range. The uncertainty of the analysis cuts was estimated with two-step waveforms generated by combining 53 keV and 13 keV waveforms from calibration data [47]. Negligible differences between simulated waveforms and combined calibration waveforms were found. These differences can be attributed to the additional baseline noise of the second waveform, as well as the existence of a small population of slow waveforms in the calibration data. While the initial energy acceptance and time search have only minimal efficiency loss, the waveform analysis is not 100% efficient because of the length of the recorded waveform and the efficiency to distinguish the two-step pattern. The final combined efficiency of the analysis chain is εtot = 79 ± 14% for normal sampling and 88 ± 14% for data sets taken with multi-sampling. Table II shows the list of 73m Ge candidates identified. Three of the candidates show a first event with energy around 11 keV. These events are likely due to 73 As electron capture decay (T1/2 = 80.3 days), cf. Fig. 3. The isotope 73 As can be cosmogenically generated on the surface before detectors arrive underground. The cool-down time between the day detectors arrive at the 4850-foot level and the start of data taking differs from detector to detector, from about a year to several years. All arsenic-type events occurred in the last batch of detectors brought underground, see Fig. 5. Detectors that were brought underground earlier show no such signature, supporting this assumption. Simulations predict that only a negligible amount of 73 As was produced in situ. Therefore, we excluded these three events from our cosmogenic analysis. The identification of these events illustrates the high sensitivity of the 73m Ge tagging process. The remaining events are used to determine the isotope production rate. The statistical uncertainty at the 1-σ confidence level is determined using the Feldman-Cousins approach [48]. The systematic effects due to the analysis procedure are on the order of 14%.
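The two-step waveform tag can be sketched as follows; the peak-height threshold, width bounds, and acceptance windows below are illustrative choices, not the tuned ±3σ cuts of the actual analysis.

```python
# Sketch of the two-step pulse-shape test on a candidate waveform.
import numpy as np
from scipy.signal import find_peaks

def is_two_step(waveform, dt_us=0.1):
    """True if the derivative shows two narrow peaks ~3 us apart with ~4:1 heights."""
    d = np.diff(np.asarray(waveform, dtype=float))
    peaks, props = find_peaks(d, height=0.05 * d.max(), width=(1, 20))
    if len(peaks) != 2:
        return False                      # narrow-width cut also rejects slow pulses
    sep_us = (peaks[1] - peaks[0]) * dt_us
    ratio = props["peak_heights"][0] / props["peak_heights"][1]
    return 0.5 < sep_us < 15.0 and 2.5 < ratio < 6.0
```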
These uncertainties include effects like dead-time windows after a trigger, as well as periods in which a selection of events was not possible, e.g., when transitioning to a calibration. The final isotope production rate is 0.38 +0.34 −0.19 and 0.05 +0.09 −0.02 cts/(kg yr) for the natural and enriched detectors, respectively. A comparison with simulation is shown in Table IV.
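As a rough consistency check relating these rates back to raw counts (a simplification that ignores the split between sampling modes and dead-time corrections):

```python
# expected candidates ~ rate x exposure x efficiency (simplified)
for label, rate, exposure in [("natural", 0.38, 9.4), ("enriched", 0.05, 26.0)]:
    eff = 0.8   # rough average of the 79-88% analysis efficiency (assumption)
    print(f"{label}: ~{rate * exposure * eff:.1f} candidate events")
# natural ~2.9, enriched ~1.0: a handful of counts, consistent with Table II
```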
C. Search for 77 Ge
The isotope 77 Ge is produced by neutron capture on 76 Ge. After the capture, the excited nucleus decays either to the ground state of 77 Ge or to the meta-stable state at 159 keV (77m Ge). The neutron capture cross-section for each has been measured [49]. Both states can decay to 77 As with distinct half-lives and gamma emissions, cf. Fig. 6. The 77m Ge decay can release up to 2.86 MeV in energy. In more than half of the decays, the final state of the β-decay is the ground state of 77 As. In these cases, the single β particle can produce a point-like energy deposition similar to that of neutrinoless double-beta decay. Its relatively short half-life of only 52.9 seconds allows for the introduction of a time-delayed coincidence cut, as suggested by Ref. [20]. The decay of 77 Ge also spans the 0νββ ROI. However, the populated higher-energy states of 77 As decay via gamma emission. This additional photon allows background suppression by analysis cuts such as multi-site event discrimination [44], multi-detector signatures, or an argon veto anti-coincidence [20]. For this study, we can use the 475 keV state of 77 As and its half-life of 114 µs to identify the creation of 77 Ge. Similar to the search for 73m Ge, the time-delayed coincidence method is used. A first event from the β-decay of 77 Ge is followed by a second event with a well-defined energy of 475 keV. Also included in the analysis is the search for the branch that includes a 211 or 264 keV transition, as shown in Fig. 6. Since the half-life of the meta-stable state in 77 As is shorter than in the 73 Ge case, the de-excitation to the ground state has a significant chance of occurring during the dead-time period of the preceding first decay event. Therefore, the detection efficiency compared to the 73m Ge search is reduced to 69% (54%) for normal (multi-sampled) waveforms. A full-energy detection efficiency of about 54% for these γ rays was estimated with the MaGe simulation code [50]. The total efficiency includes branching effects in the decay scheme and is calculated to be 31% (25%) for normal (multi-sampled) waveforms. Due to the extremely low total event rate in each detector of about 10 −4 Hz, the number of expected background events is on the order of 10 −7 for the whole data set. No candidate event was found in the current search. The Feldman-Cousins method was used to estimate the uncertainty under the assumption of zero background.

TABLE II. Candidate 73m Ge decays that pass all analysis steps. Two or more energies for the first events indicate events for which more than one detector was triggered, as could be the case when a neutron scatters. The energy of the second event is not listed, since it is restricted as described in the text. ∆T1 is the time difference between the first and second events. ∆T2 is the time difference of the two steps in the second event waveform. The time relative to the last muon identified by the muon veto is given as ∆Tµ. The ratio E1/E2 indicates the amplitude ratio of the two peaks in the first derivative of the short time-delayed coincidence waveform of the second event. "Enriched Detector" indicates whether or not the event occurred in an enriched detector. Events marked with * are considered background from surface activation due to their energy and distribution. The last columns give the date the detector went underground (DateUG), the month the event occurred in the data stream (DateEvent), and the time spent underground (∆TUG).
Since no events were found, upper limits on the event rate can be set at 0.7 and 0.3 cts/(kg yr) for the natural and enriched detectors, respectively.
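Schematically, a zero-observation result converts to a rate limit as sketched below; the confidence level and the exact efficiency and live-time weighting used in the paper may differ, so the numbers are indicative only:

```python
# rate limit = N_up / (efficiency x exposure); N_up = 2.44 is the standard
# Feldman-Cousins 90% C.L. upper limit for zero observed events, no background.
n_up = 2.44
for label, eff, exposure in [("natural", 0.31, 9.4), ("enriched", 0.31, 26.0)]:
    print(f"{label}: < {n_up / (eff * exposure):.2f} cts/(kg yr)")
# Gives the right order of magnitude; the quoted limits additionally fold in
# the per-data-set efficiencies (31% vs 25%) and live-time bookkeeping.
```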
D. Search for 71m Ge, 75m Ge, and 77m Ge

For many germanium isotopes with odd neutron number, low-lying isomeric states exist. The half-lives of these states range from a few ms for 71m Ge to almost a minute for 77m Ge. When muons and their showers pass through the Demonstrator, they can cause knock-out reactions on the stable germanium isotopes. These reactions, dominated by neutrons or photons, create excited odd-numbered germanium isotopes, which populate these isomeric states when relaxing. When decaying, each isomer has a characteristic energy release of a few hundred keV. This delayed energy release, in combination with the Demonstrator's low count rate, enables a search for signatures from these isotopes. A first event is identified as a muon using the scintillator-based muon veto system, as described in Ref. [18]. Second events are searched for in the germanium data stream after the timestamp of the muon event. These second events have a characteristic transition energy from the isomeric state to the ground state, see Table III. The energy windows of the event selection are ±5 keV around the expected energy, and the time windows are five to ten times the corresponding isomer half-lives after the incident muon. The uncertainty of the veto-germanium timing is known to be negligible relative to the times considered. Efficiencies to detect each of the corresponding signatures, based on MaGe, are given in Table III. To estimate the rate of random background for each signature, we considered the overall signal rate and the muon flux. In a germanium detector, the overall signal rate is about 0.05-0.2 events per day per detector in a 10 keV wide window at the energies of interest [15]. The muon flux at the 4850-ft level [18] corresponds to about 6 muons per day passing through the experimental apparatus. The overlap of both distributions can be used to estimate the background rate in the expected transition energy and time window (see Table III). While the time windows of 75m Ge and 77m Ge are about 5 times longer than their half-lives, the time window of 71m Ge was chosen to be 10 times the half-life. This was done to decrease the effect of statistical fluctuations that can be present in short time windows when estimating the background. The number of events expected from these two rates as a function of the time between muon and germanium events was calculated to verify this estimate.

FIG. 7. The red dotted curve shows the integrated number of events above the analysis energy threshold between a time t and the previous muon at time tµ in the Demonstrator data. The black dashed line represents the expected number of events calculated assuming that the rates of the muon system and germanium array are completely independent. For long times, the trend corresponds to random coincidence; however, for short time windows a deviation from independent random triggering is found, which illustrates a clear correlated contribution by muons in both systems.

Figure 7 shows the time of events in the Demonstrator's germanium detectors relative to the time of the last muon, compared to the distribution expected if the veto and germanium systems were completely uncorrelated. The number of events fits very well to the expected coincidental rate when the previous muon occurred more than one second before the germanium event. Additional events within one second of a muon are found and indicate a clear contribution from muon-induced prompt backgrounds. Therefore, we report rates that are consistent with upper limits and, for the 71m Ge channel, a rate above background; see Table III. These rates, combined with the rates of expected 73m Ge and 77 Ge events, are now used to assess the quality of the simulations.
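The accidental-background estimate is a simple rate product; the sketch below uses placeholder values (window length, live time) rather than the entries of Table III:

```python
# expected accidentals ~ (rate in energy window) x (summed post-muon windows)
sig_rate = 0.1 / 86400          # events/s per detector in a 10 keV window (~0.1/day)
muons_per_day = 6.0
live_days = 1000.0              # assumed live time, for illustration only
window = 0.1                    # s, assumed post-muon search window
n_windows = muons_per_day * live_days
print(f"expected random counts per detector ~ {sig_rate * window * n_windows:.1e}")
```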
III. SIMULATION OF COSMOGENIC BACKGROUND IN THE MAJORANA DEMONSTRATOR
MaGe [50] is a Geant4-based [51] framework developed by the Majorana and Gerda collaborations. The calculations were done with two different versions of Geant4, 4.9.6 and 4.10.5, using the same geometries to evaluate the consistency of the results. The first version coincided with the Demonstrator construction, while the latter was the current version at the end of the data sets analyzed for this manuscript. This selection is somewhat arbitrary, as newer versions are published more than once a year; given the time-intensive simulations, we restricted ourselves to these two versions in order to illustrate how results can change within one package, as discussed in Ref. [52]. In each case, the physics list QGSP_BIC_HP was used for the simulations. This list uses ENDF/B-VII.1 data [53,54] for nuclear reaction cross-sections and extrapolates into unmeasured energy regions or isotopes with TENDL [55], a TALYS-based evaluation [56]. In addition to the MaGe-based simulations, a simplified geometry was translated to FLUKA [57], version 2011.2x.6. Similar simulations were performed, and the predicted isotope production rates were then compared to the Geant4 output.
The muon flux at the Davis campus has been simulated [18] and was in good agreement with the measured values when the same distribution was used as the input. To study the results from each of the simulation packages, muons were generated inside a rock barrier surrounding the experimental cavity to allow the formation of showers. About four meters of rock are needed to fully develop all shower components [58]. Ten million muons were started as primaries on a surface above the Demonstrator, equivalent to almost 200 years of measurement time. Two different geometries were used in the simulation. The first geometry is the early experimental configuration, representing about a year of Demonstrator data during which only half of the poly-shield was installed. In the second geometry, all of the 12-inch thick poly-shield was installed for the final configuration of the Demonstrator. Each simulated data set was weighted according to the exposure for each configuration, as given in Ref. [16], and each data set reflects the corresponding subsets of active and inactive detectors.
Isotope production rates
In order to understand which isotopes are produced, the rate of each isotope created by muon interactions in the Demonstrator is calculated from the simulation. As shown in Fig. 1, the difference in isotopic mixtures creates a wide variety of isotopes. Isotopes that are created in spallation reactions can create daughter isotopes during the subsequent β-decays and electron captures. A natural isotope mixture in germanium tends to produce lighter isotopes than the enriched mixture. In the Demonstrator's enriched material, fewer isotopes with neutron numbers less than 42 can be found, because spallation reactions have to knock out additional nucleons to produce these. The rates for these higher-energy spallation reactions are suppressed because of the decreased flux of higher-energy projectiles, as well as smaller reaction cross-sections.

[TABLE III caption: Overview of the signatures of isomeric transitions in odd germanium isotopes. The efficiency to detect these events includes the reduction due to branching in the decay. If the number of events is consistent with the background, upper limit calculations at 1σ C.L. are given. The uncertainties for the individual rates are estimated in Table IV. The efficiency for 77mGe is reduced due to its high β-decay branching.]

A comparison of the three simulations with the experimental data can be found in Table IV. When neutron capture occurs on 76Ge, Geant4 populates the ground state of 77Ge exclusively. Using the cross-sections in Ref. [49], an expected production rate of 77mGe was calculated based on the rate of ground-state production, and the metastable isotopes were then added to the simulation manually, a method similar to Ref. [20]. For spallation reactions, isomeric states are created, so no correction was necessary. While the overall agreement is good, none of the simulation packages is able to reproduce all the experimental rates, as seen in Fig. 8. Averaging the ratios between simulations and experiment over all isotopes considered, the simulations tend to overestimate the production rates; however, this average is driven by the 73Ge ratio. Since the experimental rates have large statistical uncertainties, this trend might balance out.
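As a rough illustration of this manual correction, one can scale the simulated ground-state rate by an assumed isomeric-to-ground-state cross-section ratio; the ratio and rate below are placeholders, not the values from Ref. [49].

```python
# Sketch: add 77mGe production by hand, scaling the simulated 77Ge
# ground-state capture rate by an assumed isomeric cross-section ratio.
# Both numbers are placeholders; the analysis uses Ref. [49].

def metastable_rate(ground_state_rate, sigma_m_over_gs):
    """Expected 77mGe rate given the simulated 77Ge (g.s.) rate."""
    return ground_state_rate * sigma_m_over_gs

rate_gs = 0.30   # simulated 77Ge ground-state rate, cts/(kg yr), placeholder
ratio = 0.5      # assumed sigma(77mGe) / sigma(77Ge g.s.), placeholder
print(f"expected 77mGe rate: {metastable_rate(rate_gs, ratio):.2f} cts/(kg yr)")
```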
Distribution in time and energy
As shown in Fig. 9, the energy distribution of events that are in coincidence with the muon veto is consistent between data and simulation. For the 0νββ analysis, the number of background events in the ROI is reduced when applying the veto. The remaining events contribute about 3×10⁻⁴ cts/(keV kg yr) to the background around the Q-value in the enriched detectors. Table V summarizes the simulated event rates of the isotopes which can decay and contribute to the ROI. For this summary, we considered events with energy deposits in the 400-keV wide window around the Q-value at 2.039 MeV [15] that occur one second or later after the incident muon. Figure 10 shows that the majority of muon-induced events which contribute to the 0νββ ROI occur within this time. However, β-decaying isotopes, especially in decay chains involving multiple isotopes, can contribute at later times. Some events will contribute as background even after extended muon cuts like the one suggested in Ref. [20]. A comparison with experimental data in the ROI without any further analysis cuts indicates that simulation and experiment agree well for short time frames, as seen in Fig. 10. For longer times, when the correlation with the incident muon is not available, cosmogenic backgrounds in the ROI are subdominant. However, future experiments plan to lower the background from construction materials. This effectively reduces the dominant background sources while increasing the relative importance of the cosmogenic background. At the same time, such an experiment will be larger in size, which allows individual muons to interact with more germanium targets, so the importance of cosmogenic backgrounds will increase further.

[FIG. 8 caption (fragment): ...Table IV for natural Ge (top) and the Majorana enriched Ge (bottom). A ratio of one would indicate that the simulation is in good agreement with the experimental findings. If no counts were observed, the expected upper limit was used as the experimental rate. The grey shaded areas show the uncertainties based on the experimental rate; the error bars on the data points represent the uncertainties in the simulations.]
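For concreteness, the delayed ROI selection described above reduces to a simple cut; the event list in this sketch is hypothetical, while the window and delay values follow the text.

```python
# Sketch: select simulated events that contribute to the 0vbb ROI,
# i.e. energy within the 400 keV window around Q = 2039 keV and a
# delay of at least 1 s after the incident muon.

Q_VALUE_KEV = 2039.0
WINDOW_KEV = 400.0
MIN_DELAY_S = 1.0

def in_roi(energy_kev, delay_s):
    half = WINDOW_KEV / 2.0
    return (abs(energy_kev - Q_VALUE_KEV) <= half) and (delay_s >= MIN_DELAY_S)

# Hypothetical events: (deposited energy in keV, time after muon in s)
events = [(2040.5, 3600.0), (2500.0, 10.0), (2100.0, 0.2)]
roi_events = [e for e in events if in_roi(*e)]
print(f"{len(roi_events)} of {len(events)} events fall into the delayed ROI")
```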
[TABLE IV caption: Comparison of the detection rate from experiment, based on found candidate events in Demonstrator data, and the simulated detection rate for the different packages. The uncertainty for simulated values is given by the statistical error (68% C.L.) of the simulation plus a 20% uncertainty for the incoming muon flux, as discussed in Ref. [18]. Columns: Isotope | Dominant production mechanism | Candidates | Experimental rate (cts/(kg yr)) | Simulated rate (cts/(kg yr)).]
Uncertainty Discussion
Other sources of background from natural radioactivity are neutrons produced by fission and (α,n) processes in the rock. Reference [59] estimated the integrated number of neutrons from these sources to be about a factor of 30 higher than the number accompanying muons at the Davis Cavern at SURF. These neutrons have, as shown in Fig. 12, an energy distribution that reaches up into the MeV range. Hence, their energies are too small to contribute to the spallation processes which create the majority of the isotopes in Table V; however, neutron capture reactions are possible. As discussed in the introduction, low-background experiments like the Demonstrator consist of multiple shielding layers. Measurements and simulations [60,61] indicate that the wall neutron flux is reduced by at least three orders of magnitude by the combined 12-inch thick polyethylene layer and the 18-inch thick lead shield. Therefore, we expect the production of slow neutrons to be dominated by muons. This assumption is supported by the fact that we found no indication of prominent capture γ rays from the copper which surrounds the detectors.

As stated, the simulations have to cover a wide range of reaction cross-sections for various energies and isotopes. They can be split into three major parts: 1) cosmogenic muons, with energies from a few GeV up to the TeV range, and the creation of showers; 2) transport and interactions of the variety of particles in the accompanying shower; and 3) the decay of newly created radioactive isotopes. Several inputs can contribute to the total uncertainties of such a complex simulation framework. The uncertainty on the incoming muon rate is about 20% [18], while the uncertainty on the exposure is only about 2% [16]. For this work, no further data-cleaning cuts are applied, in order to reduce the number of additional uncertainties. As shown in Fig. 8, the same geometry and input muon distribution will result in different rates in different reaction codes. Here, a large uncertainty comes from the physics models hidden in the simulation packages. Neutron physics often plays a special role, since charged particles and photons can be shielded effectively with lead or other high-Z materials. As Table IV shows, a large change has been observed between Geant4 versions. One contributing factor is the use of evaluated data tables in the newer version, which aims to improve the predictive power of the simulation package [52]. The predicted number of events in the newer version of Geant4 is also consistent with the FLUKA physics, which supports these changes. Various simulation packages use slightly different neutron physics models. Databases for neutron cross-sections are often incomplete, or only exist for energies and materials relevant to reactors. This problem was noted previously, and comparisons between packages have been done to study neutron propagation or muon-induced neutron production [62,63].

The influence of the isotope mixture and its uncertainty on the final results was investigated as well. Given the intense CPU time needed for the as-built Demonstrator simulation, a simplified calculation was done to estimate the dominant reaction channels. From MaGe, the flux of neutrons and γ rays inside the innermost cavity was tabulated and folded with the isotopic abundances as given in Table I as well as the reaction cross-sections calculated by TALYS [55,56]. As shown in Fig. 11, neutrons are the dominant projectiles for creating the meta-stable isomers used in this study. For a natural isotope composition, neutron capture reactions dominate the production over knockout reactions like (γ,n) or (n,2n). Since the natural isotope composition is well understood, only minor uncertainties are introduced. For enriched detectors, the knockout reactions listed in Table IV dominate the production mechanisms. Hence, the lighter germanium isotopes and their large relative uncertainties contribute only on a negligible scale.

[TABLE V caption: Event rates produced by the cosmogenic isotopes for events within the 400-keV wide window around the Q-value [15] and occurring more than one second after the incident muon. No additional cuts on pulse shape are applied; see Fig. 9. One can assume a 100% systematic uncertainty in the simulations, as discussed. Columns: Isotope | Geant4.9.6 natural/enriched detectors | Geant4.10.5 natural/enriched detectors, all in 10⁻⁵ cts/(keV kg yr).]
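The simplified estimate amounts to folding a tabulated flux with an abundance-weighted cross section over energy bins. A minimal numerical sketch of such a fold is shown below; all arrays and constants are placeholders standing in for the MaGe flux and TALYS tables.

```python
import numpy as np

# Sketch: production rate as a fold of the particle flux with the
# abundance-weighted reaction cross section, summed over energy bins.
# All numbers are placeholders for the MaGe flux and TALYS tables.

energy_bins = np.array([0.1, 1.0, 10.0])      # MeV, bin centers (unused here)
flux = np.array([1e-6, 5e-7, 1e-8])           # particles/(cm^2 s MeV)
bin_width = np.array([0.5, 2.0, 20.0])        # MeV
sigma = np.array([2.0, 0.5, 0.1]) * 1e-24     # cm^2 (barns -> cm^2)
abundance = 0.88                              # e.g. 76Ge fraction, enriched
n_targets = 8e24                              # target nuclei per kg (placeholder)

# Rate per target = sum_E flux * sigma * dE; scale by abundance and targets.
rate_per_s = np.sum(flux * sigma * bin_width) * abundance * n_targets
print(f"production rate: {rate_per_s * 3.15e7:.3e} per kg per year")
```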
In the current-generation experiments, the cosmogenic backgrounds are only a small contribution, since the total background is on the order of 4.7×10⁻³ cts/(keV kg yr) for the Majorana Demonstrator [16] and 5.6×10⁻⁴ cts/(keV kg yr) for Gerda [64,65]. Due to the different shielding approach, the Gerda background contribution from cosmogenics cannot be compared directly to the Majorana Demonstrator; this will be discussed in the next section. However, in order to improve the background rate for next-generation experiments, a detailed understanding of the cosmogenic backgrounds becomes necessary [38].
IV. OUTLOOK TO A GE-BASED TONNE-SCALE 0νββ EFFORT
The results in Fig. 9 suggest that simulations are capable of qualitatively describing the cosmogenic contribution to the background budget. However, as shown in Fig. 8, the uncertainties can become a problem, and even more so when discussing the background of a tonne-scale 0νββ experiment such as the LEGEND experiment [38]. The sensitivities for next-generation efforts are strongly dependent on the background level [38,66]. If the background is "zero", the sensitivity scales linearly with the exposure; otherwise, it scales only as the square root of the exposure. For LEGEND-1000, the goal is to reduce the background to 10⁻⁵ cts/(keV kg yr). Hence, the integrated rates in Table V would be too high for the background budget of the future experiment. As shown in Fig. 10, one can increase the veto time after each muon in order to reduce the background, but this technique is limited and increases the detector dead time, especially for underground laboratories with less rock overburden and consequently higher muon flux.

[FIG. 10 caption (fragment): The red dots represent data in the same window from the Majorana Demonstrator without any analysis cuts, as shown in Ref. [16]. The dark gray area shows events that occur within one second after an incident muon, which are removed by the current muon veto in the Demonstrator. The light gray area indicates the veto cut suggested in Ref. [20] for a future large-scale germanium experiment.]

The design and the location of the tonne-scale experiment directly impact the background budget with respect to cosmogenic contributions. One major feature of the next-generation design is the use of low-Z shielding material, such as the liquid argon shield in Gerda. In addition to its active veto capability, argon as a shielding material directly affects the secondary neutron production close to the germanium crystals. Figure 12 shows that the simulated neutron flux at the 4850-ft level can change as the shielding configuration changes.
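The exposure scaling quoted above can be made explicit. In the standard counting-experiment approximation (a textbook relation, not a formula from this work), the half-life sensitivity behaves as

```latex
% Half-life sensitivity scaling for a 0nbb counting experiment:
% background-free versus background-limited regimes.
T_{1/2} \;\propto\;
\begin{cases}
\varepsilon \, M t, & \text{background-free,}\\[4pt]
\varepsilon \, \sqrt{\dfrac{M t}{b \,\Delta E}}, & \text{background-limited,}
\end{cases}
```

where ε is the signal efficiency, Mt the exposure, b the background index, and ΔE the energy resolution.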
The installation of the 30-cm thick poly-shield suppresses the low-energy portion of the neutron flux, while the high-energy portion is mostly unaffected, because most of the fast secondary neutron flux is produced inside the lead shielding. To understand the effect of a low-Z shielding material, the 18-inch thick lead shield in the Demonstrator simulations was replaced with a 4.4-meter thick liquid argon shield; this thickness results in the same suppression factor for 2.6 MeV γ rays. In the simulations, this liquid argon shield suppresses the neutron flux inside the innermost shielding. An instrumented liquid argon shield can further suppress delayed signatures, reducing the total cosmogenic contribution. As shown in Table V, 77Ge, the main contributor to the ROI, is mostly created by low-energy neutron capture, which would be suppressed by a liquid argon shield. Table VI shows the background estimation for a Demonstrator-scale experiment with different shield configurations. The 1-s muon veto can suppress the muon-induced background by roughly a factor of ten; the liquid argon shield can reduce the background further. In a tonne-scale experiment with Demonstrator-style shielding at 4850-ft depth, the current cosmogenic background rate shown in Table V represents 200% of the background budget for LEGEND-1000. However, a low-Z shielding approach, together with analysis cuts as given in Ref. [20], drops this number to the percent level. Time and spatial correlations in particular (see Ref. [68]) are very effective in reducing the effects of correlated signals from cosmogenic particles deep underground. As shown in Ref. [38], a deeper laboratory will reduce the cosmogenic background, since to first order it scales with the muon flux. However, details like shielding materials, additional neutron absorbers, detector arrangement, and analysis cuts help to reduce the contribution.
V. SUMMARY
This work presents a search for cosmogenically produced isotopes in the Majorana Demonstrator and compares the detected numbers to predictions from simulations. The number of isotopes agrees reasonably well, and the overall distributions in energy and time are in good agreement with the measured distributions. However, differences between simulation packages lead to uncertainties that are not negligible; given the complexity of the simulations, uncertainties of a factor of two or more should be considered. It has been shown that for a future Ge-based tonne-scale experiment, the design directly affects the production of isotopes and the background in the ROI. Low-Z shielding like liquid argon, in combination with analysis cuts, can have a similar impact as a deeper laboratory in reducing the effect of cosmogenic radiation.

[FIG. 12 caption (partially recovered): The total neutron flux entering ... [59]. The increase in flux after the innermost shielding layer of the Demonstrator (black dashed) is due to the production of additional neutrons by muons in lead. Different shielding approaches, e.g., no poly-shield (grey) or a low-Z approach with liquid argon (blue), can affect the flux.]

[TABLE VI caption: Cosmogenic event rate in the 400-keV wide window at the Q-value for lead and liquid argon shielding options at the 4850-ft level of SURF, without additional pulse shape analysis. For lead shielding, the two cases in Fig. 12 are shown, representing the two extremes during the Demonstrator construction: without the poly-shield at the beginning and with the 30-cm thick poly in the final configuration. ... Table I. The two channels 77Ge and 77mGe are combined for this estimate since both are produced by capture on 76Ge. Only the row fragment "[20] 0.09 0.18" was recovered from the table body.]
"year": 2021,
"sha1": "98b04dde9faa8c73654d449dddbe5b2a9e22072a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "98b04dde9faa8c73654d449dddbe5b2a9e22072a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Physics"
]
} |
S1.5d Spectrum of etiological agents of mycotic keratitis: An 11-year review

Philip Aloysius Thomas, Jayaraman Kaliamurthy
Institute of Ophthalmology, Joseph Eye Hospital, Tiruchirappalli, India
Abstract S1.5 Mycotic Keratitis, September 21, 2022, 11:00 AM - 12:30 PM Background: To assess the spectrum of etiological agents of mycotic keratitis over an 11-year period at a tertiary eye care center in southern India. Methods: A retrospective review was made of microbiological data relating to corneal scrapings performed over a period of 11 years (January 2011-December 2021) on 1200 individuals who presented with suspected microbial keratitis. Each individual underwent corneal scraping, and the scraped material was subjected to meticulous microbiological analysis that included direct microscopy (Gram stain and lactophenol cotton blue wet mount) and culture on multiple solid and liquid culture media. Results: A total of 404 fungal isolates were recovered from the corneal scrapings of the 1200 patients with suspected microbial keratitis. Of the 404 fungal isolates, Fusarium spp. (133) were the predominant isolates, followed by Aspergillus flavus (104), Curvularia spp. (24), Aspergillus fumigatus (17), Bipolaris sp. (7), Alternaria sp. (2), Colletotrichum sp. (2), Cylindrocarpon lichenicola (2), Exserohilum sp. (1), and Drechslera sp. (1); there were also 111 filamentous fungal isolates that defied identification in spite of various efforts made to induce sporulation. Of the 404 culture-proven cases of mycotic keratitis, 381 patients reported ocular trauma sustained while engaged in agricultural activity. Conclusion: Fusarium spp., followed by Aspergillus spp., were the most common organisms found in mycotic keratitis patients in this specific geographical area. Additional efforts are required to spread awareness among villagers about the dangers of not promptly treating mycotic and other forms of microbial keratitis, so that blindness and visual disability caused by corneal scarring in rural areas can be reduced.
S1.4 Fungal infections in Asia, bringing it out of the dark, September 21, 2022, 11:00 AM - 12:30 PM Background: Routine laboratory testing for cryptococcal meningitis currently consists of cryptococcal antigen (CrAg) testing in blood and cerebrospinal fluid (CSF), CSF India ink, and CSF fungal culture. Quantitative cryptococcal culture (QCC) is labor intensive and not feasible in most settings.
Objectives: We evaluated quantitative PCR (qPCR) and reverse transcriptase qPCR (RT-qPCR) assays to quantify cryptococcal load in CSF, plasma, and blood. We also investigated the dynamics of fungal DNA and RNA detection during antifungal treatment.
Methods: We developed a qPCR assay that can differentiate serotypes A, D, and B/C of Cryptococcus neoformans and C. gattii based on the amplification of the unique nuclear quorum sensing protein 1 (QSP1) gene and a multicopy 28S rRNA gene, and evaluated the assays on 205 patient samples from the AMBITION-cm trial in Botswana and Malawi (2018-2021). CSF, plasma, and whole blood samples were stored per patient and were sampled at day 0 (baseline) and days 7 and 14 for CSF, and at days 1, 3, and 7 for plasma and whole blood, after antifungal treatment initiation. A Roche LightCycler 480 and GraphPad Prism were used for data analysis.
Results: A total of 205/209 stored patient samples (85 from Botswana; 124 from Malawi) were used. For the QSP1 qPCR tested in CSF at D0, 138 (81.7%) were serotype A, 28 (16.6%) were serotype B/C, and 3 (1.8%) were a mixed infection of serotypes A and B/C. There was no amplification in 36 (17.6%) samples. There was no difference in fungal loads at D0, D7, and D14 between serotypes A and B/C with the QSP1 qPCR assay and QCC. QCC showed a good correlation with qPCR quantification for both the QSP1 qPCR (slope = 0.797, R2 = 0.73) and the 28S rRNA qPCR (slope = 0.771, R2 = 0.778) assays. The fungal load at D0 was significantly higher in patients who died at week 2 (w2) and at week 10 (w10) as compared with patients who survived past week 10 (P < .01), with no significant difference in initial fungal load between the two treatment regimens (P > .05). Detection of Cryptococcus DNA (28S rRNA qPCR) in plasma or whole blood within the first 24 h of treatment was significantly associated with early mortality at w2 and mortality at w10 (P < .01). QSP1 RT-qPCR showed that the detection of DNA was due to viable fungal cells, as the quantification of QSP1 whole nucleic acids was systematically higher (×2 to ×5) than that of DNA.
Conclusion: Quantification of C. neoformans and C. gattii load in CSF and plasma at D0 is useful in identifying patients at risk of death and may be a promising tool for monitoring treatment response in the future.
Philip Aloysius Thomas
Institute of Ophthalmology, Joseph Eye Hospital, Tiruchirappalli, India S1.5 Mycotic keratitis, September 21, 2022, 11:00 AM - 12:30 PM Mycotic keratitis (corneal infection due to a fungal etiology) is a well-recognized ophthalmological emergency warranting rapid initiation of specific antifungal therapy. However, the magnitude of the problem of mycotic keratitis in the community, especially in the Indian subcontinent and the developing world, is perhaps less apparent. A minimal annual incidence estimate of 1 051 787 cases (23.6/100 000 population [popln]) globally has recently been reported, with the highest rates being in Asia (33.9/100 000 popln; an absolute number of 939 895) and Africa (13.5/100 000; 75 196); if all culture-negative cases are assumed to be fungal, especially where the incidence of mycotic keratitis is known to be high, then the annual incidence would be about 1 480 916 cases. A fungal etiology has been found to account for a very high proportion (>45%) of microbial keratitis cases in countries in the Indian subcontinent. Countries where a fungal etiology accounts for >25% of microbial keratitis mostly tend to abut the equator. Interestingly, the proportion of microbial keratitis patients with a proven fungal etiology shows a significant negative correlation with the gross domestic product per capita. Although it is clear that the most common fungal species are Fusarium, Aspergillus, and Candida species, marked regional variations in fungal etiology have been noted. It is important to realize that the sensitivity of culture for ocular fungal pathogens can vary, depending on the pathogen as well as the competence of the testing laboratory. For some countries, multiple reports over time have been noted, with some evidence of an increasing trend in the proportion of all microbial keratitis cases being diagnosed as mycotic keratitis. Even in a single geographical location, cases of mycotic keratitis may be higher than the yearly average at certain times of the year, such as during the harvest or windy seasons, or when there is increased relative humidity. A disturbing statistic is that, in 8%-11% of patients with mycotic keratitis, the affected eye needs to be removed, representing an irreversible annual loss of 84 143-115 697 eyes. It is recognized that many people suffering from mycotic keratitis in remote rural communities never present to health care workers due to financial and other constraints. Hence, the actual number of people afflicted by mycotic keratitis, the man-days lost due to the disease and during therapy, and the reduced quality of life due to persistent disability (corneal scarring) in the Indian subcontinent and developing countries require further study.
S1.5b The burden of mycotic keratitis in West Africa
Harish Gugnani 1,2
1 Vallabhbhai Patel Chest Institute, University of Delhi, Delhi-110007, India
2 University of Nigeria, Nsukka, Nigeria
S1.5 Mycotic keratitis, September 21, 2022, 11:00 AM - 12:30 PM Background: Fungal infection of the cornea, known as mycotic keratitis, can cause permanent corneal scarring and perforation resulting in the loss of the eye. This paper reviews the prevalence and epidemiology of mycotic keratitis in different countries in West Africa to estimate its burden.
Methods: An exhaustive search of the literature was made on Google, PubMed, MEDfacts, the Cochrane Library, and Web of Knowledge using different sets of keywords, viz. mycotic keratitis, ocular fungal infection, West Africa, risk factor, prevention, etc.
Results: A study in Nigeria over a period of 4 years (1974-1977) dealt with 42 confirmed cases of mycotic keratitis, with Fusarium solani as the predominant etiological agent (14 cases), followed by Penicillium citrinum (8 cases), Aspergillus fumigatus (5 cases), and Candida spp. (3 cases). The remaining 12 cases were due to Fusarium moniliforme, Aspergillus spp., Penicillium sp., and Cladosporium spp. The predisposing factors identified were trauma from palm tree leaves, thorns, kernels, or other plant objects, mechanical tools, and frying oil. A 10-year review (2003-2012) of 152 cases of corneal ulcers at the University of Calabar Teaching Hospital, Calabar, Nigeria revealed only 2 (2.9%) cases due to Aspergillus sp.; many patients in this study were farmers. Other studies from Nigeria only mentioned the prevalence of keratitis, without any mention of fungal etiology. Of the two studies from Ghana, the one conducted in 1999 showed Fusarium spp. (52.3%) and Aspergillus spp. (15.3%) as the predominant agents; in the other, conducted in 1999-2001, these agents accounted for 42.2% and 17.4%, respectively. In another prospective study of suppurative corneal ulcers in 290 cases in Ghana (June 1999-May 2001), the etiological agents identified in the 77 (85.5%) culture-proven cases of mycotic keratitis were Fusarium spp. (46), A. flavus (9), A. fumigatus (7), A. niger (1), A. nidulans (1), and Aspergillus sp. (1). A Sierra Leonean study of cases of suspected infectious ulcerative keratitis from January 2005 to January 2006 detected a fungal etiology in 35.6% of cases and a mixed fungal and bacterial etiology in 13.7%. A study on the burden of serious fungal infections in Togo mentioned an annual incidence of 951 cases of mycotic keratitis, but no details of fungal etiology were given.
Conclusion: Investigators have estimated the annual global incidence of fungal keratitis at over 1 million cases. The cases reported from some countries represent only the tip of the true burden of mycotic keratitis in West Africa. There is a need for comprehensive surveys of mycotic keratitis (involving collaboration between ophthalmologists and microbiologists) in representative communities, in collaboration with primary health centers and hospitals in different countries. It should be possible to produce a combined antifungal-antibacterial preparation for widespread and immediate prophylactic first-aid use after corneal trauma, especially in rural areas.
S1.5c Proteomics in fungal keratitis research: a road map to personalized treatment
Lalitha Prajna
Aravind Eye Hospital, Madurai, India
S1.5 Mycotic Keratitis, September 21, 2022, 11:00 AM - 12:30 PM Research becomes significant and meaningful when it addresses a major public health problem of a region. Fungal keratitis, ulceration of the cornea due to fungal infection, is one such serious problem. This infection, which results in monocular blindness, primarily affects the agrarian population and is considered to be a silent epidemic in India.
The current treatment for fungal keratitis is the topical application of antifungal drugs such as natamycin and voriconazole. Nearly 40% of patients do not respond to this treatment and require corneal transplantation. Treatment with antifungal drugs alone is not a holistic approach, as it addresses only killing the fungus. The exaggerated inflammatory response at the site of infection may be a crucial factor in the disease outcome, yet it is not taken into account during treatment because of the lack of knowledge of the host response during fungal infection. This was the starting point of our research in fungal keratitis: to understand the corneal immune response to fungal infection.
Using mass-spectrometry-based proteomics studies on tears from keratitis patients, we identified that, in response to the fungal infection, the complement and coagulation pathways were activated along with neutrophil-mediated defense responses, notably the neutrophil extracellular traps. These pathways, and their cross-talk with each other, were primarily responsible for the exaggerated immune response at the site of infection. We selected five tear proteins that were significantly altered and validated them to serve as indicators of the inflammatory status of the ulcer in keratitis patients. Further, we developed a predictive logistic regression model that incorporates tear biomarker levels and ulcer characteristics to identify the subset of patients who are unlikely to respond to antifungal treatment. We are currently exploring the possibility of using tear-derived extracellular vesicles (EVs) or their cargo as adjuvant therapy to modulate the inflammatory response in these non-responder patients.
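Purely as an illustration of such a model (the features, data, and library choice here are hypothetical placeholders, not the published pipeline), a logistic regression over scaled biomarker and ulcer features could look as follows:

```python
# Sketch: logistic regression combining tear biomarker levels with
# ulcer characteristics to flag likely non-responders. All features
# and data below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: 5 tear protein levels + ulcer size (mm) + depth score
X = np.array([[1.2, 0.4, 2.1, 0.9, 1.5, 3.0, 1],
              [0.3, 0.1, 0.8, 0.2, 0.4, 1.2, 0],
              [1.8, 0.7, 2.9, 1.1, 2.0, 4.5, 2],
              [0.5, 0.2, 1.0, 0.3, 0.6, 1.0, 0]])
y = np.array([1, 0, 1, 0])  # 1 = unlikely to respond to antifungals

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.predict_proba(X[:1]))  # predicted probability of non-response
```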
Through our efforts using proteomics approaches, we now have five tear proteins as indicators of the inflammatory status in keratitis patients. These proteins along with the clinical features can identify the subset of patients who are unlikely to respond to antifungal treatment. Additionally, we showed that keratitis patient tear-derived extracellular vesicles are enriched with proteasomes. As proteasomes have an established role in immune modulation, EVs with proteasomes are thus promising candidates for adjuvant therapy, which we are currently exploring. Thus, our journey of over a decade of research on fungal keratitis started with the basic research to understand the host response that in turn provided the leads for translational research, which is now advancing towards personalized treatment for these patients.
An Adaptive Clustering Approach for Accident Prediction
Traffic accident prediction is a crucial task in the mobility domain. State-of-the-art accident prediction approaches are based on static and uniform grid-based geospatial aggregations, limiting their capability for fine-grained predictions. This property becomes particularly problematic in more complex regions such as city centers. In such regions, a grid cell can contain subregions with different properties; furthermore, an actual accident-prone region can be split across grid cells arbitrarily. This paper proposes Adaptive Clustering Accident Prediction (ACAP) - a novel accident prediction method based on a grid growing algorithm. ACAP applies adaptive clustering to the observed geospatial accident distribution and performs embeddings of temporal, accident-related, and regional features to increase prediction accuracy. We demonstrate the effectiveness of the proposed ACAP method using open real-world accident datasets from three cities in Germany. We demonstrate that ACAP improves the accident prediction performance for complex regions by 2-3 percent points in F1-score by adapting the geospatial aggregation to the distribution of the underlying spatio-temporal events. Our grid growing approach outperforms the clustering-based baselines by four percent points in terms of F1-score on average.
I. INTRODUCTION
Prediction of traffic accidents is an important research area in the mobility, urban safety, and city planning domains. Such prediction is particularly challenging due to the data sparsity, the complexity of the spatio-temporal event distribution, the variety of the involved influence factors, and the complexity of their relationships.
State-of-the-art accident prediction methods (e.g., [1], [2]) mainly focus on two prediction aspects, namely feature selection to identify relevant influence factors and the definition of the predictive model architecture. One crucial aspect, typically neglected by existing works, is the geospatial aggregation underlying predictive models. Whereas some urban areas, such as city centers, have a more complex structure and tend to attract more accidents, other areas are less accident-prone. Hence, unlike existing works, we include geospatial aggregation as an essential factor in our modeling. Overall, we consider the spatio-temporal accident prediction problem along three dimensions: geospatial aggregation, feature selection, and predictive model architecture.
The forecasting of spatio-temporal accidents is particularly challenging due to data sparsity. Existing works address the data sparsity by adopting coarse geospatial aggregations, such as fixed grids [3] or entire administrative districts [4], as prediction targets. However, neither predefined grid cells nor administrative districts adequately fit the spatio-temporal distribution of the observed events. Furthermore, existing works on traffic accident prediction usually consider accident datasets from US cities (e.g., [1], [2]). These cities exhibit a grid-like street structure, whereas European cities are among the least grid-like [5], such that the models developed for US cities are not directly applicable to Europe.
In this paper, we propose Adaptive Clustering Accident Prediction (ACAP), a novel approach to infer adaptive grids from the observed sparse spatio-temporal event distributions. We perform predictions on adaptive task-specific regions obtained through the proposed clustering-based grid growing method. As a predictive model, we rely on a neural network approach, combining time series forecasting in the form of Gated Recurrent Units (GRUs) with an embedding of static regional features. Through experiments on open real-world datasets, we demonstrate that the proposed method increases the prediction accuracy compared to state-of-the-art baselines based on fixed grids, and that ACAP outperforms several machine learning and neural network baselines in terms of F1-score on the accident prediction task in several German cities.
We observed that most existing works evaluate their model performance on private datasets (e.g., [3]), which makes them difficult to reproduce and to extend by other researchers. We aim to foster reproducibility, reuse, and extensibility of our work by the research community. Hence, we use only publicly available open datasets as a basis for feature extraction. For example, we collect the regional attributes, such as street types or the number of junctions in a region, from OpenStreetMap (OSM), the largest publicly available source of map data. Furthermore, we build our accident prediction model on the "German Accident Atlas", a publicly available official dataset containing traffic accident data for Germany. Moreover, we make our data processing pipeline available open-source.
In summary, our contributions are as follows: 1) We propose ACAP, a novel approach to infer adaptive grids from sparse spatio-temporal accident distributions. 2) Our proposed prediction model using ACAP as geospatial aggregation achieves state-of-the-art prediction performance on the general task of traffic accident prediction, outperforming several baselines with a performance increase of 2-3 percent points on average with respect to F1-score on three large German cities. The rest of the paper is structured as follows: First, we discuss related work in Section II. Then, in Section III we present the formal problem statement for sparse spatio-temporal event prediction. In Section IV, we present our proposed ACAP approach based on adaptive clustering with grid growing. Section V describes our experimental setup, including baselines and datasets. We present the evaluation results on open real-world datasets in Section VI. Finally, we provide a conclusion in Section VII.
II. RELATED WORK
In this section, we discuss related work on accident prediction. While existing approaches perform accident prediction on fixed grids or specific highways/streets, the proposed ACAP approach adapts to the specific regions. In the following, we discuss relevant accident prediction approaches according to the spatial aggregations they adopt.
Prediction on fixed map grids. Moosavi et al. [2] developed the DAP model for predicting the occurrence of an accident on a 5x5 grid in a 15-minute interval. They evaluated the model on sparse data, augmenting it with Points of Interest (POIs), weather, and time. Hetero-ConvLSTM [1] predicted the number of accidents on a 5x5 grid during each time slot (a day), using heterogeneous data including roads, weather, time, traffic, and satellite images. Ren et al. [6] employed an LSTM model that predicts the frequency of accidents for 1x1 grids, given the history of the past 100 hours. In another study, by Chen et al. [3], accident prediction is performed on 500m×500m grid cells with human mobility data as well as a set of 300,000 accident records from Tokyo (Japan); the authors predicted the possibility of accident occurrence on an hourly basis.
Prediction on street segments. The works in this category deal with predicting an accident or accident count on a given road or highway. Chang et al. [7] used information such as road geometry, annual average daily traffic, and weather data to predict the frequency of accidents for a highway in Taiwan using a neural network, and compared the results with Poisson and negative binomial regression. Caliendo et al. [8] embedded road attributes such as length, curvature, annual average daily traffic, sight distance, side friction coefficient, longitudinal slope, and the presence of a junction to predict the accident count on a four-lane median-divided Italian freeway. There are similar works on accident prediction on highways: an accident prediction model by Wenqi et al. [9], based on a convolutional neural network, is designed to forecast accidents on the I-15 highway in the USA, and in similar work, Yuan et al. [10] predicted the accident occurrence for each road segment in the state of Iowa for each hour. Hollenstein et al. [11] investigated the association of bicycle accident occurrence with roundabout properties at Swiss roundabouts using a logistic regression approach, and also studied various features of roundabouts responsible for bicycle accidents.
In summary, existing approaches rely on a fixed grid of arbitrary size or a pre-defined street-segment aggregation. In contrast, ACAP is a novel adaptive approach for predicting accidents in spatially closed regions, irrespective of the fixed grids or specific street segments. Furthermore, ACAP works on sparse data, publicly available and easy to collect, in contrast to the approaches that use extensive but often closed datasets for modeling and prediction.
III. PROBLEM STATEMENT
We phrase our considered problem of traffic accident prediction in the general fashion of sparse spatio-temporal event prediction. Since in this paper we are only interested in predicting traffic accidents, as a particular case of spatio-temporal events, we use events and accidents as synonyms.
Let E ⊂ R^3 be the set of spatio-temporal events, i.e., each event E ∈ E consists of latitude, longitude, and time information. We are interested in the prediction of these events for different spatial aggregations: let f : R^3 → N^2 be the aggregation function mapping into R_max cells and T_max time intervals, i.e., f(E) ⊂ {0, …, R_max} × {0, …, T_max}. We are especially interested in studying the effect of different geospatial aggregations on prediction performance.
Since time and position do not provide sufficient information for developing predictive models in this domain, we assume additional features about each spatial cell and time interval. Formally, let X_temporal ⊂ R^(R_max×T_max×d_t) and X_cells ⊂ R^(R_max×d_r) be the matrices of the d_t temporal and d_r spatial features. For example, the regional information X_cells includes the number of junctions, the street length, and the region size. Examples of region-specific temporal features X_temporal are solar elevation and solar azimuth.
Our task is to create a binary forecast based on k historic observations, i.e., to train a function Φ that outputs 1 if an event is observed in the next time period in the specific region, and 0 otherwise. We assume an imbalanced event set, where the occurrence of one class, e.g., non-accident, is much more likely than the other, e.g., accident. Furthermore, we are interested in comparing the performance over different spatial aggregations, which corresponds to exchanging the aggregation function f : R^3 → N^2 for another aggregation function f̃.
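Collecting the notation, the task can be restated compactly; the per-region history notation x_t^(r) below is introduced here for illustration and is not from the original text:

```latex
% Compact restatement of the prediction task. The per-region history
% notation x_t^{(r)} is introduced here for illustration only.
f : \mathbb{R}^{3} \to \{0,\dots,R_{\max}\}\times\{0,\dots,T_{\max}\},
\qquad
\Phi\bigl(x^{(r)}_{t-k+1},\dots,x^{(r)}_{t},\,X_{\text{cells}}^{(r)}\bigr)\in\{0,1\},
```

where Φ = 1 indicates that an event is forecast for region r in the next interval.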
IV. APPROACH
This section presents the Adaptive Clustering Accident Prediction (ACAP) approach proposed in this paper. The model architecture of ACAP is illustrated in Fig. 1. First, we propose an adaptive clustering technique to build clusters that reflect the geospatial distribution of the accidents, presented in Section IV-A. Then, our method generates temporal and geospatial feature embeddings, presented in Section IV-B. Finally, we describe the predictive model of ACAP in Section IV-C.
A. Adaptive Clustering with Grid Growing
Existing accident prediction approaches apply either a uniform geospatial aggregation using standard methods, such as geohash [12], or utilize administrative districts as a prediction target. The geospatial aggregation adopted by these approaches is often enforced by already aggregated raw data, e.g., resulting from anonymization. The resulting uniform spatial grids are relatively coarse and do not reflect the actual accident distribution. Furthermore, existing works typically utilize US datasets, such as the Large-Scale Traffic and Weather Events (LSTW) dataset and IOWADOT data, for the evaluation. In these datasets, the uniform grid structure appears meaningful, as it follows the typical layout of US cities. In contrast, the road layout of European cities does not typically follow a grid-like structure [5]. These observations motivate us to perform adaptive clustering to create geospatial aggregations that better fit the road layout and city infrastructure in the target region.
Algorithm 1 presents an overview of the adaptive clustering approach proposed in this work. This algorithm is based on our variant of grid growing [13], which learns geospatial regions based on the training data, e.g., past observed accidents. The algorithm includes two main steps: 1) grid construction and 2) grid growing. The grid construction step requires an initial geospatial grid as a basis. This grid is then aggregated iteratively to form larger regions that follow the event distribution. In the grid growing approach proposed by [13], the initial number of rows and columns is user-defined, and these parameters are not intuitive. In contrast, we construct the grid in a novel way with the help of geohash. Geohash encodes a geographic location into a string of letters and digits. Each character in the geohash defines a specific grid, e.g., "u1qcvmz82kw" stands for the Hannover city center; longer geohash values correspond to finer-grained grids with smaller cell sizes. In this work, we experiment with geohashes of length five, six, and seven, which approximately correspond to regions of 4.89km×4.89km (5x5), 1.22km×0.61km (1x1), and 153m×153m (0.1x0.1), respectively. We experimentally assess the influence of the geohash length and utilize the geohash of length seven, which corresponds to the smallest cell size, i.e., 0.1x0.1 (δ_detail), in our grid growing approach.
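To illustrate how geohash precision maps to cell size, the following dependency-free sketch implements the standard geohash encoding; the Hannover coordinates are approximate, so trailing characters may differ from the example string above.

```python
# Sketch: plain geohash encoder (no external dependency) showing how
# precision 5/6/7 yields coarser or finer cells. The coordinates are
# an approximate point in Hannover, used only as an example.

BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision):
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, even, ch, out = 0, True, 0, []
    while len(out) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        ch <<= 1
        if val >= mid:
            ch |= 1
            rng[0] = mid
        else:
            rng[1] = mid
        even = not even
        bits += 1
        if bits == 5:          # every 5 bits form one base32 character
            out.append(BASE32[ch])
            bits, ch = 0, 0
    return "".join(out)

for p in (5, 6, 7):            # ~4.9 km, ~1.2 km x 0.61 km, ~153 m cells
    print(p, geohash_encode(52.3759, 9.7320, p))
```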
The next step is the grid growing. First, we randomly select a seed, i.e., a grid cell containing an accident. The region starts growing from the current seed by searching for accidents in the neighboring cells. As an eight-neighbor search gives more accurate results than a four-neighbor search [13], we perform an eight-neighbor search to obtain nearby accidents in all adjacent grid cells. The grid growing stops when the current region does not find any accidents in the adjacent grid cells; the resulting region is then assigned a cluster. In the next step, we choose the next seed cell randomly from the accident-prone grid cells not clustered in previous iterations of the algorithm. The grid growing algorithm continues until it has assigned all accident-prone grid cells in the training set to a cluster. Based on the clusters generated by the grid growing algorithm, we can later assign locations and accidents unseen during training to their nearest clusters. To define the nearest cluster, we adopt the haversine distance and apply a distance threshold ∆, which we experimentally set to ∆ = 400 meters. Accident locations not mapped to any of the clusters because they exceed the distance threshold are mapped to a larger base grid cell of 1x1 (δ_base), and this cell is assigned to a separate geospatial cluster.
The grid growing algorithm illustrated in Fig. 1 is essential for building adaptive regions. We compare the proposed grid growing approach to fixed grids and clustering approaches in the evaluation.

[Algorithm 1 (partially recovered listing): Adaptive Clustering with Grid Growing. 1: Input: spatio-temporal events E, e.g., training set of accidents. 2: Output: spatial-aggregation function f_GG. 3: Hyperparameters: detailed grid size δ_detail, base grid δ_base, distance threshold ∆. 4: Calculate for each E ∈ E its detailed grid G_δ_detail(E). 5: Initialize clusterings C = ∅ and i = 0. 6: while an unmarked event E ∈ E exists do ...]

The advantages of adaptive clustering, and especially of the grid growing approach proposed in this work, are as follows: (i) Our geospatial aggregation adapts to the underlying distribution of accidents in the dataset; in other words, we adjust the geospatial resolution based on the events that occur in geospatial proximity. (ii) Our adaptive clustering allows us to work with sparse spatio-temporal data, unlike other baselines [2]. This property makes our approach easily applicable to large (rural) areas where the data can be extremely sparse.
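A compact sketch of the growing loop is given below; it simplifies the method by using integer grid indices instead of geohash cells and omits the later nearest-cluster assignment.

```python
# Sketch of the grid-growing step: grow regions from random seed cells
# through 8-neighbor search until no adjacent cell contains an accident.
# Cells are integer (row, col) indices standing in for 0.1x0.1 geohashes.
import random

def grid_growing(accident_cells, rng=random):
    unmarked = set(accident_cells)
    clusters = []
    while unmarked:
        seed = rng.choice(sorted(unmarked))
        region, frontier = set(), [seed]
        while frontier:
            cell = frontier.pop()
            if cell not in unmarked:   # skip non-accident or visited cells
                continue
            unmarked.discard(cell)
            region.add(cell)
            r, c = cell
            # 8-neighbor search for further accident-prone cells
            frontier.extend((r + dr, c + dc)
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                            if (dr, dc) != (0, 0))
        clusters.append(region)
    return clusters

cells = [(0, 0), (0, 1), (1, 1), (5, 5)]
print(grid_growing(cells))  # two clusters: one 3-cell region, one singleton
```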
B. Features & Embeddings
As a data pre-processing step, we compute temporal and geospatial features, such as accident and regional features, for each adaptive cluster and each grid cell. We evaluate the adaptive clustering approach against fixed grids of cell sizes 5x5 and 1x1 in Section VI.
Temporal Features. Accidents are time-dependent, so we aim to learn the correlation between accidents and temporal features. Our model includes ten temporal features: weekday/weekend, season, month, year, day of the week, hour of the day, daylight, solar position, solar azimuth, and solar elevation. All temporal features are encoded in one feature vector using the one-hot-encoding technique; the resulting feature vector has 36 dimensions, where each dimension represents a possible feature value. The degree of temporal aggregation depends, in general, on data availability. In the "German Accident Atlas" dataset used in the evaluation, temporal features are aggregated on an hourly basis due to legal restrictions.

Accident Features. The accident features include the accident type and the road conditions during the accident. Examples of accident types in the "German Accident Atlas" dataset include a car collision with another car or with a bicycle. A specific accident type can be more prominent at one location than at others, e.g., a city center sees more car-car collisions than collisions with bicycles; the accident type feature helps to identify such areas. The road-condition feature indicates whether the road was wet, slippery, or dry during an accident. The accident features are converted into one-hot-encoded vectors and averaged over the accidents in a geospatial cluster or grid cell.
Regional Features. Regional features are infrastructural attributes of a specific region, i.e., a grid cell or an adaptive cluster. Intuitively, regional features have a significant influence on accident occurrence; for example, accidents tend to occur more often near junctions or crossings. We select the following Points of Interest (POIs) as regional features: amenity count, number of crossings, number of junctions, number of railways, station frequency, stop sign count, number of traffic signals, number of turning loops, number of give-way signs, highway types, and the average maximum speed for each region. We normalize feature values to the range between 0 and 1. We extract the regional features from OSM.
Feature Embedding. Embeddings are continuous vector representations of discrete variables. Embeddings can help to reduce the dimensionality of feature vectors and to represent latent features. We construct latent representations from the one-hot-encoded and normalized feature vectors generated above as follows.
Temporal embeddings. For the temporal features, we utilize a Gated Recurrent Unit (GRU) to create temporal embeddings. A GRU is a type of Recurrent Neural Network (RNN) designed to learn sequential or temporal data.
A set of eight temporally ordered one-hot-encoded vectors from the preceding time points, each of length n, where n corresponds to the number of one-hot-encoded features, is fed to the GRU. With the temporal features listed above, n = 36. In our setting, the GRU includes two recurrent layers, each with 128 units, and outputs an embedding vector of the same length.
Embeddings of accident and regional features. For these features, a feed-forward layer of size 128 with the sigmoid activation function creates feature embeddings.
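A minimal sketch of the two embedding branches, assuming a PyTorch implementation (the layer sizes follow the text; the static feature width and everything else are illustrative):

```python
import torch
import torch.nn as nn

# Sketch: temporal embedding via a 2-layer GRU over 8 one-hot vectors
# of size 36, plus a sigmoid feed-forward embedding of size 128 for the
# static accident/regional features. n_static is a placeholder width.

class Embeddings(nn.Module):
    def __init__(self, n_temporal=36, n_static=20, hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_temporal, hidden, num_layers=2,
                          batch_first=True, dropout=0.2)
        self.static = nn.Sequential(nn.Linear(n_static, hidden), nn.Sigmoid())

    def forward(self, temporal_seq, static_feats):
        _, h = self.gru(temporal_seq)      # h: (num_layers, batch, 128)
        return h[-1], self.static(static_feats)

emb = Embeddings()
t = torch.zeros(4, 8, 36)   # batch of 4, sequence length 8, one-hot width 36
s = torch.rand(4, 20)       # hypothetical static feature width
temporal_emb, static_emb = emb(t, s)
print(temporal_emb.shape, static_emb.shape)  # both (4, 128)
```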
C. Predictive Model
The predictive model of ACAP outputs a softmax, i.e., likelihoods for the accident and non-accident classes, which we transform into binary accident labels: '1' for accident and '0' for non-accident. The model input is composed of the temporal embeddings and the embeddings of the accident and regional features of each geospatial cluster or grid cell. The input is fed forward through neural network layers of decreasing dimensionality. In particular, we use a set of fully connected layers of size 512, 256, 64, and 2, respectively. An activation function is applied in each layer to induce non-linearity in the model: the first three layers utilize ReLU, whereas softmax is applied to the last layer's output. We use batch normalization [14] after the second and third layers; its role is to re-scale and normalize the intermediate outputs. The last layer is the classification layer that predicts the binary accident labels. We optimize ACAP using categorical cross-entropy as the loss function.
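A corresponding sketch of the classification head, again assuming PyTorch; the input width (three concatenated 128-dimensional embeddings) is our assumption, not stated in the text:

```python
import torch
import torch.nn as nn

# Sketch of ACAP's predictive head: FC layers 512-256-64-2 with ReLU,
# batch normalization after the second and third layers, and a softmax
# over the two classes. Input width 3 * 128 is an assumption.

head = nn.Sequential(
    nn.Linear(3 * 128, 512), nn.ReLU(),
    nn.Linear(512, 256), nn.ReLU(), nn.BatchNorm1d(256),
    nn.Linear(256, 64), nn.ReLU(), nn.BatchNorm1d(64),
    nn.Linear(64, 2),  # class scores: accident vs. non-accident
)

x = torch.rand(4, 3 * 128)
probs = torch.softmax(head(x), dim=1)          # softmax output
loss = nn.CrossEntropyLoss()(head(x), torch.tensor([0, 1, 0, 0]))
print(probs.shape, float(loss))
```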
V. EVALUATION SETUP
In this section we describe the baselines, datasets, parameters and metrics utilized in the evaluation.
A. Accident Prediction Baselines
We utilize four baseline methods, including machine learning and deep learning baselines: Logistic Regression (LR), Gradient Boosting Classifier (GBC), Deep Neural Network (DNN), and Deep Accident Prediction (DAP) model [2] to compare the performance of our approach regarding accident prediction.
LR is widely used for classification tasks, as the model directly outputs class probabilities. GBC, an ML-based baseline with boosting characteristics, is also suitable for our classification task.
To compare our approach with deep learning models, we use DAP and DNN. DAP utilizes Long Short-Term Memory (LSTM) for temporal learning, Glove2Vec for learning accident descriptions, and embedding components for learning spatial attributes. DNN employs a set of fully connected layers of size 512, 256, 64, and 2, respectively.
B. Clustering Baselines
Geospatial aggregation can be broadly divided into two types: grid-based and clustering-based. For the grid-based aggregation, we use the 5x5 and the more detailed 1x1 geohash grids, as described in Section IV-A. The clustering approaches belong to three categories: neural network-based, density-based, and centroid-based. As a representative of the neural network-based clustering methods, we evaluated the Self-Organizing Map (SOM) [15], [16], [17]. In density-based clustering, DBSCAN [18] and its extension Hierarchical DBSCAN (HDBSCAN) have been used to cluster geospatial data [13]; DBSCAN is an unsupervised machine learning algorithm that clusters unlabeled data. As a representative of the centroid-based methods, we apply the well-known K-means algorithm [19].
C. Dataset
The accident dataset is collected by the Federal Statistical Office in Germany and is openly accessible. This dataset includes accident information for the 16 German federal states starting from 2016 and currently contains data until 2019. The dataset contains 24 accident attributes, including accident id, latitude, longitude, day of the week, hour, month, year, accident type, and road condition. Due to legal restrictions in Germany, the data is aggregated temporally on an hourly basis, and the specific date of the accident is not reported in the dataset. We filtered the dataset to obtain cities with a long observation period and a sufficient number of accidents to facilitate model training, and selected Hannover, Munich, and Nuremberg. Hannover and Nuremberg have comparable accident counts, with 7,433 and 6,121 accidents, respectively; Munich accounts for the highest number of accidents, with 14,986 accidents in the considered period.
OpenStreetMap Dataset. OSM is a publicly available geospatial database. One can easily extract and store regional features such as POIs from OSM Geofabrik. For example, around 50 percent of the accidents in Lower Saxony, Germany happened on primary, secondary, tertiary, and trunk highways. The aim is to leverage regional features to support the model in the prediction task. We fetch the regional features from the OSM dataset, e.g., the number of amenities, number of junctions, number of traffic signals, and the different highway types. We aggregate each regional feature to its 0.1x0.1 geohash and map it to the clusters and grids in our settings.
Negative Samples. Accident prediction is a binary classification task that requires generating elements of the non-accident class. Any spatio-temporal point where no accident has occurred can be considered a non-accident. However, using all time points would generate far too many non-accidents. To compare different spatio-temporal aggregations on the same dataset, we randomly select a 0.1x0.1 geohash grid cell and randomly generate a temporal and spatial point within the selected cell. Motivated by [10], we maintain a fixed accident to non-accident ratio of 1:3 across training and test data.
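A minimal sketch of this sampling scheme (the bounding boxes and time range are placeholders):

```python
import random

# Sketch: generate non-accident samples at a fixed 1:3 ratio by drawing
# random spatio-temporal points in randomly chosen 0.1x0.1 grid cells.
# Cell bounding boxes and the time range are placeholders.

def sample_negatives(accidents, grid_cells, n_hours, ratio=3, rng=random):
    negatives = []
    while len(negatives) < ratio * len(accidents):
        lat_min, lat_max, lon_min, lon_max = rng.choice(grid_cells)
        point = (rng.uniform(lat_min, lat_max),
                 rng.uniform(lon_min, lon_max),
                 rng.randrange(n_hours))      # hour index within the period
        negatives.append(point)
    return negatives

cells = [(52.37, 52.38, 9.73, 9.74), (52.38, 52.39, 9.74, 9.75)]
fake_accidents = [(52.375, 9.735, 10)]
print(len(sample_negatives(fake_accidents, cells, n_hours=24 * 30)))  # 3
```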
Training and Test Split. We split three years of data into training and test data: first 29 months, i.e., 80% of data for training, and last seven months, i.e., 20% for testing. For validation, we utilize the hold-out cross-validation method. In this method, a subset (10%) of the training data (split temporally) is reserved for validating the model performance. The early stopping technique based on the validation set is performed as a regularization step with patience as an argument. Patience represents the number of epochs before stopping once the loss starts to increase. We train each model separately for each city and perform testing on the same city.
D. Hyperparameters
In the following, we describe the hyperparameter settings of the models adopted in the evaluation.
Clustering Baselines. We initialize the hyperparameters of the clustering baselines as follows. DBSCAN takes epsilon (e) and the minimum number of points (n) as input parameters. The value of e is determined by the DMDBSCAN algorithm [20] using a nearest neighbor search. The selected e, in combination with different values of n, is used to compute silhouette scores [21], and the values of e and n with the highest silhouette score are chosen. HDBSCAN has the minimum cluster size as its only parameter, which we set to four. We apply the elbow method to determine the number of clusters in K-means (K=4). For SOM, we choose a map size of 30×30, which yields a comparable number of clusters as the 1x1 grid.
Model Hyperparameters. We find the best parameter setting for the aforementioned ML-based baseline models by using grid-search. We follow the same setting as in [2] and refer to our available code for further details about the baselines' hyperparameters.
For ACAP, the Adam optimizer with an initial learning rate of 0.01 is used to train the model. A dropout of 0.2 is used for regularization in the GRU layer, and early stopping with a patience of 15 provides further regularization. For DNN, the parameter setting is the same as in the fully connected predictive model of ACAP. All neural network-based models are trained for 60 epochs.
E. Evaluation Metric
Due to the uneven class distribution, we use the F1-score as the metric for evaluating the different models. The F1-score is the harmonic mean of precision and recall. Since we are interested in predicting the accident class, we report the F1-score of the accident class for each model. We run each model ten times and report the average F1-score.
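For reference, the F1-score combines precision P and recall R (computed here for the accident class) as their harmonic mean:

```latex
F_1 = \frac{2 \cdot P \cdot R}{P + R}, \qquad
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}
```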
VI. EVALUATION
The evaluation aims to assess the proposed accident prediction approach, analyze the effect of the proposed adaptive geospatial clustering and examine feature importance.
A. Effect of Geospatial Clustering
As the first step of the ACAP evaluation, we compare different spatial clustering methods by changing the clustering in ACAP. In other words, we change the adaptive clustering (AC) part of our approach and plug in other clustering methods. Fig. 2 shows that our grid growing approach (GG) outperforms all baselines by at least four percentage points. The best performing baselines are SOM and DBSCAN, while HDBSCAN and K-means result in the worst model performance. Overall, we observe that the proposed geospatial clustering has a significant positive effect on the observed performance. To further analyze our model and the geospatial aggregation, we evaluate ACAP and the best clustering baseline, SOM, against two uniform grids on three different cities in the next section.
B. General Performance
To extensively study ACAP performance, we evaluated ACAP using four different spatial aggregations and four other prediction methods on three German cities. As Table I shows, our ACAP approach achieves the highest F1-score in accident prediction for all spatial aggregations and all cities. With respect to spatial aggregation, our grid growing clustering and 1x1 grids achieve the best results, while especially 5x5 grids reduce the prediction quality. We observe that the aggregation of static features in large uniform grids negatively impacts the performance: in Hannover, it achieves only a one percentage point higher score than K-means, while having 31 regions instead of four. Overall, ACAP increases the F1-score by 2-3 percentage points over the best performing baseline on average.
C. Performance in the City Centers
To further analyze the proposed grid growing algorithm in urban regions, we evaluate the performance of our approach starting from the city center of Hannover out to the larger Hannover region. For simplicity, we select different radii around the city center of Hannover and compare the performance of grid growing and 1x1 grids. As Fig. 3 illustrates, ACAP with grid growing outperforms the uniform grids in the inner city center by 2 percentage points.

D. Feature Importance

As the final part of our evaluation, we study the importance of our three feature groups - regional, temporal, and accident features - for ACAP's performance. Fig. 4 shows the resulting accident F1-score when the model only uses one feature category for the prediction. We observe the high relevance of regional and temporal features, which achieve 91% and 63%, respectively, of the performance of the model that relies on all features.
VII. CONCLUSION

In this paper, we proposed ACAP - an approach that relies on novel adaptive clustering and various temporal and regional features to predict traffic accidents. Overall, we achieved a 2-3 percentage point increase in F1-score over the best-performing baseline on average. Our proposed grid growing algorithm, which flexibly adapts the regions to the observed geospatial accident distribution, increases the performance by four percentage points over the clustering-based baselines. We observed that our grid growing approach improves the prediction performance by two percentage points in the city centers. Furthermore, ACAP is based on an open data pipeline, which comes with our publicly available implementation, making the proposed approach reproducible and reusable. In future work, we plan to investigate the impact of user-centric features, such as driver behavior, on accident prediction. | 2021-08-30T01:15:26.730Z | 2021-08-27T00:00:00.000 | {
"year": 2021,
"sha1": "8eeb25223be73c533a8dcfb00e5abfefe5337dc2",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2108.12308",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8eeb25223be73c533a8dcfb00e5abfefe5337dc2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
219558991 | pes2o/s2orc | v3-fos-license | Adherence to Personal Health Devices: A Case Study in Diabetes Management
Personal health devices can enable continuous monitoring of health parameters. However, the benefit of these devices is often directly related to the frequency of use. Therefore, adherence to personal health devices is critical. This paper takes a data mining approach to study continuous glucose monitor use in diabetes management. We evaluate two independent datasets from a total of 44 subjects for 60-270 days. Our results show that: 1) missed target goals (i.e. suboptimal outcomes) are a factor associated with wearing behavior of personal health devices, and 2) longer durations of non-adherence, identified through missing data or data gaps, are significantly associated with poorer outcomes. More specifically, we found that up to 33% of data gaps occurred when users were in abnormal blood glucose categories. The longest data gaps occurred in the most severe (i.e. very low / very high) glucose categories. Additionally, subjects with poorly-controlled diabetes had longer average data gap durations than subjects with well-controlled diabetes. This work contributes to the literature on the design of context-aware systems that can leverage data-driven approaches to understand factors that influence non-wearing behavior. The results can also support targeted interventions to improve health outcomes.
INTRODUCTION
Personal health devices (PHD), often in the form of mobile and wearable systems, are particularly useful for pervasive monitoring of health status and vital signs [2,5]. These technologies provide unique opportunities for early diagnosis of diseases, management of chronic conditions, and prompt response to emergency situations [3]. PHDs have been employed for monitoring of many conditions such as heart disease [27], Parkinson's disease [34], and diabetes [10]. Despite the potential advantages of these technologies, the benefit is often proportional to the frequency of use [22,36]. For example, the American Diabetes Association (ADA) states that frequency of PHD use, specifically continuous glucose monitors, is the "greatest predictor" for lowering hemoglobin A1C - a primary clinical outcome for diabetes management [1]. Therefore, the notion of adherence to PHDs is critical. A person who uses these devices as intended often achieves better outcomes. Conversely, a person who does not use these devices as intended often achieves suboptimal outcomes. However, it is important to note that wearable PHDs are facilitators, and not drivers, of health behavior change [29].
There are several definitions of adherence [40]. However, in this paper, we adopt the definition that adherence means complying with a recommended regimen to achieve the best outcome. A recommended regimen can be in the form of guidelines, such as taking 10,000 steps per day, or prescriptive, such as monitoring blood glucose before and after meals. Given the rise of commercial wearable devices in today's society, recent work has focused on adherence to non-prescription PHDs such as physical activity trackers [11,16,40]. However, adherence to prescription PHDs, such as inhalers for asthma control [22] or continuous glucose monitors used in diabetes management [15], is arguably more important. In the case of asthma or diabetes, there can be an immediate risk or an undesired health event associated with non-adherence.
This paper focuses on a case study of adherence to PHDs in diabetes care for two key reasons. Firstly, diabetes is the 7th leading cause of death, and it affects up to 9.4% of people in the U.S. [14]. This is a significant fraction of the population. Secondly, and equally important, PHDs in diabetes management are relatively advanced, as there exist wearable devices for continuous monitoring of the most relevant biomarker (i.e. blood glucose) [10,36]. Similar devices for management of other chronic conditions (e.g. heart disease, mental illness, and obesity) are lagging. However, extensive effort is being committed to develop wearable alternatives for continuous 24-hour monitoring of relevant biological and behavioral markers [5,28,31]. We envision that findings from this study on diabetes can inform PHD data analysis in other domains.
A revolutionary innovation in diabetes care was the development of the continuous glucose monitor (CGM). As shown in Figure 1, it is a minimally-invasive wearable device that enables real-time monitoring of blood glucose (BG) levels by sampling concentrations in the interstitial fluid [10].

Figure 1: Personal health devices for diabetes care: A continuous glucose monitor (left), blood glucose display and insulin pump (right) [8].

In comparison to intermittent self-monitoring using glucose meters, CGMs enable users to dynamically adapt management strategies such as food intake, exercise, and medication use to real-time glucose trends. Proper use of CGMs has been shown to reduce risk factors of diabetes such as severe low blood glucose and micro-/macro-vascular complications [9,10,[35][36][37]. However, as is the case with any wearable PHD, people do not always use them as recommended [15,35,37].
The objective of this paper is to assess factors that affect adherence (i.e. wearing or use behavior) to personal health devices. More specifically, we seek to investigate whether and to what extent achieving target glycemic goals affects wearing behavior of continuous glucose monitors used in diabetes management. Based on data from a larger project, we evaluated 60-270 days of CGM data from 44 subjects with diabetes and found that:

(1) Performance toward the target goal and age are two factors that influence adherence to PHDs.
(2) Longer data gaps occurred in suboptimal BG categories, and the longest data gaps occurred in the most severe (i.e. very low / very high) BG categories.
(3) Subjects with poorly-controlled diabetes had, on average, longer data gap durations, indicative of worse adherence, than subjects with well-controlled diabetes.
(4) Older subjects (age: > 40 yrs) had significantly worse adherence to PHDs, evident through longer data gap durations, compared to younger subjects (age: 24 - 40 yrs).
(5) PHD adherence varied across individuals and was shown to be subject-dependent.
A key recommendation from this work is the development of context-aware PHDs that implement data-driven adherence analysis in embedded algorithms to improve wearing behavior, guide interventions, and positively affect health outcomes. In the case of CGM use in diabetes management, non-wearing behavior influenced by suboptimal BG can be identified based on the BG category users were in prior to the start of data gaps (or missing data events). Adherence analysis of PHDs is important in many health applications [17]. Therefore, we expect results from this work to inform research in other domains. However, a potential limitation of this work is the assumption that data gaps or missing data are directly indicative of non-adherence to PHDs, specifically CGMs in this study.
RELATED WORK
This section reviews relevant literature on personal health data and interpretation with a focus on non-prescription and prescription PHDs.
Adherence to Non-prescription PHDs
Adherence to PHDs has more commonly been studied for consumer wearable systems such as physical activity trackers and smartwatches [11,12,16,20,24,40,43]. Jeong et al. evaluated smartwatch use amongst 50 college students to understand factors that affect wearing behavior [16]. They found that participants wore their smartwatches for an average of 10.9 and 8.4 hours/day on weekdays and weekends, respectively. Users of such wearable devices were classified into three categories, namely, work-hour wearers, day-time wearers, and all-day wearers. However, only a small percentage of users (about 10%) are all-day wearers; most users tend to take off their device before bed-time [16,21]. Tang and Kay [39] studied adherence in long-term FitBit users. They showed that users benefited from a calendar-view display of daily and hourly adherence in association with the adherence goal. As expected, users cannot achieve the optimal benefit from PHDs without wearing the device. A large-scale population study by Doherty et al. [11] found that age and time of day are key variables associated with compliance to physical activity trackers. Additionally, several studies have shown that there is high abandonment of consumer wearable devices after about 2 months [7,19,38]. Some reasons for abandonment include devices not fitting with users' conceptions of themselves, discomfort with information revealed, and the collected data not being perceived as helpful for continued use [12,19]. These findings are applicable to leisurely-used, non-prescription PHDs; however, they do not exactly translate to prescription PHDs needed for management of a health condition.
Adherence to Prescription PHDs
In a review on adherence to inhaler devices, non-adherence was found to be influenced by patient knowledge/education, convenience of the device, age, adverse effects, and associated costs [22]. Likewise, some factors that have been identified which limit adherence to CGM devices used in diabetes management include cost, sensor discomfort, device inaccuracy, and general usability issues [10,30,36,37]. A 6-month clinical trial found that the mean CGM adherence in patients with type 1 diabetes differed across age groups, with the highest adherence found in adults (ages: > 18 years) and the lowest adherence found in adolescents (ages: 12 - 18 years) [15]. Other studies have found that psychosocial factors such as coping skills, body image, and support from loved ones are associated with the use of CGMs [35,37]. The aforementioned studies highlight demographic, usability, cost, and psychosocial factors that influence accumulative adherence; however, little effort has focused on understanding contextual factors that affect day-to-day adherence. The recent papers by Raj et al. [32,33] highlight the importance of evaluating clinical data from PHDs in context. More specifically, they show that management of chronic conditions such as diabetes varies in different contextual settings influenced by time, location, people, and emotional state. Unlike the aforementioned work, this paper takes a quantitative, data-driven approach to investigate whether and to what extent management outcomes influence adherence to PHDs. This insight can inform the design of context-aware algorithms that include adherence analysis to identify subject-specific factors associated with non-wearing behavior, provide targeted interventions, and improve outcomes.
BACKGROUND
Diabetes is characterized by impaired glucose metabolism. Therefore, a person with diabetes should be constantly aware of the many factors that can affect their body's glucose levels, including food, activity, medication, environment, and behaviors in daily living [4]. The primary management goal is to minimize the occurrence of hypoglycemic (i.e. low BG) and hyperglycemic (i.e. high BG) events [10,30,36]. Based on clinical research [9], there are five important BG categories, namely:

(1) Very Low: Periods of BG readings < 54 mg/dL. This is considered a clinically-significant hypoglycemic event that may require immediate action.
(2) Low: Periods of BG readings between 54 - 70 mg/dL. It is recommended to set a CGM hypoglycemia alert for this category to reduce the risk of a more severe event.
(3) Normal: Periods of BG readings between 70 - 180 mg/dL. This is considered the target range, and the goal is to maximize time spent in this range.
(4) High: Periods of BG readings between 180 - 250 mg/dL. It is recommended to set a CGM hyperglycemia alert for this category to reduce the risk of a more severe event.
(5) Very High: Periods of BG readings > 250 mg/dL. This is considered a clinically-significant hyperglycemic event that may require immediate action.

In this work, we use the above categorization of BG readings to evaluate adherence and wearing behavior of CGMs amongst persons with diabetes. It is important to note that CGMs are not perfect and can have inaccuracies in the range of +/-10% [10,36]. Additionally, the majority of these devices need to be calibrated using the conventional finger-prick method and a blood glucose meter [9]. Nonetheless, CGMs are the gold-standard PHD for real-time monitoring of BG in diabetes [42], and therefore they were used in this study.
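As a minimal programmatic rendering of these categories, the sketch below maps a reading to its category. The function name is ours, and the assignment of readings that fall exactly on a boundary is an assumption, since the paper does not state boundary handling.

```python
def bg_category(glucose_mg_dl):
    """Map a blood glucose reading (mg/dL) to its clinical category [9].

    Readings exactly on a boundary are assigned as coded below; the
    paper does not specify this choice.
    """
    if glucose_mg_dl < 54:
        return "very_low"
    if glucose_mg_dl < 70:
        return "low"
    if glucose_mg_dl <= 180:
        return "normal"
    if glucose_mg_dl <= 250:
        return "high"
    return "very_high"
```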
DATA DESCRIPTION AND PRE-PROCESSING
All the data used in this study was contributed to the research project by members of online diabetes communities [26,41], primarily patients with Type 1 Diabetes. Table 1 provides an overview of the two unique CGM datasets analyzed in this work. Dataset-1 includes 60 days of recordings from 10 subjects with diabetes, while dataset-2 includes 100-270 days of recordings from 34 subjects with diabetes. There was no overlap between subjects across both datasets. As shown in Table 1, there was a fair split of well-controlled (52%) vs. poorly-controlled (48%) subjects with diabetes based on the ADA's recommendation to maintain hemoglobin A1C < 7% (equivalent to an average BG < 154 mg/dL) [1,25]. As part of the data-cleaning step, we removed duplicate, incomplete, and invalid samples. A valid data sample is one that includes a date, timestamp, and glucose reading in the range of 40-400 mg/dL. The data-cleaning step reduced dataset-1 and dataset-2 by 4.28% and 18.89%, yielding 152,477 and 1,513,398 samples, respectively. Based on today's technology, CGMs record a glucose value approximately every 5 minutes, with the highest sampling rate being one sample every 1 minute upon the user's request [10,13]. Figure 3 shows a probability density function of the sampling period and confirms that approximately 99% of our dataset was recorded every 5 minutes, with less than 1% sampled every 1-4 minutes. Given that a CGM is a wearable PHD, the user decides if and when to wear it. Therefore, missing data is not uncommon. The rest of our analysis investigates CGM adherence and influential factors that are explainable from the dataset.
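A minimal pandas sketch of this cleaning step, assuming a hypothetical frame with 'date', 'time', and 'glucose' columns (the paper does not describe its exact schema):

```python
import pandas as pd

def clean_cgm(df):
    """Drop duplicate, incomplete, and invalid CGM samples."""
    df = df.dropna(subset=["date", "time", "glucose"])   # incomplete rows
    df = df.drop_duplicates(subset=["date", "time"])     # duplicate rows
    df = df[df["glucose"].between(40, 400)]              # valid range, mg/dL
    return df.sort_values(["date", "time"]).reset_index(drop=True)
```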
ANALYSIS
Our focus is to understand whether and to what extent management outcomes (i.e. achieving target goals) affect wearing behavior of CGMs. Toward this goal, we:

(1) Investigate CGM wear time and explore sample distribution across the five key BG categories discussed in the "Background" section.
(2) Characterize periods of missing data (known as data gaps in this work) by duration and distribution in the relevant BG categories.
(3) Perform statistical tests (i.e. One-way ANOVA and Two-Sample T-tests) to evaluate the significance of data gap durations in different BG categories.
(4) Investigate the duration of data gaps in normal vs. abnormal BG categories on subject- and group-levels, using subgroups based on management- and age-criteria.
CGM Wear Time
As defined in prior work [24,40], wear time is a count of the number of hours in a day that a PHD was worn. In this study, missing data was used as a proxy for calculating wear time of CGMs, given that a BG sample is recorded whenever the device is worn and turned on for use. Figure 4a presents an overview of wear time as determined by the presence of missing data in both datasets. We observe that the majority of the time, users wore their CGM device for greater than 20 hrs/day. The average wear time was 21.59 (± 2.69) hrs/day and 22.16 (± 3.63) hrs/day for dataset-1 and dataset-2, respectively. This is indicative of a generally higher adherence to prescription PHDs compared to non-prescription PHDs such as physical activity trackers, which have an average wear time of 10 hrs/day [16,21,24]. However, as shown in Figure 4a, there are several cases in which a CGM user's wear time in a given day is low (e.g. less than 15 hrs/day, which is below the 25-th percentile mark in both datasets). We tailored our analysis toward understanding such cases and potential associations with the user's BG readings (i.e. management outcomes) prior to the start of a data gap. Contiguous streams of missing data are used as a proxy for non-adherence to CGMs.
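The missing-data proxy for wear time can be sketched as follows: each valid sample accounts for one 5-minute sampling interval of wear (the function name is illustrative).

```python
import pandas as pd

def daily_wear_time(timestamps, sampling_minutes=5):
    """Estimate hours/day the CGM was worn from recorded sample counts."""
    ts = pd.Series(pd.to_datetime(timestamps))
    samples_per_day = ts.dt.date.value_counts().sort_index()
    return samples_per_day * sampling_minutes / 60.0  # hours worn per day
```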
Sample Distribution in BG Categories
Figure 4b presents an aggregate of CGM sample distribution across the five key BG categories for all subjects in this study. The highest percentages of BG readings, 74.4% in dataset-1 and 67.7% in dataset-2, are in the normal (or target) range. This is representative of positive management outcomes (i.e. subjects are meeting their goals in diabetes care). However, ≈ 27% of the data samples are in the high BG range, with 15.9% in dataset-1 and 19.3% in dataset-2 in the high range (> 180 mg/dL), and 5.8% in dataset-1 and 7.4% in dataset-2 in the very high (> 250 mg/dL). Conversely, ≈ 5% of the data samples are in the low BG range, with about 2.6% in dataset-1 and 4.2% in dataset-2 in the low range (< 70 mg/dL), and 1.3% of both datasets in the very low (< 54 mg/dL). As described, low and high BG are representative of suboptimal management outcomes (i.e. subjects are not meeting their goal in diabetes care). Given that low BG is more dangerous in the near-term than high BG value [9], the data distribution in these categories shows that clinically-significant categories (very low and very high) occur less often. Figure 5 shows a representative week of CGM data from one subject and highlights key concepts used in the remainder of this paper including data gaps and associations with an increase or decrease in BG. These concepts are further explained below:
Data Gaps

• A data gap (δ) is a period in which there is no BG reading recorded on the continuous glucose monitor. This represents periods of contiguous missing data. In the ideal scenario, users should wear their prescription PHDs throughout the day (including at night-time) and the device will record BG data continuously at a preset sampling rate of 5 minutes. However, the sensor can malfunction or users may take the device off for different reasons, which can lead to missing data, i.e., a gap in the continuous recording. Informed by prior work on missing data and interpolation of CGM samples [13], a data gap is defined as:

2 · mode(T) < δ < 24 hours

where δ is the duration in minutes between adjacent data samples, T is the set of sampling periods in a day, and mode(.) is the function used to find the number that occurs most often in a set of numbers. Therefore, a data gap is identified when there is missing data greater than twice the sampling period of 5 minutes (i.e. > 10 minutes) and within 24 hours; a minimal implementation of this definition is sketched after this list.

• An increase in BG describes the scenario where a user's last BG reading before a data gap is lower than the BG reading after the gap. Based on the BG reading right before a string of missing data, we categorize data gap events into the five key categories discussed in the Background section. An increase in BG readings is most commonly influenced by food intake and may occur when a user is trending to or in a low BG category [9,10]. In our analysis, we investigate whether data gaps occur immediately following low or very low BG readings. We also evaluate the length of data gaps in each BG category. This analysis aims to understand the influence of low BG categories (i.e. suboptimal management) on non-adherence to CGM use in diabetes care. We seek to answer the question: are users more likely to take off their CGM during periods of low BG and return to wearing their device when BG readings have increased (potentially back to the normal range)?

• A decrease in BG describes the scenario where a user's last BG reading before a data gap is higher than the BG reading after the gap. Based on the BG reading right before a string of missing data, we categorize data gap events into the five key categories discussed in the Background section. A decrease in BG readings is most commonly influenced by insulin use and may occur when a user is trending to or in a high BG category [9,10]. In our analysis, we investigate whether data gaps occur immediately following high or very high BG readings. Likewise, we evaluate the length of data gaps in each BG category. This analysis aims to understand the influence of high BG categories (also suboptimal) on non-adherence to CGM use in diabetes care. We seek to answer the question: are users more likely to take off their CGM during periods of high BG and return to wearing the device when the BG readings have decreased (potentially back to the normal range)?
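A minimal sketch of this gap definition, assuming only a list of sample timestamps as input (names are illustrative):

```python
import numpy as np
import pandas as pd

def find_data_gaps(timestamps):
    """Return (last sample before gap, gap duration in minutes) pairs for
    intervals delta with 2 * mode(T) < delta < 24 hours."""
    ts = pd.Series(pd.to_datetime(timestamps)).sort_values().reset_index(drop=True)
    delta = ts.diff().dt.total_seconds() / 60.0     # minutes between samples
    mode_t = delta.round().mode().iloc[0]           # typical period, ~5 minutes
    is_gap = (delta > 2 * mode_t) & (delta < 24 * 60)
    return [(ts[i - 1], delta[i]) for i in np.flatnonzero(is_gap.to_numpy())]
```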
RESULTS
In this section, we evaluate wearing behavior of CGM devices in daily living, with a focus on: 1) non-adherence to CGMs, identified through data gaps (or missing data events), and 2) factors associated with non-adherence. It is important to note that, unlike non-prescription PHDs such as FitBits, CGMs are prescription PHDs that should be worn throughout the day (including at night-time) to achieve optimal diabetes management.
Distribution of Data Gaps in BG Categories
For every data gap present, we evaluated the distribution of the last recorded sample in each of the five key BG categories. Figure 6 (top plot) shows that ≈ 33% of data gaps occurred when users were in abnormal BG categories (i.e. not achieving management goals). Furthermore, we investigated subcategories of increase (i.e. positive difference) and decrease (i.e. negative difference) in BG readings before and after the data gaps. This analysis aims to understand the distribution of data gaps for which the last recorded sample is very high or high and after the gap the first recorded sample is lower (i.e. a decrease in BG) -see segment B in Figure 5 for example. This could represent a scenario in which the user has an extreme reading, takes off their PHD, remedies the situation by taking medication, then returns to wearing the device after a while. We observe that data gaps associated with a decrease in BG reading have higher percentage of cases that start with high and very high (around 32% in dataset-2 and 34% in dataset-1 - Figure 6 bottom right) compared to data gaps associated with an increase in BG reading (around 23% in dataset-2 and 25% in dataset-1 - Figure 6 bottom left). Similarly, for data gaps associated with an increase in BG reading, it is important to investigate the cases where the last recorded sample was a low or very low BG reading. We observe that data gaps with an increase in BG readings have higher percentages that start with low and very low (around 8% in dataset-2 and 5% in dataset-1 - Figure 6 bottom left) compared to data gaps associated with a decrease in BG value (around 4% in dataset-2 and 2% in dataset-1 - Figure 6 bottom right). The above analysis shows that there was a higher percentage of data gaps in the low / very low categories for which the BG value increased immediately following the data gap. Similarly, there was a higher percentage of data gaps in the high / very high categories for which the BG value decreased immediately following the data gap. This is suggestive of scenarios in which users took off their prescription PHD when not achieving their goals and returned to wearing the PHD when their BG started trending toward the target goal. A key observation is that the longest data gaps occurred in the most severe BG categories; equivalent to when users were farthest away from their target goal. More specifically, the very low BG category has the longest data gap associated with an increase in BG after the gap - Figure 7a. For this case, the duration of missing data in the very low BG category is 1.5 times greater than the duration when users are in the normal category. Conversely, the very high BG category has the longest data gap associated with a decrease in BG value after the gap - Figure 7b. Similarly, the duration of missing data when users were in the very high categories is up to 1.5 times the duration when users were in the normal category. This result is suggestive of scenarios in which CGM users take off their prescription PHD when they are in suboptimal BG categories (i.e. not achieving their management goals). Additionally, users tend not to wear the device for longer periods when the take-off started in a more severe or extreme BG category. It is also important to note that there is a similar trend between data gap duration and BG category across both independent datasets. This supports that the results observed are grounded and not biased to one specific dataset.
Statistical Significance of Data Gap Durations.
For significance testing, we use dataset-2 because it is larger and has more samples, as needed for a One-way ANOVA and Two-Sample T-test per the APA guidelines [18]. We first perform a One-way ANOVA test for the null hypothesis: "the average duration of data gaps in different BG categories is the same". Table 2 shows the results, which support rejecting the null hypothesis with p-value < 0.01. Therefore, the average duration of data gaps starting in different BG categories is not the same. Note that in this table, "SS", "df", "MS", and "F" represent "Sum of Squares", "degrees of freedom", "Mean Square", and "F-statistic", respectively. p-values are presented using the significance levels p < .001 (marked as "***"), p < .01 (marked as "**"), p < .05 (marked as "*"), and p > .05 (marked as ".").
Next, to compare the average duration of data gaps in different BG categories, we perform a Two-Sample T-test with H0: µi = µj, where µi and µj represent the average data gap durations in two separate BG categories. For our comparison, we use the average gap duration of the normal BG category as a reference and compare data gap durations in the extreme BG categories, i.e., very low and very high. We also compare the BG categories low vs. very low and high vs. very high to test for potential differences. Table 3 shows the Two-Sample T-test results. We observe that the average gap duration in the extreme BG categories is significantly different from the duration in the normal BG category. More specifically, p-value = .0066 for very low vs. normal, and p-value = .0180 for very high vs. normal, respectively. Furthermore, we observe that the average gap duration starting in the very low BG category is significantly different from the average gap duration starting in the low BG category (p-value = .0045). Therefore, a very low BG category has a greater negative impact on users' adherence to the device compared to the low BG category. On the other hand, the comparison of average gap durations in the very high and high BG categories does not show a significant difference. This means high and very high BG categories could have a similar (not different) negative impact on users' adherence to the device.
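These tests map directly onto SciPy. The gap durations below are placeholder values purely to make the sketch runnable, not values from the study, and the variable names are ours.

```python
from scipy import stats

# Placeholder gap durations (minutes), keyed by the BG category of the
# last reading before each gap; real values would come from the datasets.
gaps_by_cat = {
    "very_low": [45, 90, 120], "low": [20, 35, 60],
    "normal": [15, 20, 25, 30], "high": [25, 40, 55],
    "very_high": [50, 80, 110],
}

# One-way ANOVA across all five categories (analogous to Table 2).
f_stat, p_anova = stats.f_oneway(*gaps_by_cat.values())

# Pairwise comparison against the normal category (analogous to Table 3).
t_stat, p_t = stats.ttest_ind(gaps_by_cat["very_low"], gaps_by_cat["normal"])
print(f"ANOVA p = {p_anova:.4f}; very low vs. normal p = {p_t:.4f}")
```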
The above finding supports that the duration of non-adherence to CGMs is significantly associated with the severity of suboptimal management. Current CGMs have the ability to alert users when BG readings are trending toward out-of-target-range (or abnormal) values. However, the uptake of this feature is unknown, and increased use should be encouraged to improve adherence.
Subject-Level Data Gap Analysis
To further compare the average duration of data gaps during normal and abnormal (i.e. very low, low, high, and very high) BG categories, we calculate the difference in these values expressed as a percentage:

%change = 100 · (T̄_abnormal − T̄_normal) / T̄_normal

where T̄_normal and T̄_abnormal are the average gap durations computed in the normal and abnormal BG categories, respectively. Using this equation, the %change is positive when gaps starting in abnormal categories are longer on average, zero when there is no difference, and negative when gaps starting in the normal category are longer. Figure 8 shows the subject-level analysis of data gaps that started in the normal vs. abnormal BG categories, using dataset-1 as an example. Our analysis revealed that 70% of subjects in dataset-1 and 50% of subjects in dataset-2 had longer average gap durations that started in abnormal BG categories vs. the normal BG category (i.e. a positive %change). This further supports the earlier finding that there exists an association between non-adherence to CGM use and suboptimal management (i.e. missing the target goal). It is important to note that this finding was more prevalent for some subjects (e.g. subjects 3 and 5) and not applicable to others (e.g. subjects 2 and 9). Therefore, it shows that this phenomenon is subject-dependent and not generalizable across all people. This aligns with findings from prior work [11,16,24] that factors which influence usage and adherence patterns to PHDs vary across individuals. The results of this paper add to this body of work by identifying missed health goals as a potential factor that contributes to non-adherence.
Group-Level Data Gap Analysis
Per Table 1, subjects in this study can be broken into subgroups to support the investigation of potential associations between distinct groups and non-adherence to PHDs. We performed a Two-Sample T-test with the null hypothesis: "the average duration of data gaps in different management- and age-subgroups is the same". This can be expressed mathematically as H0: µ1 = µ2, where µ1 and µ2 are the average gap durations for each group.

Table 2: One-way ANOVA table for testing the null hypothesis that "the average duration of data gaps in different BG categories is the same" - using dataset-2. The result shows to reject the null hypothesis.

We used the ADA's glycemic target criteria of A1C less than 7% (≈ average BG < 154 mg/dL) [1,25] as a threshold to divide subjects from both datasets into 2 subgroups: well-controlled (n = 23) vs. poorly-controlled (n = 21) subjects with diabetes. Secondly, we used age as another criterion and the median age of 40.38 yrs as a threshold to divide subjects from dataset-2 into two equal-size groups (n = 17): older vs. younger subjects. Table 4 shows the results from this analysis and supports rejecting the null hypothesis that the average gap duration is the same across groups. We found a statistically significant difference in the average gap duration (p-value = .00063332) between subjects with well-controlled vs. poorly-controlled diabetes. A key result is that subjects with poorly-controlled diabetes had worse adherence to CGMs, as evident through missing data, compared to subjects with well-controlled diabetes. This group-level analysis aligns with the earlier results that suboptimal outcomes (or missed target goals) are a potential factor that influences non-adherence to PHDs. Additionally, we found that older subjects had significantly (p-value = .0034) worse adherence to CGMs, as evident through more missing data, than younger subjects. This aligns with prior research [11,22] which identifies age as a factor associated with varying adherence levels to PHDs, and CGMs more specifically [15]. These findings can guide tailored PHD design and interventions, although it is important to note that there is individual heterogeneity as shown in Fig. 8, and the group-level finding is not a blanket statement for all people in the identified subgroups.
DISCUSSION
In this study, we have investigated adherence to PHDs, with a focus on wearing behavior of CGMs used for diabetes management. We analyzed two independent datasets from a total of 44 subjects for 60-270 days and found that missing data (i.e. data gaps) is not uncommon. Our results show that suboptimal (i.e. low / high) BG values are one factor associated with non-wearing behavior, identified through data gaps. Additionally, the length of data gaps is influenced by management outcomes, such that longer gap durations (i.e. periods of missing data) are significantly associated with extreme (i.e. very low / very high) BG categories. It is important to note that the analysis in this work shows an association, not causality. Prior work supports that there are many reasons for data gaps in CGM readings, such as intermittent sensor error, sensor compression, and user errors [13]. In addition to these, other factors associated with non-adherence to prescription PHDs include knowledge/education, age, associated costs, and psychosocial, usability, and contextual factors [10,15,22,32,33,36]. Nevertheless, the results of this paper highlight a critical dilemma. PHDs are developed to enable ubiquitous monitoring of health status; however, if users do not wear and use the devices consistently when not achieving the target goals, then the benefit is limited. Conversely, if users wear and use the PHDs more often when they are achieving the target goals, then the recorded data may be slightly biased and not a true reflection of the user's BG status. This is particularly important with regard to prescription PHDs, such as CGMs, given that doctors and care-givers rely on this information to understand and evaluate management and to guide treatment plans. In dataset-1, we found that the majority of data gaps (≈ 80%) occurred during the daytime between the hours of 6AM and 12AM (i.e. midnight). Given that most users are likely to be awake and able to make wearing choices during the daytime, this observation is expected. However, we did not observe any significant differences between the average gap duration during the day versus at night. We also observed more data gaps during the weekends (i.e. Saturday and Sunday) compared to during the weekdays (i.e. Monday - Friday), but there were no significant differences between the average gap duration on weekdays vs. weekends.
To account for non-wearing behavior influenced by suboptimal management, we recommend that particular attention be paid to the BG category users were in prior to the start of data gaps (or missing data events). This knowledge can be implemented in context-aware systems that include data-driven adherence analysis in embedded algorithms. Currently, CGM manufacturers such as Medtronic [23] and Dexcom [6] include "sensor wear (per week)" and "sensor usage" in their reports for patients, caregivers, and doctors. However, to the best of our knowledge, there is no analysis on when CGM devices are taken off. Therefore, if there is a pattern of users taking off their CGM device during periods of suboptimal management, this insight will be missed. Such adherence analysis is also applicable to other health domains in which PHDs are beneficial [17], especially as it relates to chronic disease management. For example, significant research has been committed toward wearable PHDs for continuous monitoring of blood pressure, stress, mental illness, and much more [2,3,5]. As wearable PHDs become a reality in other domains, adherence to these PHDs should be considered, with specific attention paid to management outcomes when data gaps or missing data events occur. This analysis can inform targeted interventions to improve adherence to PHDs and health outcomes. Johnson et al. [17] present other application spaces in which PHDs can serve dual functions, namely delivering medication and monitoring adherence to medical devices.
Limitations
Despite the interesting results found in this study, there are limitations that should be addressed in future work. First and foremost, given that the dataset was contributed by active members of online diabetes communities, these users are likely more invested in their health and may have better outcomes than the population at large. For example, Figure 4a shows that the median wear time in both datasets is greater than 22 hours/day. This is relatively uncommon for prescription and non-prescription PHDs [15,16,24]. Additionally, the subject-inclusion criterion for this research was > 65% wear-time for the range of data contributed. Therefore, a more general dataset would likely show more suboptimal management and lower adherence to CGMs or other PHDs. Nonetheless, as shown in Table 1, the datasets in this study included representative samples from subjects with well-controlled and poorly-controlled diabetes based on the ADA glycemic target criteria [1]; therefore, we expect that our results are reproducible.
Another limitation is the assumption that data gaps or missing data are directly indicative of non-adherence to PHDs, specifically CGMs in this case study. The majority of CGMs on the market today use a disposable sensor that has a lifetime of about 3-14 days, depending on the device [36]. Therefore, some data gaps are expected for sensor replacement and device restart. Additionally, some data gaps may be related to CGM battery replacement or recharge, although these are less likely since the battery life of CGM transmitters is about six months [42]. Future work will include follow-up interviews with users to understand reasons for data gaps and non-adherence to PHDs. This learning can further improve the design of such devices.
CONCLUSION AND FUTURE WORK
To the best of our knowledge, the work presented in this paper is one of the few studies that use quantitative, data-driven methods to understand day-to-day factors that affect adherence to prescription wearable medical devices. More specifically, our results suggest that adherence to PHDs is influenced by performance toward the target goal. With a focus on CGMs used in diabetes care, we found that ≈ 33% of missing data occurred when users were not achieving their goal of maintaining BG within the normal range. There were significantly longer durations of missing data when users were farthest away from the target goal (i.e. in extreme or more severe blood glucose categories). Additionally, subjects with poorly-controlled diabetes were observed to have significantly longer average data gap durations than subjects with well-controlled diabetes. This knowledge can inform the design of context-aware systems that include data-driven adherence analysis in embedded algorithms and provide interventions to improve outcomes.
As a starting point for future work, we recommend that PHD adherence analysis should combine qualitative evaluations of non-wearing behavior with data-driven analysis for a more comprehensive understanding of contributing factors. It is important to note that there should be a distinction between non-prescription PHDs (i.e. consumer wearable devices such as physical activity trackers) and prescription PHDs such as CGMs. Given that PHDs for diabetes care are relatively advanced, this application space is ideal for learning insights that can influence future development and use of data from such devices. Future work following this study will explore other contextual factors that influence missing data events as well as good and/or suboptimal management in daily living. The long-term goal is to develop data-driven decision-support tools to improve health. | 2020-06-11T01:01:25.983Z | 2020-05-18T00:00:00.000 | {
"year": 2020,
"sha1": "33f0a630e6657346075102e74df0172c652f62b9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "33f0a630e6657346075102e74df0172c652f62b9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
71714820 | pes2o/s2orc | v3-fos-license | Long noncoding RNAs: p53’s secret weapon in the fight against cancer?
p53 regulates the expression of hundreds of genes. Recent surprising observations indicate that no single protein-coding gene controls the tumor suppressor effects of p53. This raises the possibility that a subset of these genes, regulated by a p53-induced long noncoding RNA (lncRNA), could control p53’s tumor suppressor function. We propose molecular mechanisms through which lncRNAs could regulate this subset of genes and hypothesize an exciting, direct role of lncRNAs in p53’s genome stability maintenance function. Exploring these mechanisms could reveal lncRNAs as indispensable mediators of p53 and lay the foundation for understanding how other transcription factors could act via lncRNAs.
Introduction
Regulation of gene expression plays a crucial role during development and disease. In a given cell, this regulation generally occurs at the level of transcription and is controlled by transcription factors that activate the expression of hundreds of genes to control multiple pathways and diverse phenotypes. In a complex disease such as cancer, is the biology of a transcription factor controlled via a single target gene, controlled via a subset of target genes, or distributed among all its targets? We propose a model according to which a transcription factor mediates its effects via a subset of its target genes. This subset is potentially selected by specific long noncoding RNAs (lncRNAs), an emerging class of regulatory RNAs greater than 200 nucleotides long [1]. This model and the underlying mechanistic scenarios that we propose are based on our current understanding of lncRNAs and recent observations on p53 [2][3][4][5], the most well-studied transcription factor and a major tumor suppressor mutated in more than half of human cancers [6][7][8]. Exploring this model will enhance our understanding of the molecular mechanisms by which gene expression is controlled and lead to the development of improved cancer therapies. In addition to lncRNAs, p53-mediated tumor suppression could also be regulated by p53 binding at enhancers and p53-regulated enhancer RNAs [9][10][11][12]. Here, we will focus on the role of lncRNAs in mediating the tumor suppressor effects of p53.
A lncRNA can regulate a subset of p53 targets. Why?
Recent unexpected findings have led to a new search for mechanisms responsible for p53-mediated tumor suppression. p53 has been studied for almost four decades and is known to directly bind to DNA in a sequence-specific manner to induce the expression of a myriad of genes. The outcomes of p53 activation are diverse and include cell cycle arrest, apoptosis, and senescence [13]. Although the genes that mediate these phenotypes downstream of p53 have been identified, recent puzzling observations strongly suggest that they are not sufficient on their own for p53-mediated tumor suppression in vivo [4,[14][15][16]. For example, unlike p53-knockout mice that develop highly penetrant spontaneous tumors within six months of age, the triple knockout for p21, Puma, and Noxa - the critical mediators of p53-induced apoptosis, cell cycle arrest, and senescence - are not prone to spontaneous tumor development [5]. Perhaps what is even more surprising are the findings from two recent innovative functional genomics studies in which the authors found that no single protein-coding gene controls the antiproliferative effects of p53 [2,3]. These studies utilized RNA interference (RNAi) to knock down protein-coding genes, but noncoding RNAs were not targeted. Because some lncRNAs have been shown to directly regulate the transcription of many genes [17,18] or even an entire chromosome [19,20], it raises the exciting possibility that lncRNAs play a major role in mediating the effects of p53 by regulating a subset of p53 targets. We propose that some p53-induced lncRNAs control the expression of a subset of genes directly or indirectly up-regulated by p53 and, consequently, p53-mediated tumor suppression (Fig 1). This regulation could be mediated by lncRNAs directly regulated by p53 and/or lncRNAs that are activated in coordination with p53 but not in a direct manner [12].
Why lncRNAs? First, although the majority of lncRNAs transcribed from the human genome have not been studied at the molecular level, some lncRNAs play a crucial role in regulating key cellular processes, including, but not limited to, cellular proliferation, metastasis, differentiation, and genomic instability [21]. Because these processes are also controlled by p53, many lncRNAs have the potential to be functionally integrated into the p53 pathway. For example, we have recently shown that the p53-induced lncRNAs PURPL [22] and PINCR [23] regulate p53 itself or the induction of a subset of p53 target genes, including BTG2, RRM2B, and GPX1, which are implicated in cell cycle arrest and apoptosis during DNA damage. In addition, the p53-induced lncRNAs NEAT1 and PINT have been recently shown to play crucial roles in p53-mediated tumor suppression [16,[24][25][26][27][28]. Second, because lncRNAs are typically expressed at low levels, it may be that they do not directly regulate the expression of hundreds of genes. Although this argument is very difficult to address experimentally, lincRNA-p21, a well-studied p53-induced lncRNA [29][30][31], is expressed at eight molecules per cell and has been proposed to lack genome-wide regulatory functions [31,32]. Therefore, lncRNAs are more likely to regulate a subset of genes within a pathway. Third, just as p53 directly or indirectly controls the expression of genes at various stages transcriptionally and post-transcriptionally, lncRNAs have been shown to modulate gene expression at a variety of stages, often depending on their patterns of subcellular localization. Therefore, the low abundance of lncRNAs and their high functional correlation with p53 suggest that lncRNAs likely play a critical role in the p53 pathway by regulating a subset of p53 targets, which control tumor suppressor activities.
A lncRNA can regulate a subset of p53 targets by associating with DNA. Mechanisms?
The p53-regulated transcriptome consists of genes that are directly up-regulated, indirectly up-regulated, and indirectly repressed (Fig 2). The subset of genes in the p53-regulated transcriptome that mediate the tumor suppressor effects of p53 could, in principle, belong to either or all of these categories. A lncRNA could act as an activator and/or repressor of gene expression and modulate the expression of a subset of the p53-regulated transcriptome. What are the molecular mechanisms by which a lncRNA regulates these genes? Perhaps the most widely conceived function of lncRNAs is their association with chromatin to activate or repress gene expression. Therefore, the efficacy of lncRNA function lies in its specificity for target gene loci and must involve mechanisms through which it can associate with chromatin at the intended loci.
One mechanism through which this could be achieved is via direct association of the lncRNA with chromatin through the formation of RNA-DNA hybrids (R-loops) at regulatory DNA sequences of a subset of p53 targets (Fig 3A). R-loops are thermodynamically favorable and thus represent a plausible mechanism to facilitate direct, sequence-specific binding of the lncRNA at target gene loci [33]. These interactions could occur at regulatory regions of a subset of p53 targets that are hot spots for R-loop formation, including promoters, 1 to 2 kb downstream of the transcription start site, or near the polyadenylation sequence at the 3´end of the target gene. Formation of such R-loops by lncRNAs may be aided by sequence elements that promote R-loop formation, such as G-rich sequences in the lncRNA and negatively supercoiled DNA. Alternatively, RNA helicases bound to a lncRNA could facilitate the conversion of intramolecular Guanine-Cytosine (GC)-rich sequences in the lncRNA to single-stranded regions, thereby increasing the affinity of the lncRNA for the target DNA. In addition, a lncRNA can be recruited to chromatin by its interaction with DNA- and RNA-binding proteins (DRBPs), a class of proteins that can bind to DNA as well as RNA [34]. In this case, the DNA sequence recognized by the lncRNA will be determined by the specific binding motifs in the DRBP (Fig 3B).
Through these mechanisms of lncRNA association with chromatin, a lncRNA will be able to interact with regulatory regions of p53 targets in a sequence-specific manner. Where could these regulatory regions be in the genome? These regions could be in enhancers, promoters, insulators, or the gene body of a subset of p53 targets. In this way, a lncRNA may recruit transcriptional regulators to aid in transcription, thus increasing the expression of certain p53 target genes. A lncRNA may also increase target gene expression by facilitating chromatin looping. In this case, the lncRNA can interact with enhancer regions to increase the transcriptional activity of p53 targets by altering chromatin structure. For example, PINCR likely plays a role in the formation of chromatin loops between the promoters and enhancers of the p53 targets BTG2, GPX1, and RRM2B [23]. Additionally, lncRNAs may influence histone modifications at regulatory sites to repress or induce gene expression. This regulation can be achieved through lncRNA interactions with chromatin-modifying enzymes. lncRNAs could also facilitate association of RNA-binding proteins (RBPs) with chromatin to regulate the expression of a subset of p53 targets. For example, the lncRNA XIST has been shown to bind to several RBPs to achieve X-chromosome inactivation [35].
A lncRNA can regulate a subset of p53 targets by associating with pre-mRNAs. How?
A lncRNA can indirectly associate with chromatin via interactions with pre-mRNAs of a subset of p53 targets. RNA-RNA interactions are extremely stable, indicating that this can be a significant mechanism through which lncRNAs associate with chromatin. These interactions may be direct, involving sequence-specific binding of the lncRNA to the pre-mRNA (Fig 3C). Alternatively, this pre-mRNA-facilitated association of the lncRNA with chromatin could occur indirectly via RBPs that recognize specific motifs in the pre-mRNA and the lncRNA (Fig 3D). For example, MALAT1, a nuclear-speckle-localized lncRNA, has been shown to interact indirectly with pre-mRNAs through RBPs [36]. These direct or indirect RNA-RNA interactions will allow for sequence-specific control of a subset of p53 target genes at the post-transcriptional level, resulting in facilitating or inhibiting mRNA splicing and/or processing by a p53-induced lncRNA.
A lncRNA could regulate a subset of p53 targets beyond interactions with chromatin. How?
The mechanisms of associating with chromatin outlined above may represent the most commonly utilized mode by which nuclear-retained lncRNAs dictate a subset of p53 targets. However, lncRNAs that localize to the cytoplasm also have the potential to exert similar functions. Some cytoplasmic lncRNAs are known to promote either mRNA stability or degradation [37]. A p53-induced lncRNA could bind sites targeted by the mRNA degradation machinery to increase the stability of mRNAs in the p53-regulated transcriptome. Alternatively, the lncRNA may recruit degradation machinery and bind to such mRNAs to decrease stability. These lncRNA-mRNA interactions could also occur indirectly via RBPs that provide sequence specificity. For example, mRNAs of specific p53 targets may be destabilized by lncRNAs under normal conditions but stabilized by lncRNAs in response to p53 activation. Additionally, lncRNAs could promote or inhibit the translation of a subset of p53-regulated mRNAs on polysomes. For example, the human lncRNA RoR has been shown to repress p53 mRNA translation [38]. Finally, a lncRNA that localizes to the cytoplasm may alter mRNA levels by acting as a competing endogenous RNA (ceRNA) [39]. In this scenario, the lncRNA may prevent interaction of a microRNA (miRNA) with its target mRNAs by outcompeting mRNAs for shared miRNAs.
An important point to consider with each of these mechanistic scenarios is the copy number per cell of the lncRNA that is being considered. The p53-induced lncRNA and the target miRNA or protein should be present in stoichiometric amounts to substantially affect the expression of the target gene. Perhaps the best example of a cytoplasmic lncRNA in which the issue of stoichiometry was carefully considered is NORAD, a very abundant lncRNA that sequesters Pumilio proteins to maintain genome stability [40].
Direct regulation of DNA repair by a p53-induced lncRNA. How?
One of the hallmarks of cancer is DNA double-strand breaks (DSBs), which, if not repaired accurately, lead to mutations and chromosomal rearrangements [41]. DSBs are generally repaired by homologous recombination (HR) or nonhomologous end joining (NHEJ). The preferred substrate for HR is the sister chromatid, but this substrate is only available during the S and G2 phases of the cell cycle. Therefore, during G0/G1, NHEJ can be used to join broken DNA ends without the use of a template DNA, but this is error prone. Alternatively, to repair DNA by HR in the G0/G1 phase of the cell cycle, a cell can potentially use endogenous RNA as a substrate for DNA synthesis. Although direct involvement of RNA in DNA repair was unanticipated and considered a rare mechanism, there is now mounting evidence that endogenous RNAs can serve as a template for DSB repair [42][43][44][45][46][47]. We propose that, in addition to the known roles of some lncRNAs - such as DDSR1 and damage-induced lncRNAs (dilncRNAs) in DNA repair pathways [48,49], and NORAD in the maintenance of genome stability [40,50] - specific antisense lncRNAs play direct roles in DSB repair by base-pairing to the damaged DNA.
RNA-mediated DNA repair runs counter to the central dogma, in that DNA is synthesized from an RNA template. These repair mechanisms, therefore, could rely on RNA interacting with a reverse transcriptase (RT). Although this interaction is well established in the context of retroviruses, retrotransposons, and during telomere synthesis, reverse transcription is likely not limited to these scenarios. Nearly half of the noncoding human genome consists of repetitive elements [51], and over 75% of lncRNA sequences contain elements derived from retrotransposons [52]. Therefore, the majority of lncRNAs contain sequences that are known to interact with RT in the context of DSB repair. One could imagine that genes that play more significant roles in cellular pathways may contain antisense lncRNAs to preserve their integrity. Taken together, this evidence leads to the hypothesis that lncRNAs play crucial roles in "guarding the genome" by functioning in DSB repair pathways.
p53 is known to play a crucial role in maintaining genomic stability [53]. A recent report that utilized in vivo RNAi screening on select p53 targets provided strong evidence that in some contexts the genome stability function of p53 is regulated by the DNA repair gene Mlh1, and the regulation of the DNA repair process plays a very important role in p53-mediated tumor suppression [54]. Most lncRNAs were not included in the RNAi library. This raises the following question. In addition to the regulation of the DNA repair function of p53 via Mlh1, can a p53-induced lncRNA also play a direct role in DSB repair? How could this occur, and what could be the advantages over other mechanisms? p53 is the guardian of the genome [13,55,56]. It accomplishes its genome stability maintenance function by a variety of well-established mechanisms, including direct transcriptional regulation of specific genes that control cell cycle arrest, apoptosis, senescence, and DNA repair. For example, upon DNA damage, induction of cell cycle arrest by p53 ensures that DNA replication is turned off, thereby allowing more time for the damaged DNA to be repaired. If the DNA damage is too severe to be repaired, p53 kills the cell by inducing apoptosis. In cancer cells that express dysfunctional p53 or have no p53, these processes are disrupted, leading to increased genome instability.
For the regulation of these processes via a protein-coding gene, the p53 target gene(s) would have to be transcriptionally induced by p53, and the corresponding mRNA would then be exported to the cytoplasm and translated. The effector protein may need to be post-translationally modified and, in some cases, imported to the nucleus. We propose that a p53-induced lncRNA could provide a faster and more efficient mechanism of fixing the damaged DNA by directly facilitating lncRNA-mediated DNA repair in the nucleus.
How could this occur? The repair may be templated or nontemplated and will utilize a lncRNA transcript homologous to the DNA flanking the DSB. This transcript could be an antisense lncRNA transcribed from the damaged locus prior to DNA damage or from the undamaged homologous allele in response to DNA damage. An exciting possibility is that the antisense lncRNA could pair with homologous DNA at the site of a DSB. In cis, a lncRNA could base-pair with both sides of the DSB and facilitate end joining (Fig 4). This lncRNA-DNA interaction at the site of the break would occur by forming R-loops. HR machinery has been shown to modulate R-loop formation in the context of genomic instability [57]. Additionally, RNA-DNA hybrids have been shown to promote HR [58]. In trans, an antisense lncRNA or a lncRNA that shows partial complementarity to the regions near the damaged DNA may associate with chromatin at the site of the DSB through a DRBP or directly by forming R-loops. Although both scenarios are possible, lncRNA repair in cis would be preferred because a transcript from the same site as the DSB would have greater repair frequency.
In addition to facilitating end joining, an antisense lncRNA could also act as a template for DNA repair, allowing for extension of one of the free DNA ends using a currently unidentified RT (Fig 4). A major advantage of lncRNA-mediated DSB repair could be that only a single molecule of the antisense lncRNA will be required. Our proposed models of lncRNA-mediated DNA repair could be exciting, prevalent, and largely unexplored mechanisms that may unmask the functions of hundreds of currently functionally uncharacterized lncRNAs.
Conclusions and beyond
We propose that a subset of p53 targets regulated by a lncRNA are required for effective tumor suppression. This hypothesis is supported by the low abundance of lncRNAs and their functional correlation with p53. As described above, lncRNAs may control this subset of p53 targets through direct association with chromatin or indirect association with chromatin via nascent pre-mRNA; these interactions may be facilitated by RBPs or DRBPs. Moreover, lncRNAs could modulate p53 targets at the level of translation via interactions with miRNAs or mRNAs on polysomes. Through these mechanisms, lncRNAs may be necessary for effective p53 tumor suppression through regulation of the expression of a subset of p53 targets. Our proposed model on a potential role of antisense p53-regulated lncRNAs in DNA DSB repair, in addition to the known role of Mlh1 in DNA repair downstream of p53 [54], is exciting to explore given the fact that p53 is the guardian of the genome. It is likely that these roles of lncRNAs may be broadly applicable to other transcription factors, accounting for the wide array of lncRNA cellular functions and a significant number of lncRNA genes in the human genome. Therefore, lncRNAs may represent integral regulators of transcription factor pathways and may be necessary for carrying out the observed functions of such transcription factors to play critical roles in development and disease.
Fig 4. Regulation of DSB repair in cis by a p53-induced antisense lncRNA. The antisense lncRNA binds DNA at either side of the break and facilitates end joining or acts as a template for resynthesis of damaged DNA. The RNA polymerase that transcribes the antisense lncRNA is shown as a green circle; the RT for synthesis of cDNA using the lncRNA as template is shown as an orange circle. DSB, double-strand break; lncRNA, long noncoding RNA; RT, reverse transcriptase. https://doi.org/10.1371/journal.pbio.3000143.g004 | 2019-02-25T05:56:08.521Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "1267c857e2a758048024dec661e4e4178864db28",
"oa_license": "public-domain",
"oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.3000143&type=printable",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1267c857e2a758048024dec661e4e4178864db28",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
211832523 | pes2o/s2orc | v3-fos-license | Simple Nanoparticles from the Assembly of Cationic Polymer and Antigen as Immunoadjuvants
Since antigens are negatively charged, they combine well with positively charged adjuvants. Here, ovalbumin (OVA) (0.1 mg·mL−1) and poly (diallyldimethylammonium chloride) (PDDA) (0.01 mg·mL−1) yielded PDDA/OVA assemblies characterized by dynamic light scattering (DLS) and scanning electron microscopy (SEM) as spherical nanoparticles (NPs) of 170 ± 4 nm hydrodynamic diameter, 30 ± 2 mV of zeta-potential and 0.11 ± 0.01 of polydispersity. Mice immunization with the NPs elicited high OVA-specific IgG1 and low OVA-specific IgG2a production, indicating a Th-2 response. The delayed-type hypersensitivity reaction (DTH) was low and comparable to the one elicited by Al(OH)3/OVA, suggesting again a Th-2 response. PDDA advantages as an adjuvant were simplicity (a single-component adjuvant), the low concentration needed (0.01 mg·mL−1 PDDA) combined with antigen yielding negligible cytotoxicity, and the high stability of PDDA/OVA dispersions. The NPs elicited much higher OVA-specific antibody production than Al(OH)3/OVA. In vivo, the nano-metric size possibly assured antigen presentation by antigen-presenting cells (APC) at the lymph nodes, in contrast to the location of Al(OH)3/OVA microparticles at the site of injection for longer periods with stimulation of local dendritic cells. In the future, it will be interesting to evaluate combinations of the antigen with NPs carrying both PDDA and elicitors of the Th-1 response.
Introduction
Adjuvants are essential components of modern vaccines; they enhance the magnitude and guide the type of adaptive immune response to produce the most effective form of immunity for each specific pathogen [1,2]. Adjuvants act, mostly, as antigen carriers and/or as pattern recognition receptor (PRR) agonists. The demand for adjuvants in vaccine formulations emerged from the application of purified antigens, which are poor immunogens due to lack of the danger signals of pathogens' entire cells, crucial for activating the innate immune system [1,3,4]. Currently, there are few adjuvants licensed for human use; aluminum-based adjuvants like Al(OH)3 are the only ones approved worldwide, performing the task of presenting negatively charged antigens due to their positive charge at the pH of water [5][6][7]. Other cationic adjuvants based on nanoparticles [8][9][10][11], liposomes [12][13][14], cationic bilayer fragments [8,15,16] or supported cationic bilayers on polymeric nanoparticles (NPs) [9], silica [17],

All dispersions were obtained in Milli-Q water. Size distribution, zeta-average diameter (Dz), zeta-potential (ζ), polydispersity (P), conductance (G), and colloidal stability for the dispersions were determined by DLS using a Zeta-Plus Zeta-Potential Analyzer (Brookhaven Instruments Corporation, Holtsville, NY) equipped with a 677 nm laser. The diameter from DLS is the mean hydrodynamic (zeta-average) diameter Dz unless otherwise stated. Zeta-potential (ζ) was determined from the electrophoretic mobility µ and the Smoluchowski equation, ζ = µη/ε, where η and ε are the viscosity and the dielectric constant of the medium, respectively. The physical properties of the dispersions (Dz, ζ, and polydispersity P) from the DLS technique were determined by applying well-defined mathematical equations [45]. The colloidal stability of the dispersions (at 0.01 and 0.1 mg·mL−1 of PDDA and OVA concentrations, respectively) was followed for 48 h by determining the effect of time on Dz, P, ζ, and G.
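As a concrete illustration of the Smoluchowski relation quoted above, the following is a minimal sketch (Python; not the instrument's software), assuming standard literature values for the viscosity and permittivity of water at 25 °C:

```python
# Zeta-potential from electrophoretic mobility via the Smoluchowski
# equation, zeta = mu * eta / epsilon. Constants are standard values for
# water at 25 degrees C (assumed; not taken from the paper).

ETA = 8.9e-4            # viscosity of water (Pa*s)
EPS_R = 78.5            # relative permittivity of water
EPS_0 = 8.854e-12       # vacuum permittivity (F/m)

def zeta_potential_mV(mobility: float) -> float:
    """Zeta-potential in mV from mobility in m^2 V^-1 s^-1."""
    return mobility * ETA / (EPS_R * EPS_0) * 1e3

# A mobility of ~2.3e-8 m^2/(V*s) maps to roughly +30 mV, the order of
# magnitude reported for the PDDA/OVA NPs.
print(f"{zeta_potential_mV(2.3e-8):.1f} mV")
```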
UV-Vis spectra of the PDDA/OVA dispersions were acquired at 25 °C using a Shimadzu UV-1800 spectrophotometer (SSI; Kyoto, Japan) from 200 to 800 nm in appropriate cuvettes against a Milli-Q blank of pure water.
For SEM experiments, the PDDA/OVA NPs dispersion in Milli-Q water (at 0.01 and 0.1 mg·mL−1 of PDDA and OVA concentrations, respectively) was placed on a round glass coverslip and dried overnight at room temperature before coating with a gold layer using a Leica EM SCD 050 sputtering apparatus for observation under a FEI Quanta 250 scanning electron microscope.

L929 fibroblast and J774A.1 macrophage cell lines were obtained from the American Type Culture Collection (ATCC; www.atcc.org) and cultured according to standard protocols for mammalian cell culture under sterile conditions, in an atmosphere of 90% humidity, 5% CO2, at 37 °C in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS), 1 unit/mL of penicillin-streptomycin, and 2 mM L-glutamine. For interaction with cells, PDDA, OVA, and PDDA/OVA solutions were also prepared in RPMI-1640 medium supplemented with 10% fetal bovine serum (FBS), 1 unit/mL of penicillin-streptomycin, and 2 mM L-glutamine. The interaction between cells and solutions to be tested was performed as follows. L929 fibroblasts and J774A.1 macrophages were plated into 96-well microtiter plates at a density of 10,000 cells/well and incubated for 12 h before replacing the culture medium with 100 µL of PDDA, OVA, or PDDA/OVA solutions. Thereafter, the mixtures were again incubated in a humidified CO2 incubator for 3 and 24 h before determining the in vitro cytotoxicity of PDDA, OVA, and PDDA/OVA by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide (MTT) assay [46].

An MTT stock solution at 5 mg·mL−1 in PBS was filtered for sterilization and removal of insoluble MTT residues. Ten microliters of this stock MTT solution were added per well, each well containing 100 µL of the plated cells interacting with OVA, PDDA, or PDDA/OVA. After incubation (37 °C/2 h), the supernatants were withdrawn and the cells adhered to the wells were mixed meticulously with 100 µL of an isopropanol solution acidified to 0.04 N HCl in order to dissolve the formazan crystals. The absorbance of each well was recorded at 570 nm on a Multiskan Ex Microelisa reader. As a control, cells mixed with the culture medium only yielded 100% cell viability. Cell viability in the presence of OVA, PDDA, or OVA/PDDA solutions was expressed as % of the control.
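Since the MTT readout reduces to normalizing treated-well absorbances against untreated controls, a minimal sketch of that calculation (Python; hypothetical triplicate absorbances, not the authors' data) is:

```python
import numpy as np

def viability_percent(a570_treated, a570_control):
    """Cell viability (%) = A570 of treated wells / mean A570 of controls."""
    treated = np.asarray(a570_treated, dtype=float)
    return 100.0 * treated / float(np.mean(a570_control))

control = [0.82, 0.79, 0.85]     # untreated wells define 100% viability
pdda_low = [0.78, 0.80, 0.74]    # e.g., 0.01 mg/mL PDDA (hypothetical values)
print(viability_percent(pdda_low, control).round(1))  # -> [95.1 97.6 90.2]
```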
Determination of Cell Morphology in the Presence of PDDA by Scanning Electron Microscopy (SEM)
L929 fibroblasts (ATCC CCL-1) were grown up to 80-100% confluency before taking aliquots of this culture for counting in a Neubauer chamber and adding about 100,000 cells/well to wells that had received round coverslips (one coverslip per well) in 24-well plates, so that cells were allowed to adhere on the coverslips for 12 h. The culture medium was replaced by PDDA solutions at concentrations of 0.01, 0.1, and 1 mg·mL−1, and the samples were incubated in a humidified CO2 incubator for 3 h. Cells were then fixed in Karnovsky buffer (5% glutaraldehyde, 4% paraformaldehyde, 0.1 M sodium cacodylate, pH 7.2) for 3 h before progressive dehydration was performed with added ethanol solutions (concentrations ranging from 7.5% to 100%; 15 min interaction with cells per concentration). Thereafter, the cells on the coverslips were further dried in a Leica CPD 030 critical-point dryer and covered with gold in a Leica EM SCD 050 sputtering device for observation in an FEI Quanta 250 scanning electron microscope.
Immunization Scheme
Groups of five BALB/c mice were immunized subcutaneously at two separate sites on the base of the tail with a total injection volume of 0.2 mL/animal using solutions or dispersions at the final concentrations of adjuvants and/or OVA antigen shown in Table 1. A booster using the same dose employed for priming was carried out at 21 days post-immunization.
Ethical procedures for experimentation with animals were followed and were in accordance with guidelines approved by the Instituto Butantan's Committee of Ethics on Animal Research (protocol number 7912280219).

For the determination of anti-OVA antibodies by ELISA, microtiter plates were coated with OVA (10 µg/mL in 0.01 M phosphate-buffered saline (PBS), pH 7.2). The wells were blocked with 3% gelatin in PBS for 2 h and then incubated (1 h/37 °C) with serially diluted serum samples. In each well, 100 µL of goat anti-mouse biotin-conjugated IgG1 (1:1000) or IgG2a (1:500) (Southern Biotechnology Associates, AL, USA) was added to the corresponding plates and incubated for 1 h at 37 °C. Then, peroxidase-labeled streptavidin (100 µL of a 1:3000 dilution) was added and incubated for 1 h at 37 °C. Plates were washed three times after each incubation step with PBS containing 0.05% Tween 20 (PBST). Finally, 100 µL of OPD substrate solution (1 mg·mL−1) and H2O2 (1 µL/mL) diluted in 0.1 M citrate-phosphate buffer (pH 5.0) were added to each well and incubated for 15 min at room temperature. The reaction was stopped by adding 50 µL of 2 M H2SO4 to each well. Absorbance was determined at 492 nm for individual wells using an ELISA plate reader (Multiskan Ex, Thermo Electron Corporation). The results were expressed for diluted serum samples as the mean absorbance ± standard deviation. The primary response of IgG1 was presented at a dilution of 1/256 and the secondary response at 1/16348. The primary response of IgG2a was presented at a dilution of 1/8 and the secondary response at 1/128; the dilutions chosen correspond to the linear part of the titration curve.
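The dilutions quoted above follow a two-fold serial dilution scheme; a minimal sketch generating such dilution factors (Python; illustrative only — note that a strict two-fold series starting at 1/256 reaches 1/16384) is:

```python
def twofold_dilutions(start, steps):
    """Dilution factors start, 2*start, 4*start, ... (steps values)."""
    return [start * 2 ** i for i in range(steps)]

print(twofold_dilutions(8, 5))    # [8, 16, 32, 64, 128]  -> IgG2a read-out range
print(twofold_dilutions(256, 7))  # [256, ..., 16384]     -> IgG1 read-out range
```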
Determination of Delayed-Type Hypersensitivity Reaction (DTH)
For evaluation of the delayed-type hypersensitivity reaction, five mice/group on the fifth day post-immunization were injected at the left-hind footpad with a previously heated (1 h/80 °C) and denatured OVA solution (30 µL, 2 µg/µL in saline). As a control, the same volume of saline was injected at the right-hind footpad of each animal. Differential footpad swelling was determined 24 h after injection with a Mitutoyo digital micrometer and considered as the difference between the swelling of the left and right paws for the same animal. The results were represented as arithmetic mean footpad swelling ± standard deviation.
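The DTH readout is thus the per-animal difference between the OVA-challenged and saline-injected paws, summarized as mean ± standard deviation; a minimal sketch (Python; hypothetical measurements for a five-mouse group) is:

```python
import numpy as np

left_mm = np.array([0.48, 0.52, 0.41, 0.45, 0.50])    # OVA-injected paws (mm)
right_mm = np.array([0.10, 0.12, 0.08, 0.11, 0.09])   # saline-injected paws (mm)

swelling = left_mm - right_mm                          # differential swelling
print(f"{swelling.mean():.2f} ± {swelling.std(ddof=1):.2f} mm")
```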
Statistical Analysis
To compare results for different groups, two-way analysis of variance (ANOVA) followed by Tukey's multiple comparisons test was used. p-values below 0.05 were considered significant. Statistical analysis was performed using the Origin 2018 program (OriginLab Corporation, Northampton, USA). Statistical analyses for IgG1 and IgG2a were performed separately.
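The same kind of comparison can be reproduced with open-source tools; the sketch below (Python with SciPy/statsmodels rather than Origin, a one-way layout for brevity where the text used a two-way design, and hypothetical absorbances) runs an ANOVA followed by Tukey's test at α = 0.05:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

naive = [0.05, 0.06, 0.04, 0.05, 0.06]
ova = [0.20, 0.25, 0.22, 0.19, 0.24]
pdda_ova = [0.90, 0.85, 0.95, 0.88, 0.92]

print(f_oneway(naive, ova, pdda_ova))       # global F-test across groups

values = np.concatenate([naive, ova, pdda_ova])
groups = ["naive"] * 5 + ["OVA"] * 5 + ["PDDA/OVA"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # pairwise comparisons
```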
Results and Discussion
3.1. Characterization of PDDA/OVA Dispersions Regarding Formation of Nanoparticles, Their Size, Surface Potential, Polydispersity, Morphology, and Colloidal Stability.
In order to ascertain the formation of nanoparticles from PDDA/OVA dispersions, the first property of the mixtures inferred by visual observation was the occurrence of a turbid appearance. The interaction between OVA and PDDA was driven by their opposite charges. The OVA isoelectric point is around 4.5 [47]; its charge at the pH of water (6.3) is negative. PDDA, as a cationic antimicrobial polymer [40], is expected to interact electrostatically with OVA. In fact, PDDA and BSA and other negatively charged proteins interacted in dispersion to yield nanoparticles [41,42,48]. Figure 1 shows micrographs of PDDA/OVA dispersions obtained by SEM. These revealed the occurrence of NPs with a mean diameter of dried NPs of D = 234 ± 42 nm as derived from ImageJ software. Typically, some aggregation due to the drying process (required by the SEM technique) might have slightly increased the mean size of the NPs. Indeed, using DLS, the mean hydrodynamic diameter of the same NPs dispersion was Dz = 170 ± 4 nm (Figure 2a). The NPs at 0.1 mg·mL−1 OVA and 0.01 mg·mL−1 PDDA displayed a positive net charge (ζ = 30 ± 2 mV), and the dispersion of NPs exhibited a relatively low polydispersity of P = 0.11 ± 0.01.

Curiously, the concentration of the OVA stock solution used to prepare the PDDA/OVA dispersions was important to define the PDDA/OVA NPs size (not shown). The lower the OVA stock solution concentration, the lower the size of the PDDA/OVA NPs, possibly due to variable degrees of intermolecular OVA/OVA aggregation. At high concentrations, OVA/OVA aggregation in the stock solution would be higher than at low concentrations. In the literature, OVA/OVA aggregation with increasing OVA concentration has already been observed by other authors [49].

The nano-size of all PDDA/OVA NPs in water dispersion was reconfirmed by determining the dependence of turbidity on the wavelength of the incident light (λ). Rayleigh scattering by turbid dispersions of NPs means a linear dependence of turbidity on 1/λ4, as the one shown in Figure 2b. Usually, Rayleigh scattering occurs for spherical particles in water dispersion with diameters very small compared with λ [50,51], showing that the PDDA/OVA NPs are indeed spherical and nano-sized.
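The Rayleigh criterion invoked above — turbidity linear in 1/λ4 — can be verified with a straight-line fit; a minimal sketch (Python; synthetic turbidity values, not the measured spectrum) is:

```python
import numpy as np

wavelength_nm = np.array([400, 450, 500, 550, 600, 650, 700], dtype=float)
turbidity = np.array([0.310, 0.194, 0.127, 0.087, 0.061, 0.044, 0.033])

x = wavelength_nm ** -4                      # Rayleigh predictor, 1/lambda^4
slope, intercept = np.polyfit(x, turbidity, 1)
residuals = turbidity - (slope * x + intercept)
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((turbidity - turbidity.mean()) ** 2)
print(f"R^2 = {r2:.4f}")   # values close to 1 indicate Rayleigh behaviour
```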
We defined 0.1 and 0.01 mg·mL −1 as the OVA and PDDA concentrations, respectively, to be used throughout this study based on the experiments described next.
In order to evaluate the effect of [PDDA] on the physical properties of PDDA/OVA NPs at 0.1 mg·mL−1 OVA, the concomitant change in size (Dz), zeta-potential (ζ), polydispersity (P), and conductance (G) was obtained using the DLS apparatus. NPs size initially increased with PDDA concentration, attaining a maximal value when ζ was zero, meaning that the absence of electrostatic repulsion between the NPs would cause loss of their colloidal stability (Figure 3a,b). Further increasing PDDA concentration to 0.01 mg·mL−1 re-stabilized the dispersion, yielding a positive and relatively high zeta-potential (around 30 mV). This was the highest zeta-potential obtained over all ranges of PDDA concentrations tested. However, on increasing [PDDA] above 0.01 mg·mL−1, an increase in the conductance (G) of the dispersions revealed that further incorporation of PDDA onto the NPs did not occur (Figure 3d). Additional PDDA remained in the bulk solution, increasing G. The poly-cation nature of PDDA has indeed been shown to result in increasing conductance with increases in PDDA concentration in water solutions [52]. Over a range of high [PDDA], above 0.2 mg·mL−1 PDDA, the polydispersity (P) of the dispersions increased (Figure 3c), possibly due to PDDA-induced bridging flocculation and PDDA/OVA NPs aggregation [39].

From the results in Figure 3, the best conditions for obtaining positively charged, stable, and nano-sized PDDA/OVA NPs became clear, namely, 0.1 and 0.01 mg·mL−1 for OVA and PDDA concentrations, respectively. At these concentrations, the usual cytotoxicity of cationic polymers is minimized [29]. The smallest concentration possible of the cationic polymer PDDA for obtaining the desired properties of the NPs was determined as 0.01 mg·mL−1 at 0.1 mg·mL−1 OVA (Figure 3). In Figure 4, using these same concentrations, the colloidal stability of the PDDA/OVA NPs in water was determined over time, remaining excellent during the whole observation period (48 h) and possibly beyond (from visual and macroscopic observation). The NPs were stable colloids. Paul Dubin and co-workers also found good colloidal stability for PDDA/BSA NPs over 4 months [42].
Cytotoxicity of PDDA/OVA NPs Against Mammalian Cells in Culture
The cytotoxicity of the PDDA0.01/OVA0.1 formulation (meaning 0.01 and 0.1 mg·mL−1 of PDDA and OVA concentrations, respectively) was evaluated against two cell lines in culture, fibroblasts and macrophages, using two different techniques, the MTT assay and SEM.
In Figure 5, significant cytotoxicity for PDDA against the mammalian cells was obtained only above 0.01 mg·mL−1 for the two time points (3 and 24 h) and the two serum concentrations tested (2% and 10% FBS). This agrees with data by Fischer and coworkers, who determined low cytotoxicity for 0.01 mg·mL−1 PDDA against fibroblasts. Along similar lines, hemolytic effects of PDDA did not occur at such a low concentration [53].
The use of two serum concentrations aimed at finding an eventual effect of serum protein neutralization of the positive charges on PDDA contributing to diminished cytotoxicity. The results in Figure 5 showed that both serum concentrations had a similar effect on cytotoxicity against the two cell lines. Possibly, at 0.01 mg·mL−1 PDDA, FBS at 2% was enough to combine with all PDDA molecules so that free PDDA was not available to combine with the cells. On the other hand, at higher [PDDA], PDDA molecules that did not combine with FBS would attach to the cells, exerting their cytotoxic activity. The hemolytic activity of PDDA was reported as important only above [PDDA] = 1 mg·mL−1 [29,33,53].
In agreement with the present findings, Fischer and co-workers also reported low cytotoxicity of PDDA against fibroblasts at 0.01 mg·mL−1 using the MTT assay, besides showing that PDDA was less toxic against the cells than PEI or PLL [29].

The effect of PDDA on the morphology of the fibroblasts is presented in Figure 6. Holes could be seen on the cells, indicating that the many debris observed in the SEM micrographs stemmed from the cell membrane, apparently withdrawn from the cell by the cationic polymer PDDA. Similar cell debris were observed by Martinez et al. with 10 mM poly(allylamine hydrochloride) [31] and with linear or ramified PEI or PLL [30], suggesting a general feature of poly-cations acting by disruption of cell membranes. This also took place for multi-resistant bacteria submitted to PDDA assemblies with anionic carboxymethylcellulose (CMC), for which both the cell wall and the cell membrane were lysed, allowing the leakage of intracellular compounds [32].
Immunoadjuvant Properties of PDDA/OVA NPs.
Adjuvants such as Al(OH)3 usually improve the antigen-specific production of IgG1, whereas adjuvants such as the cationic lipid dioctadecyldimethylammonium bromide (DODAB) dispersed as bilayer assemblies usually implement the antigen-specific production of IgG2a [8,15,20,25,27]. Figure 7 shows the humoral responses of mice challenged with the Al(OH)3/OVA0.1 dispersion and the PDDA0.01/OVA0.1 NPs as determined by ELISA for detection of IgG1 and IgG2a antibodies. The NPs displayed an immune response profile qualitatively very similar to the one of Al(OH)3 [54], yielding an expressive increase in IgG1 and a discrete increase in IgG2a production, as usual for a Th-2 response implemented by an adjuvant. However, in quantitative comparison with Al(OH)3, the PDDA-based NPs were more effective than Al(OH)3/OVA for implementing OVA-specific IgG1 production. In the literature, the size histogram for the Al(OH)3 dispersion alone or carrying an antigen showed a broad size distribution with high polydispersity and low colloidal stability, as depicted from the presence of precipitated material [17]. There was a much higher production of IgG1 than IgG2a, as depicted from the serum dilutions used to determine both antibodies by ELISA. Whereas the serum dilution used to determine the primary IgG1 production was 1/256, the one used to determine primary IgG2a production was 1/8 only. This means that the serum contained a much larger IgG1 concentration than that of IgG2a (Figure 7).

Table 1. Mean absorbance at 492 nm ± standard deviation related to anti-OVA IgG1 and IgG2a antibody production determined over time after immunization of BALB/c mice from sera collected on days 14 and 21 post-immunization (primary response) or on day 28 (secondary response). The primary IgG1 production was determined at 1/256 serum dilution whereas the secondary one was obtained at 1/16348 dilution. The primary IgG2a production was determined at 1/8 serum dilution whereas the secondary one was obtained at 1/128 dilution. p < 0.05 compared to naive group (#), p < 0.05 compared to OVA group (o), p < 0.05 compared to Al(OH)3/OVA group (∆).
The advantages of the PDDA/OVA NPs are their nano-size (Figures 1 and 2), high colloidal stability (Figure 4), and visual absence of precipitated material. Possible consequences in vivo would be the optimization of antigen presentation by APC at the lymph nodes. Indeed, Manolova and coworkers [55] found that nano-sized adjuvant/antigen assemblies directly delivered the antigen to the APC in the lymph nodes, whereas micro-sized ones stayed longer at the site of injection, becoming mostly associated with dendritic cells (DC) from the site of injection. This could explain the higher production of IgG1 induced by PDDA/OVA NPs in comparison to Al(OH)3/OVA.
Other cationic polymers carrying antigen also improved the humoral response above the one obtained with the traditional Al(OH)3. Diverse forms of PEI elicited potent mucosal adjuvant activity for viral subunit glycoprotein antigens; a single intranasal administration of influenza hemagglutinin or herpes simplex virus type-2 (HSV-2) glycoprotein D with PEI elicited robust antibody-mediated protection also superior to the one by existing experimental mucosal adjuvants [36].
Using other injection modes, PEI administered subcutaneously with viral glycoprotein (HIV-1 gp140) also enhanced antigen-specific serum IgG production. PEI elicited higher titers of both antigen-binding and -neutralizing antibodies than alum in mice and rabbits, and induced an increased proportion of antibodies reactive with native antigen. In an intraperitoneal model, PEI recruited neutrophils followed by monocytes to the site of administration and enhanced antigen uptake by antigen-presenting cells [37].
Gold nanorods modified with cationic CTAB, PDDA, and PEI were used as promising DNA vaccine adjuvants for HIV-1 treatment and, curiously, also yielded high IgG1/IgG2a ratios. Among these three cationic molecules, PDDA was the one yielding the higher ratios [43]. Plebanski and coworkers designed magnetic lipoplexes for enhanced DNA vaccine delivery combining PEI and superparamagnetic iron oxide nanoparticles (SPIONs) with the mucoadhesive hyaluronic acid (HA) and a plasmid encoding a malaria antigen, increasing IgG1, IgG2a, and IgG2b [10,18].
In order to ascertain the cell-mediated immunity induced by the PDDA/OVA NPs, the delayed-type hypersensitivity reaction in immunized mice was determined. The DTH response induced by PDDA/OVA was low and equal to the one observed in mice for Al(OH)3/OVA (Figure 8). This reconfirms the Th-2 profile induced by PDDA/OVA NPs. Other Th-1 inducers such as DODAB/OVA or DODAB/CpG/OVA gave footpad swellings above 1.25 mm, in contrast to the value lower than 0.5 mm obtained for PDDA/OVA or Al(OH)3/OVA [8,9,12,15,17]. Thus, the low DTH response agreed with the predominant Th-2 profile for PDDA/OVA NPs.
Recently, we described hybrid NPs made of poly(methyl methacrylate) (PMMA) in the presence of poly(diallyldimethylammonium) chloride (PDDA) using dioctadecyldimethylammonium bromide (DODAB) as an emulsifier and found that DODAB remains attached to the NPs core [56][57][58][59]. This system might bring interesting applications for antigen presentation since DODAB might eventually add the missing Th-1 response to the antigen-presenting NPs.
Conclusions
PDDA/OVA at 0.01 and 0.10 mg·mL−1 assembled as stable cationic nanoparticles in water with Dz = 170 ± 4 nm, ζ = 30 ± 2 mV, and P = 0.11 ± 0.01, which effectively presented OVA to the immune system, inducing a Th-2 type (humoral) response superior to the one induced by Al(OH)3. One of the most important advantages of the PDDA system as an adjuvant is its simplicity; it is a single-component adjuvant. In addition, at the low PDDA concentration used, the cytotoxicity of the cationic polymer became negligible. At and above 0.1 mg·mL−1, significant PDDA cytotoxicity against fibroblasts and macrophages in culture induced holes and cellular debris observable in SEM micrographs of the mammalian cells. PDDA/OVA NPs were nano-sized, stable, and well-dispersed in water without precipitated material. These properties would lead to in vivo optimization of antigen presentation by APC at the lymph nodes, in contrast to the micro-sized Al(OH)3/OVA dispersion, which would remain longer at the site of injection, stimulating local dendritic cells. Determination of the delayed-type hypersensitivity (DTH) reaction from footpad swelling revealed the low cellular response induced by PDDA/OVA, similar to Al(OH)3/OVA. In the future, it will be interesting to evaluate combinations of the antigen with NPs carrying both PDDA and elicitors of the Th-1 response, such as cationic lipid and/or CpG. For continuing this work, the mechanism of PDDA-induced cell death might be investigated from the morphology of the cell nucleus using a fluorescent label to discriminate between necrosis and apoptosis. The location of nanoparticles inside the cell will have to be assessed using an ovalbumin fluorescent label, and CD8+/CD4+ cell proliferation from flow cytometry and cytokine profile determination will reconfirm the Th-1/Th-2 response induced by PDDA/OVA NPs. | 2020-03-04T14:04:35.424Z | 2020-02-28T00:00:00.000 | {
"year": 2020,
"sha1": "938f757b2da1b00708238c9463d06545e9bd2278",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-393X/8/1/105/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "21f4c9d2b43e5467b99d197e0394917980f95bf2",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
208923170 | pes2o/s2orc | v3-fos-license | Poly[[μ10-4,4′-(ethane-1,2-diyldioxy)dibenzoato]dipotassium]
The title salt, [K2(C16H12O6)]n, was obtained by the reaction of 1,2-bis[4-(ethyl-carboxyl)-phenoxyl]ethane with KOH in water. The anion lies on a crystallographic inversion center, which is located at the mid-point of the central C—C bond. The K+ cation is coordinated by six O atoms, two from the chelating carboxylate group of the anion and four from four neighboring and monodentately binding anions, giving rise to an irregular [KO6] coordination polyhedron. The coordination mode of the cation leads to the formation of K/O layers parallel to (100). These layers are linked by the nearly coplanar anions (r.m.s. deviation of 0.064 Å of the carboxyl, aryl and O—CH2 groups from the least-squares plane) into a three-dimensional network.
Data collection: SMART (Bruker, 2001); cell refinement: SAINT (Bruker, 2002); data reduction: SAINT; program(s) used to solve structure: SHELXS97 (Sheldrick, 2008); program(s) used to refine structure: SHELXL97 (Sheldrick, 2008); molecular graphics: XP in SHELXTL (Sheldrick, 2008); software used to prepare material for publication: publCIF (Westrip, 2010).

The coordination chemistry of carboxylic compounds is attracting current attention, based on interesting properties like gas adsorption and separation, catalysis, magnetism, luminescence and host-guest chemistry (Su et al., 2010; Zhu et al., 2008) of these compounds. It is well known that carboxylic acids are excellent building blocks for the construction of coordination polymers, yielding extended frameworks by virtue of their bridging abilities (Ma et al., 2005; Zhang & Chen, 2008). Hence, the present paper aims to promote the search for new metal carboxylate complexes exhibiting special properties in many fields, in particular those bearing multicarboxylic-type ligands, a promising but still rather underdeveloped field of research. We report here the structure of a new polymeric dipotassium dicarboxylate compound (Fig. 1).
In the asymmetric unit, one half of the anion is present. The anion lies on a crystallographic inversion center, which is located at the mid-point of the C8—C8(i) bond (symmetry code (i) = -x-1, -y-1, -z). All bond lengths and angles of the anion are within normal ranges (Allen et al., 1987). The K+ ion is coordinated by six oxygen atoms, two from the ligand and four from neighboring ligands, in a distorted [KO6] polyhedron (Fig. 2), whereby one anion coordinates in total to ten K+ cations. The benzene rings of the anion are parallel to each other with a plane-to-plane distance of 3.488 Å. The carboxyl, aryl and O—CH2 moieties are coplanar with an r.m.s. deviation of 0.0638 Å.
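The quoted r.m.s. deviation from a least-squares plane can be computed from atomic coordinates by a singular-value-decomposition plane fit; a minimal sketch (Python; the coordinates below are hypothetical, not the deposited ones) is:

```python
import numpy as np

def rms_from_lsq_plane(xyz: np.ndarray) -> float:
    """R.m.s. distance (same units as input) of atoms from their best plane."""
    centered = xyz - xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)       # last row = plane normal
    distances = centered @ vt[-1]
    return float(np.sqrt(np.mean(distances ** 2)))

atoms = np.array([[0.00, 0.00, 0.02],        # hypothetical coordinates (Angstrom)
                  [1.39, 0.00, -0.03],
                  [2.08, 1.20, 0.01],
                  [1.39, 2.41, 0.04],
                  [0.00, 2.41, -0.02],
                  [-0.69, 1.20, -0.02]])
print(f"r.m.s. deviation = {rms_from_lsq_plane(atoms):.3f} Angstrom")
```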
A three-dimensional network is spanned owing to the coordination mode of the potassium cations. The K + cations and the O atoms of the carboxylate anions form a layer parallel to (100). These layers are finally connected by the substituted ethane moieties into a three-dimensional structure (Fig. 2).
Experimental
The precursor of the title compound was prepared by a reported procedure (Ma & Yang, 2011). The title compound was synthesized by the reaction of the precursor, diethyl 4,4′-(ethane-1,2-diyldioxy)dibenzoate, with potassium hydroxide under the following conditions: the precursor (1.0 g, 2.8 mmol) and KOH (0.31 g, 5.6 mmol) were placed in water (150 cm3) in a 250 cm3 flask, and the system was stirred for 24 h at 373 K until all solids had dissolved and then cooled down to room temperature.
After filtration, a colorless solution was obtained. Evaporation of the solution gave a white solid (0.82 g, 77%), which was washed twice with ethanol (10 mL each). Slow evaporation of a solution of the title compound in water led to the formation of colorless crystals, which were suitable for X-ray characterization.
Refinement
Hydrogen atoms bonded to the C atoms of the anion were positioned geometrically and refined using a riding model, with C—H = 0.93-0.97 Å and Uiso(H) = 1.2 times Ueq(C). These hydrogen atoms were thus assigned isotropic thermal parameters and allowed to ride on their respective parent atoms.
Poly[[µ10-4,4′-(ethane-1,2-diyldioxy)dibenzoato]dipotassium]
Crystal data

Special details

Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes.

Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. | 2018-04-03T00:05:57.255Z | 2012-03-03T00:00:00.000 | {
"year": 2012,
"sha1": "07d2348c961d8b24a66baa0bb8007f789e7be2e8",
"oa_license": "CCBY",
"oa_url": "http://journals.iucr.org/e/issues/2012/04/00/wm2592/wm2592.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "07d2348c961d8b24a66baa0bb8007f789e7be2e8",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine",
"Computer Science"
]
} |
254618283 | pes2o/s2orc | v3-fos-license | Protective Actions of α-Tocopherol on Cell Membrane Lipids of Paraquat-Stressed Human Astrocytes Using Microarray Technology, MALDI-MS and Lipidomic Analysis
Cellular senescence is one of the main contributors to some neurodegenerative disorders. The early detection of senescent cells or their related effects is a key aspect in treating disease progression. In this functional deterioration, oxidative stress and lipid peroxidation play an important role. Endogenous antioxidant compounds, such as α-tocopherol (vitamin E), can mitigate these undesirable effects, particularly lipid peroxidation, by blocking the reaction between free radicals and unsaturated fatty acids. While the antioxidant actions of α-tocopherol have been studied in various systems, monitoring the specific effects on cell membrane lipids at scales compatible with large screenings has not yet been accomplished. Understanding the changes responsible for this protection against one of the consequences of senescence is therefore necessary. Thus, the goal of this study was to determine the changes in the lipid environment of a Paraquat-treated human astrocytic cell line, as a cellular oxidative stress model, and the specific actions of the antioxidant, α-tocopherol, using cell membrane microarray technology, MALDI-MS and lipidomic analysis. The stress induced by Paraquat exposure significantly decreased cell viability and triggered membrane lipid changes, such as an increase in certain species of ceramides that are lipid mediators of apoptotic pathways. The pre-treatment of cells with α-tocopherol mitigated these effects, enhancing cell viability and modulating the lipid profile in Paraquat-treated astrocytes. These results demonstrate the lipid modulation effects of α-tocopherol against Paraquat-promoted oxidative stress and validate a novel analytical high-throughput method combining cell cultures, microarray technology, MALDI-MS and multivariate analysis to study antioxidant compounds against cellular senescence.
Introduction
Lipid peroxidation is a physiological chain reaction process produced by free radicals [1,2] through enzymatic and non-enzymatic mechanisms [3]. In enzymatic mechanisms, lipid peroxidation can be produced by lipoxygenases (LOX), cyclooxygenases (COX), cytochrome c [4], or cytochrome P450 [5,6], whereas in non-enzymatic mechanisms, the production is mediated by oxygen free radicals [7], such as superoxide (O2•−) or hydroxyl radicals (OH•), among others [8]. However, lipid peroxidation is regulated by a diverse array of antioxidant defense systems, such as the glutathione system or the catalase and superoxide dismutase enzymes [6,9], as well as non-enzymatic compounds, such as vitamin E, beta-carotene or glutathione [6]. If an imbalance between oxidant and antioxidant compounds is produced, it turns into a pathological process, which provokes the modification of physical properties of membranes, such as permeability and the packing of lipids and proteins [10], and a loss of function [4] that leads to cell damage [7].
The non-enzymatic formation of lipid radicals is a three-phase reaction (initiation, propagation, and termination) which primarily occurs in polyunsaturated fatty acids (PUFAs) [11], due to their relatively weak carbon-hydrogen bonds [11,12], where a hydrogen radical (H•) is abstracted from the lipid chain [13]. The lipid radical can react with other molecules, giving rise to lipid hydroperoxides, which are extremely unstable and decompose into secondary products [12,13]. The initiation phase requires an initiator molecule, such as iron or copper ions [8], reactive nitrogen species (RNS), or reactive oxygen species (ROS) [13].
The increased presence of reactive oxygen species entails the pathological process of oxidative stress, which has been related to several diseases [14] such as cancer, cardiovascular and neurodegenerative diseases [15]. As the brain is an organ with a huge energy consumption, it is more susceptible to oxidative stress. In addition, it is rich in phospholipids, and is especially enriched in PUFAs [16]. Oxidative stress generation can originate from different sources, such as NADPH oxidase enzymes [17,18], monoamine oxidase [19], peroxisomes, and lastly, the mitochondrial electron transport chain (mETC) [18,20,21] as the principal generator, primarily by complexes I and III [22].
Paraquat is a highly toxic herbicide that can lead to severe brain damage [23] due to its interference with NADP+ reduction [24,25]. It has also been reported as a complex I and III inhibitor [24]. In the first mechanism, Paraquat ions (PQ2+) are reduced to their monocation radical form, which can react with oxygen and generate ROS [24,26,27], whereas in the latter case, the inhibition of mitochondrial complexes provokes mETC dysfunction. Thus, as the principal toxic effect of Paraquat is related to oxidative stress, its effects may be partially reversed using antioxidant compounds [25,28]. Specifically, α-tocopherol, the predominant form of vitamin E, as well as other tocopherols and tocotrienols, has lipoperoxyl radical scavenging activity [28][29][30][31]. Vitamin E is an effective antioxidant against oxidation mediated by free radicals [28,30]. The mechanism of action of this compound consists in donating a hydrogen atom to lipoperoxyl radicals, generating its own radical [32].
As this antioxidant compound can have beneficial effects under oxidative stress conditions, levels of oxidation subproducts [33], such as monocyte superoxide anion concentrations, have been analyzed in humans [34]. However, the effects on lipids and lipoperoxyls have not yet been examined. In this work, we test the effects of stressing human astrocytic cells with Paraquat and the modification of these changes by the antioxidant α-tocopherol. Cell viability and lipid-profiling are performed. For the latter, we combine our Cell Membrane Microarray (CMMA) technology with MALDI mass spectrometry, as it allows for the simultaneous lipid-profiling of many samples [35]. Our analysis demonstrates the importance of the lipid composition and lipid homeostasis in cell viability upon oxidative stress and opens the door to high-throughput analysis of antioxidant compounds' effects on relevant membrane samples.
Cell Culture and Treatments
The human astrocytic cell line 1321N1 was seeded at 10,000 cells/cm² and cultivated in 12-well plates in complete medium (DMEM medium with 1 g/L glucose, 5% Fetal Bovine Serum (FBS), 1% L-glutamine (L-Glut), and 1% Penicillin-Streptomycin Solution Hybri-Max (P/S)) for 24 h at 37 °C, 5% CO2, and constant humidity. The cells were habituated to low serum conditions (DMEM 1 g/L glucose, 0.2% charcoal-treated FBS, 1% L-Glut, and 1% P/S) for 12 h before treatments began. After adaptation, a pre-treatment of 3 h with or without α-tocopherol (1 µM), prepared in low serum medium, was performed before treatment with or without Paraquat (500 µM), in low serum conditions, for different times (24, 48, 72, 96, and 120 h). Cell viability assays were performed after each time interval as described below.
Viability Assay
To analyze cellular viability under every condition and at every time point, Trypan blue viability assays were performed. The cells were detached from the culture well by a mechanical method to avoid enzymatic treatments (changing the medium for PBS at 4 °C and tapping the plate laterally). The cell suspension was transferred to a microtube and diluted 1:1 with 0.4% Trypan blue solution. The cells were counted in a Neubauer chamber under an inverted Olympus CKX41 microscope (Olympus Corporation, Tokyo, Japan). Dead cells (with a compromised membrane) were stained dark blue. The percentage of live cells with respect to total cells was calculated in order to determine the best time for MS analysis.
Viability data handling and analysis were carried out using Excel and GraphPad software (version 9.2). Briefly, cell viability data were presented as a percentage of cell growth. Outliers were identified by applying the following bounds (SD = standard deviation; DF = deviation factor):

Y1 = mean + DF × SD
Y2 = mean − DF × SD

Points were identified as outliers and excluded if they lay above Y1 or below Y2; we used a deviation factor of 1.25 in our analysis. The data were expressed as means of independent data points ± S.E.M. The results were analyzed using two-way ANOVA with Tukey's post hoc test, and statistical differences were indicated by p-values ≤ 0.05.
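As an illustration, the outlier rule can be expressed in a few lines of code. This is a minimal sketch assuming the bounds Y1 = mean + DF × SD and Y2 = mean − DF × SD and a population standard deviation (ddof = 0); the function name and the replicate values are hypothetical.

```python
import numpy as np

def exclude_outliers(points, deviation_factor=1.25):
    """Drop replicate viability values lying outside [Y2, Y1], where
    Y1 = mean + DF * SD and Y2 = mean - DF * SD (assumed bounds)."""
    points = np.asarray(points, dtype=float)
    mean = points.mean()
    sd = points.std(ddof=0)  # population SD; an assumption of this sketch
    y1 = mean + deviation_factor * sd  # upper bound
    y2 = mean - deviation_factor * sd  # lower bound
    return points[(points >= y2) & (points <= y1)]

# Hypothetical replicate viability percentages at one time point:
print(exclude_outliers([83.0, 84.0, 40.0]))  # -> [83. 84.]
```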
Cell Membrane Extraction and CMMA Fabrication
In order to fabricate Cell Membrane Microarrays (CMMAs), the 1321N1 cell line was seeded at 15,000 cells/cm² and cultivated in 25 cm² culture flasks following the same conditions described previously. The cells were cultivated until 80% confluency was reached (obtaining 10⁶ cells per flask) and then treated as described previously. The cells were detached from the flasks and homogenized using a Teflon-glass grinder (Heidolph RZR 2020, Schwabach, Germany) in 20 volumes of homogenization buffer (1 mM EGTA, 3 mM MgCl2, and 50 mM Tris-HCl, pH 7.4). The crude homogenate was subjected to centrifugation at 1500 rpm (Allegra X-22R centrifuge, Beckman Coulter, CA, USA) for 5 min at 4 °C, and the resulting supernatant was collected and centrifuged at 18,000× g (Microfuge 22R centrifuge, Beckman Coulter, CA, USA) for 15 min (4 °C). With this protocol, a fraction enriched in plasma membrane and in membranes from internal organelles, including mitochondria, was obtained, as demonstrated by detection of the GLUT-4 transporter and the insulin receptor β subunit [36,37], and in previous studies by cytochrome c detection [36] and by mETC enzyme and acetylcholinesterase activity [18,38,39]. The tubes were finally decanted, and the pellets were frozen at −80 °C, with the exception of one aliquot, which was used to determine the protein concentration. The protein concentration was determined by the Bradford method and adjusted to a final concentration of 5 mg/mL [40,41].
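The final adjustment to 5 mg/mL is a simple dilution; a minimal sketch follows, using the relation C1·V1 = C2·V2. The function and its example values are hypothetical, not part of the original protocol.

```python
def dilution_volumes(c_stock_mg_ml, v_final_ml, c_target_mg_ml=5.0):
    """Volumes of membrane stock and buffer needed to reach the target
    protein concentration, from C1 * V1 = C2 * V2."""
    if c_stock_mg_ml < c_target_mg_ml:
        raise ValueError("stock is already below the target concentration")
    v_stock = c_target_mg_ml * v_final_ml / c_stock_mg_ml
    return v_stock, v_final_ml - v_stock

# e.g., a stock measured at 8.2 mg/mL, diluted to 1 mL at 5 mg/mL:
v_stock, v_buffer = dilution_volumes(8.2, 1.0)
print(f"{v_stock * 1000:.0f} uL stock + {v_buffer * 1000:.0f} uL buffer")
```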
The membrane homogenates were resuspended in buffer and printed onto pre-activated glass microscope slides using a non-contact microarrayer (Nanoplotter NP 2.1, GeSiM Bioinstruments, Radeberg, Germany) with a solenoid tip, placing 2 replicates of each sample (30 nL/spot). Printing was carried out under controlled humidity (relative humidity 60%) at a controlled temperature of 4 °C. The CMMAs were stored at −20 °C until usage. The CMMAs were validated before usage by different methods, including Bradford staining for protein determination, enzyme activity assays (NADH oxidoreductase, succinate dehydrogenase, and cytochrome c oxidase), and radioligand binding assays [18,[38][39][40].
MALDI-MS Lipidomic Analysis
The Cell Membrane Microarrays were covered with a suitable matrix with the aid of a standard glass sublimator (Ace Glass 8233, NJ, USA), producing a uniform film of approximately 0.2 mg/cm² [35]. For positive-ion and negative-ion modes, 2-mercaptobenzothiazole (MBT) and 1,5-diaminonaphthalene (DAN) were used, respectively [35,42]. The CMMAs were scanned as in a MALDI imaging experiment. The area of the array was explored following a grid with coordinates separated by 250 µm; as each spot has a diameter of 280 µm, six pixels were recorded at each spot. The mass spectrometer used was an LTQ-Orbitrap XL (Thermo Fisher Scientific, Waltham, Massachusetts, USA) equipped with a MALDI source with an N2 laser (60 Hz, 100 µJ/pulse maximum power output). The laser spot is an ellipsoid of approximately 50-60 µm × 140-160 µm. Two microscans of 10 shots/pixel were used, with a laser power output of 20 µJ for MS+ and 30 µJ for MS−, and a resolution of 250 µm. Data loading included spectra normalization by total ion current (TIC), spectra alignment, and peak picking, filtering out all m/z values with intensity < 0.5% of the strongest peak in the spectrum.
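The TIC normalization and 0.5% intensity filter described above can be sketched as follows; the function is a hypothetical illustration, not the acquisition software's actual routine.

```python
import numpy as np

def preprocess_spectrum(intensities, min_rel_intensity=0.005):
    """Normalize one spectrum by total ion current (TIC) and zero out
    peaks below 0.5% of the strongest peak, as described in the text."""
    intensities = np.asarray(intensities, dtype=float)
    normalized = intensities / intensities.sum()  # TIC normalization
    threshold = min_rel_intensity * normalized.max()
    normalized[normalized < threshold] = 0.0  # filtered peaks
    return normalized
```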
The MALDI spectra were aligned using MATLAB (MathWorks, Natick, Massachusetts, USA), and lipid assignment was performed using the LIPID MAPS LMSD database. For the MALDI data analysis, the MS+ and MS− data were normalized separately and then analyzed together. The matrix peaks and isotopic distributions were removed, and the remaining peaks were normalized against their total ion current (TIC). The MS+ and MS− data were standardized using the z-score method, z = (X − µ)/σ, where X is the observed value, µ is the mean of the sample, and σ is the standard deviation of the sample.
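A minimal sketch of the z-score standardization step, assuming it is applied per peak across samples:

```python
import numpy as np

def z_score(x):
    """Standardize intensities: z = (X - mu) / sigma."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# e.g., one lipid peak's normalized intensity across samples:
print(z_score([0.012, 0.015, 0.011, 0.020]))
```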
Viability of Human Astrocytic Cells upon Different Treatments
The characterization of cell viability in the cultures was performed in order to decide the conditions under which to study the membrane lipid changes by CMMAs. The culture medium was maintained unchanged throughout the treatments, so that the cells would progressively enter senescence. The cultures were monitored until all of the cells were dead.
Paraquat treatment decreased cell viability at all time points with respect to the control, falling from 83% to 68% at 24 h (Figure 1B,C). Moreover, with α-tocopherol pre-treatment, viability improved significantly at 24 h (96%) (Figure 1C). At 72 h, the Paraquat-induced mortality reached 90% (10% viability), and viability was rescued (up to 40%) if the cells were pre-treated with α-tocopherol. No viability differences between the control and the 3-h exposure to α-tocopherol were observed when not combined with Paraquat. The cells in the control conditions reached death (98.6%) at 96 h, while those treated with α-tocopherol only, or with a pre-treatment of α-tocopherol followed by Paraquat, survived longer (reaching death at 120 h).
Differences were observed at 24, 48, and 72 h between the Paraquat-treated and control conditions and between the Paraquat-treated cells with or without α-tocopherol pre-treatment. The time chosen for mass spectrometry was 24 h, at which point viability was not yet severely compromised and the quantitative differences between treatments were largest.
Lipidomic Analysis in Cell Membrane Microarrays Is Able to Reveal Paraquat-Triggered Changes in Human Astrocytic Membranes
We used CMMAs composed of membranes from the human astrocytic cell line 1321N1, obtained under three different conditions: control, Paraquat (500 µM, 24 h), and Paraquat preceded by α-tocopherol (1 µM, 3 h). Membrane arrays were developed for MALDI mass spectrometry analysis of their lipid fingerprint. The lipid fingerprints were compared between membranes from the control and Paraquat-treated cells, and between membranes from Paraquat-treated cells with or without α-tocopherol pre-treatment. Both MS+ and MS− modes were performed. Specific lipid adducts were detected in the Paraquat-treated samples that were absent in the control situation; 66 lipid adducts were present only in the Paraquat-treated membranes with respect to the control situation. Nevertheless, the vast majority of the lipid adducts present in the Paraquat condition changed when the cells were pre-treated with α-tocopherol. The membranes from the Paraquat-treated cells displayed very long chain ceramides (C > 26) Cer 40:0;O3, Cer 40:1;O3, and Cer 40:2;O3 (red arrows in Figure 2A), which were absent in the control membranes, and sphingomyelins generally increased their presence in the Paraquat-treated samples (Figure 2A). Long-chain glycerophosphates (PA with C > 26) (Figure 2B) also displayed a general increase, particularly evident in PA 40:6, a very long chain species with more than three unsaturations. Lipids of the phosphoethanolamine family are present in normal and ether versions (PE and PE O-), in which one of the fatty acids is connected to the glycerol molecule by an oxygen atom. In this regard, PE 36:3, PE 38:3, and PE O-36:2 were only present in the Paraquat-treated membranes (red arrows in Figure 2B), while PE 36:2 appeared to decrease (−17.62% ± 7.2), as was the case for other PEs (Figure 2B). Moreover, two similar lipidic species with opposite behavior upon Paraquat treatment were observed in pairs differing in their degree of unsaturation: e.g., PE 38:2 showed a decrease of −35.7% ± 1.4, whereas PE 38:3 was one of the species present only in Paraquat-treated samples. Finally, oxidized PE O- species of more than 30 carbons generally increased with Paraquat treatment. Very long chain phosphocholines (Figure 2C) increased with Paraquat treatment: the more unsaturations, the higher the change rate. In addition, the lysophosphocholines (lipid adducts with only one fatty acid) LPC 16:0 and LPC 18:1 presented increases of 138% ± 25 and 261.8% ± 43.2, respectively, whereas the LPC 16:0 ether form appeared only upon Paraquat treatment (red arrows in Figure 2C). Phosphocholines with one or two unsaturations were increased in the membranes from the Paraquat-treated cells, while their saturated forms were reduced or absent: e.g., PC 32:1 (189.29% ± 28.1) and PC 32:2 (207.12% ± 27.0) compared with PC 32:0 (−35.74% ± 1.4). Finally, PI adducts were present as very long chain lipids, with a general increase observed in the forms with more unsaturations, while their saturated forms or those with fewer unsaturations were reduced or absent (Figure 2C). Thus, the lipidomic analysis revealed a distinguishable fingerprint in cell membranes due to Paraquat treatment, which we can identify using our CMMAs.
In contrast, saturated glycerophosphates generally increased: the lower the number of double bonds, the higher the change rate (Figure 3B), while phosphoglycerols showed a general decrease due to the antioxidant pre-treatment. Pairs of similar lipidic species with opposite behavior upon α-tocopherol pre-treatment were also observed: PG 36:1 (15.49% ± 20.32) appeared to increase, whereas PG 36:2 did not change (−3.81% ± 3.84). This pattern can also be observed among the different phosphoethanolamine adducts: ether PEs generally decreased, while normal adducts displayed a mild increase. The exception was PE O-40:5, an ether PE present only when the Paraquat treatment was preceded by α-tocopherol.
[Figure legend fragment: phosphoglycerols (PG), glycerophosphocholines and lysophosphocholines (PC and LPC), glycerophosphoinositols and lysophosphoinositols (PI and LPI). Ether forms are indicated by the O- suffix. The oxygen number in the lipid head and fatty acids of ceramides and sphingomyelins is indicated by an O with a suffix.]
To summarize, clear differences triggered by Paraquat in the lipid fingerprint were detected in our system in ceramides, sphingomyelins, glycolipids, and unsaturated phospholipids. In contrast, unsaturated ceramides and phospholipids were reduced when the cells were treated with α-tocopherol before the Paraquat stress (Figure 4).
Discussion
In the present study, we describe a method based on CMMAs to observe lipidome changes in membranes from brain cells that are triggered by prooxidant and antioxidant treatments. The starting point is that α-tocopherol, with its antioxidant properties against membrane lipid peroxidation, could protect from Paraquat damage [43]. We found that cell viability is enhanced when α-tocopherol treatment precedes exposure to Paraquat. It is known that tocopherol and tocotrienol compounds prevent lipid peroxidation through their ROS scavenger properties [44], protecting unsaturated fatty acids and lipid mediators [31]. In addition, α-tocopherol regulates the expression of genes implicated in apoptosis or antioxidant defenses, such as Bcl2-L1 or γ-glutamylcysteine synthetase [44,45], or in lipid homeostasis, such as phospholipase A2 [46], which has 1-O-acyl ceramide synthase activity, using ceramides as an acceptor [47]. Thus, we predicted that lipid changes due to Paraquat exposure could be mitigated by this antioxidant pre-treatment.
We show that changes in the membrane lipid fingerprint between experimental conditions can be analyzed with our CMMA technology using a very small amount of material per sample. This allows for the production of thousands of microarrays from negligible total sample amounts. Moreover, this high-throughput technology allows lipidomic analysis to be performed on many samples at the same time, thereby opening the path to future large-scale screenings. CMMAs have previously been used in different lipidomic studies to analyze the lipid fingerprint of nerve and peripheral tissue in animal models [35,40]. An additional advantage of this technology is that protein expression or activity assays can be performed, as the protein-lipid relationship and protein functionality are maintained [35,[39][40][41][42]. Thus, the combination of membrane microarray technology with mass spectrometry results in a powerful technique to determine the effects of different compounds on lipid composition.
It is known that the ceramide content remains low in non-dividing cells but can increase upon exposure to stress conditions, such as serum starvation, chemical compound exposure, or oxidative stress [48]. These sphingolipids, composed of a sphingosine group and a fatty acid, are primarily produced by de novo synthesis [49], the salvage pathway, or sphingomyelinase pathways [50]. We were able to detect an increase in ceramides in Paraquat-treated membranes, as all adducts increased their presence, whereas, with α-tocopherol pre-treatment, the unsaturated ceramides decreased with respect to their values under Paraquat treatment, or remained absent. These bioactive lipid adducts can participate in diverse signaling pathways themselves or via their hydrolysis to sphingosine by ceramidase enzymes [51]. The increased levels observed in the membranes from the Paraquat-treated cells, with respect to the control, can act as second messengers in apoptosis pathways [52,53], increasing cell death, as observed in Figure 1. Moreover, the increase in ceramide production might be due to an activation of the de novo pathway, and this activation may lead to an increase in other sphingolipids by ceramide transformation, such as sphingomyelins [49], through the action of the SMS enzyme. However, the detected ceramides had ultra-long chain (ULC) fatty acids, the presence of which is related to inflammation and different diseases [54], including neurodegenerative disorders [54,55]. With α-tocopherol exposure, sphingomyelin adducts, such as SM 35:1;O2, either increase or appear as new species with respect to the membranes from cells treated only with Paraquat. In contrast, some ceramide species decrease with antioxidant pre-treatment (Figure 3), which can lead to higher cell survival (Figure 1).
Phospholipids are particularly sensitive to oxidative stress. Phosphoethanolamines and their ethers present changes upon Paraquat treatment. The observed increase in phosphoethanolamine ethers (PE O-) might be an adaptive response of the cell, as they have antioxidant activity [56]. They can be precursors of plasmalogens, lipid species whose oxidative products do not propagate lipid peroxidation [57], protecting other phospholipids, lipids, and lipoproteins from oxidative stress [58]. The observed decrease in most ether PE species when α-tocopherol precedes Paraquat treatment can reflect a return to basal conditions, in which this antioxidant mechanism is not necessary [59]. The antioxidant properties of the PE O- species are mainly attributed to the preferential oxidation of the vinyl ether bond, which results in the protection of the polyunsaturated fatty acids [60]. In addition, in agreement with these observations, Paraquat treatment results in greater increases in the ether forms of phosphatidylcholines, such as LPC O-16:0 and PC O-32:2, than in their non-ether analogs (Figure 2C). Furthermore, as shown in other studies [59], an increased presence of PE 38:3 is observed in cell membranes upon Paraquat treatment; this species appears only in Paraquat-treated samples, with or without α-tocopherol pre-treatment. Nevertheless, these lipid adducts containing long-chain polyunsaturated fatty acids (LCPUFAs) intervene in brain inflammatory reactions. In Paraquat-treated membranes, PCs containing LCPUFAs are increased, while with α-tocopherol pre-treatment, some of them disappear [61].
We detected an increase in LPI 20:4 when Paraquat treatment was preceded by α-tocopherol pre-treatment. This lipidic adduct, which contains arachidonic acid (AA), might increase in the membranes owing to lower AA release from the membrane phospholipids. Such an effect can be mediated by a reduction in phospholipase A2 activity [62], as α-tocopherol is known to be a modulator of phospholipase A2 [46], or by an increase in phospholipase A1 or 2-acyl LPI transferase activity [63]. In addition, excessive free AA has been related to neurotoxic effects [64]; thus, a higher relative content of lipid adducts containing AA may be related to a lower free AA content. This lipid adduct can be an agonist of GPR55, a G-protein coupled receptor whose biological activities include the modulation of immune cells and insulin secretion, as well as potential mitogenic activity in cancer cells [63]. In contrast, lysophosphocholines stimulate intracellular ROS production and ATM/Chk2, ATR/Chk1, and Akt activation in endothelial cells [65]. LPC 16:0, which contains a palmitoyl moiety, is present in the samples treated with Paraquat and increases slightly when samples are pre-treated with α-tocopherol. As palmitoylation is particularly important for cell membrane stabilization [66,67], its increase in our samples may support cellular viability (Figure 1).
In conclusion, the data obtained from the MALDI mass spectrometry performed with our CMMA system on human astrocytic cell membranes after prooxidant and antioxidant treatments provide distinguishable and meaningful lipid fingerprints. These differences emerge from the relative increase in ultra-long chain glycerophospholipids, unsaturated ceramides, and lysophospholipids caused by Paraquat treatment. Tocopherol pre-treatment changes these effects by reducing the presence of unsaturated ceramides, PIs, and PE adducts, and increasing LPC and LPI, which contain palmitoyl and arachidonic acid, respectively. A correct membrane lipidome is essential for proper membrane fluidity and functionality; therefore, the effects of antioxidant compounds on it might exert an important influence on cell viability. Moreover, CMMA technology allows this MS analysis to be performed on membranes with a negligible sample amount, in contrast to conventional MS approaches. Future high-throughput studies will allow the simultaneous identification of compounds influencing lipid profiles and of their adverse or beneficial effects on cell membranes. | 2022-12-14T16:18:02.114Z | 2022-12-01T00:00:00.000 | {
"year": 2022,
"sha1": "dda48542b194b55a33ccab60b6056b8945f186d2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3921/11/12/2440/pdf?version=1670666673",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3f97cba559561e7b5188e8a09b168fc8033ba8a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
245399742 | pes2o/s2orc | v3-fos-license | Construction and Application of Reservoir Flood Control Operation Rules Using the Decision Tree Algorithm
Current conventional and optimal reservoir flood control operation methods insufficiently utilize historical reservoir operation data, which include rainfall, runoff generation, and inflow from the watershed, as well as the operational experience of decision makers over many years. Therefore, this study proposed and evaluated a new method for extracting reservoir flood control operation rules from historical operation data using the C4.5 algorithm. Thus, in this paper, the C4.5 algorithm is first introduced; then, the generation of the flood control operation dataset, the construction of decision tree-based (DT-based) rules, and the subsequent design of a real-time operating scheme are detailed. A case study of the Rizhao Reservoir is then employed to demonstrate the feasibility and even superiority of the operating scheme formulated using DT-based rules. Compared with previously proposed conventional and optimal reservoir operation methods, the DT-based method has the advantages of strong and convenient adaptability, enabling decision makers to effectively guide real-time reservoir operation.
Introduction
Flood disasters are currently among the major global problems faced by human society. From 1989 to 2018, 3945 major flood disasters occurred around the world, with China, India, the United States, and Indonesia experiencing the largest number: about 1200 in total [1]. There were 109 flood disasters worldwide in 2018, causing 1995 deaths, affecting 12.62 million people, and resulting in $4.5 billion in direct economic losses [2]. Although global flood deaths and affected populations have shown a continuous downward trend over the past 30 years, economic losses have shown an upward trend. Owing to the frequency of and significant economic losses associated with flood disasters, a considerable number of water conservation projects have been undertaken to reduce the adverse effects of floods. Among these, reservoirs are created by constructing a dam across a river. However, with ongoing socioeconomic development, the purpose of the reservoir has expanded from guaranteeing the flood control safety of the river to including the provision of power generation, water supply, irrigation, ecological environment maintenance, navigation, sediment control, recreation, fisheries, etc. As of April 2020, 58,713 large dams have been constructed worldwide [3]. According to a report by the World Commission on Dams, the improvement in operation and maintenance of existing dams offers opportunities to address local or regional development and to minimize social and environmental impacts [4]. To do so, it is necessary to implement a scientific and reasonable reservoir flood control operation strategy. Thus, the goal of reservoir flood control operation studies is to define an optimal operation policy for a given reservoir that balances its various purposes [5]. This policy represents a powerful tool for the guidance of reservoir operation, serving not only as a decision-making reference during the planning and design of a water conservancy project but also as a key to realizing the comprehensive benefits of the reservoir during its operation.
Existing reservoir flood control operation methods can be roughly divided into two groups: conventional operation methods and algorithmically optimized operation methods. Conventional reservoir operation methods are semi-empirical and semi-theoretical and are presented in the form of flood control operation graphs or tables. During real-time flood control operation, operational decisions (e.g., reservoir discharge or hydroelectric power generation) during each period are specified as a function of the appropriate available information (e.g., the current or previous reservoir water level, current or previous reservoir inflow, and time of year) [6]. Such conventional methods have been widely used in reservoir operations owing to their intuitive and practical structure; however, they often fail to consider the latest operational data or to address the complex nonlinearity that exists between the relevant dependent and independent variables [7]. Thus, as the complexity and interdependency of the systems considered in reservoir management increase, it becomes more difficult to obtain an optimal operating scheme using conventional methods.
Optimization algorithms have therefore been increasingly applied to formulate reservoir flood control operation strategies, effectively addressing the shortcomings of conventional methods [8]. In the past decades, a wide range of optimization algorithms have been proposed that can generally be classified into conventional optimization algorithms and heuristic intelligent algorithms. Conventional optimization algorithms include linear programming [9][10][11], nonlinear programming, dynamic programming (DP) [12][13][14], and progressive optimality algorithm (POA) [15,16] approaches, as well as various improvements thereof, such as the multi-stage DP [17,18], incremental DP [19,20], stochastic DP [21], parallel DP [22], and DP combined with POA (DP-POA) [23] methods. However, when faced with a sufficiently complex flood control system composed of multiple reservoirs, flood storage and detention areas, lakes, and other infrastructure, conventional optimization algorithms have obvious limitations, including low convergence efficiency and the "curse of dimensionality". For example, as the number of reservoirs increases, the computational scale of DP increases exponentially. To address such issues, modern computing technology has enabled the development of heuristic intelligent algorithms based on artificial intelligence, resulting in general-purpose stochastic search methods that simulate natural selection and biological evolution. As they can be directly applied to complex problems with nonlinear, discontinuous, non-differentiable, and multi-dimensional characteristics, they have been widely used to optimize flood control operations. At present, the most common heuristic intelligent algorithms include the genetic algorithm [24,25], non-dominated sorting genetic algorithm [26][27][28], particle swarm optimization [29,30], ant colony optimization [31,32], artificial neural network [33,34], support vector machine [35], simulated annealing [36], immune-inspired optimization [37], evolutionary algorithm [38,39], cultured evolutionary algorithm [40,41], and honey-bee mating optimization algorithm [42]. However, although heuristic intelligent algorithms can determine an optimal operating policy, many problems remain in their practical application. The flaws intrinsic to most heuristic intelligent algorithms include premature convergence owing to local fast convergence, poor local search capability owing to a large number of global searches, and long iteration times [43]. Furthermore, the solutions provided by most algorithms are limited by the available calculation time as well as by the constraints associated with certain optimizations [44].
At present, both types of flood control operation methods insufficiently utilize historical reservoir operation data. Importantly, these data contain not only the characteristics of and laws describing runoff generation and inflow from the watershed but also the vast experience of reservoir managers, which provides information supporting operating decisions under different inflow scenarios. As a considerable body of reservoir operation data has been accumulated by water conservation departments, the use of data mining technology to extract flood control operation rules from these data offers a new method for real-time reservoir flood control operation. The decision tree (DT) algorithm is the most commonly used data mining model, as it creates decision rules and classification results following a tree structure [45]. The DT algorithm has the advantages of being easier to understand, being easier to implement, and requiring relatively less workload than other approaches. Therefore, it has been widely used to address water conservation problems such as flood forecasting [46,47], flood or drought risk assessment [48][49][50][51][52][53], flood or drought classification [54,55], water quality prediction [56,57], inter-basin water transfer dispatching [58], water level prediction [59,60], and hydropower station power generation dispatching [61]. Noymanee and Theeramunkong [46] adopted boosted decision tree regression to forecast flood water levels in real time and achieved high forecasting accuracy. Nafari et al. [49] confirmed that tree-based models were more accurate than the Australian stage-damage function in a flood risk assessment. Sikorska et al. [54] presented a flood classification for identifying flood patterns at the catchment scale by means of a fuzzy decision tree, and the results showed that this method bore additional potential for analyses of flood patterns. Xi et al. [58] used the decision tree method to determine the diversion amount according to inter-basin water transfer rules. Parvez et al. [61] proved that the C4.5 algorithm was more feasible for rapidly generating the schedules of cascaded hydropower plants. However, it has rarely been used in reservoir flood control operations. This study aims to formulate reservoir flood control operation rules using the DT algorithm.
The remainder of this paper is organized as follows: Section 2.1 describes the DT algorithm, Section 2.2 presents the construction of flood control operation rules using the DT algorithm (DT-based rules), and Section 2.3 designs a real-time operation procedure using these DT-based rules; Section 3 then introduces and discusses the results of a case study application of the proposed DT algorithm; and Section 4 provides a summary of the conclusions.
DT Algorithm
The DT algorithm, first proposed in the 1960s, is a greedy local search algorithm that first analyzes and processes historical data to construct a DT through top-down induction and then uses the DT to analyze new data [45]. A DT is usually constructed beginning at the top of the tree and proceeding down into the branches, each of which represents a decision, and then into the leaves (or nodes), each of which is assigned a classifier value. Typical DT algorithms include the iterative dichotomiser 3 (ID3) [62], C4.5 [63], and classification and regression tree (CART) [64] algorithms.
The ID3 algorithm uses the information gain as the splitting criterion for the DT to realize the induction and classification of the data. This has the advantages of a concise and clear basic theory as well as a strong learning ability. However, the ID3 algorithm has several drawbacks: it does not consider numerical attributes, missing attribute values are not taken into account, no pruning process is included, and it does not handle data with high dimensionality [65]. The C4.5 algorithm is an enhanced version of the ID3 algorithm that applies the information gain ratio rather than the information gain itself as the standard for attribute selection. This addresses several of the shortcomings of the ID3 algorithm by treating numerical attributes, working with missing values, and introducing a pruning process [65]. The CART algorithm employs a binary induction method; that is, the DT generated by this algorithm is in the form of a binary tree. However, the C4.5 algorithm can handle continuous attributes more effectively than the CART algorithm. Therefore, the C4.5 algorithm was used to construct the DT-based flood control operation rules in this study. The specific steps of this algorithm are as follows.
Suppose that p is the number of samples in the set S; that the class label attribute takes c different values, defining classes P_i (i = 1, 2, ..., c); and that p_i is the number of samples in class P_i. The information entropy of S is thus given by

$Info(S) = -\sum_{i=1}^{c} \frac{p_i}{p} \log_2\left(\frac{p_i}{p}\right)$. (1)

Suppose that the attribute A has m different values {a_1, a_2, ..., a_m}. Set S can thus be divided into m subsets {U_1, U_2, ..., U_m}, where U_j contains the samples of S whose value of A is a_j, and p_ij is the number of samples of class P_i in U_j. The entropy, or expected information, of the subsets divided by A is

$E(A) = \sum_{j=1}^{m} w_j \, Info(U_j)$, (2)

where $w_j = \frac{p_{1j} + p_{2j} + \cdots + p_{cj}}{p}$ is the weight of the jth subset. The smaller the entropy value, the higher the purity of the subset.
To determine whether the selected attribute A can effectively reduce the overall entropy, the information gain of attribute A is defined as

$Gain(A) = Info(S) - E(A)$, (3)

in which a higher Gain(A) indicates a larger reduction in entropy and therefore a better attribute. The information gain ratio R(A) is then defined as the ratio of Gain(A) to the split information of A:

$R(A) = \frac{Gain(A)}{SplitInfo(A)}, \quad SplitInfo(A) = -\sum_{j=1}^{m} \frac{|U_j|}{p} \log_2\left(\frac{|U_j|}{p}\right)$. (4)

Calculating the R(A) of each attribute in the dataset using Equation (4), the attribute with the largest R(A) is taken as the split attribute to create nodes and divide branch samples until all samples under a given node belong to the same category or the attributes can no longer be divided. Finally, reservoir flood control operation rules are constructed based on the derived DT (DT-based rules) by considering the various factors influencing reservoir discharge.
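Equations (1)-(4) can be illustrated with a short sketch; the toy attribute and class values below are hypothetical and not drawn from any reservoir dataset.

```python
import math
from collections import Counter

def entropy(labels):
    """Info(S): -sum of (p_i / p) * log2(p_i / p) over classes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """C4.5 criterion R(A) = Gain(A) / SplitInfo(A) for one attribute."""
    n = len(labels)
    expected, split_info = 0.0, 0.0
    for a in set(values):
        subset = [lab for v, lab in zip(values, labels) if v == a]
        w = len(subset) / n
        expected += w * entropy(subset)        # E(A), Equation (2)
        split_info -= w * math.log2(w)         # SplitInfo(A)
    gain = entropy(labels) - expected          # Gain(A), Equation (3)
    return gain / split_info if split_info > 0 else 0.0

# Toy check: does the season attribute separate discharge classes?
season = ["flood", "flood", "non-flood", "non-flood", "flood"]
discharge = ["high", "high", "low", "low", "low"]
print(round(gain_ratio(season, discharge), 3))  # ~0.433
```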
Construction of DT-Based Flood Control Operation Rules
Generation of Reservoir Flood Control Operation Dataset
The main factors affecting reservoir discharge include flood occurrence time, rainfall, net rainfall, reservoir water level, rate of inflow, and volume of inflow. In regions with an uneven distribution of rainfall throughout the year, the rainfall volume and intensity in the flood season are usually high, resulting in large discharges, whereas there is little rain in the non-flood season, resulting in small discharges. Therefore, the occurrence time of flooding is related to reservoir discharge. The water level reflects the water stored in the reservoir; the higher the water level, the greater the discharge required during flooding to ensure the safety of the dam. The total rainfall, net rainfall, rate of inflow, and volume of inflow reflect the amount and intensity of inflow into the reservoir and are directly proportional to the required discharge; that is, the larger these parameters, the greater the required discharge, and vice versa. This analysis indicates that a reservoir flood control operation dataset includes many attributes, among which the discharge in each period is the decision attribute, whereas the others are conditional attributes.
Construction of DT-Based Rules
To construct the reservoir operation rules using the C4.5 algorithm, all floods in the dataset are first divided into training and verification samples. Then, the attribute values for each training sample period are calculated to construct the flood control operation dataset. Finally, the C4.5 algorithm is used to extract operation rules from this dataset.
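A minimal sketch of this train-then-read-rules workflow is shown below. Note that scikit-learn's DecisionTreeClassifier implements CART (its "entropy" criterion uses information gain, not the C4.5 gain ratio), so this only illustrates the workflow; the attribute encodings and values are hypothetical, not the actual dataset.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training table: one row per flood period.
# Columns: hydrological stage, initial water level class,
# cumulative net rainfall class; target: discharge class.
X = [[1, 0, 0], [2, 1, 2], [2, 2, 3], [0, 0, 1], [1, 1, 1], [2, 2, 1]]
y = ["Q1", "Q3", "Q4", "Q1", "Q2", "Q3"]

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(tree, feature_names=["stage", "level", "net_rain"]))
```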
Real-Time Operation Procedure
Given a certain verification sample, the procedure for real-time operation is as follows:
Step 1: Let i = 1.
Step 2: Use the conditional attributes in flood period i as inputs to the DT-based rules to obtain the discharge qi in period i, and calculate the water level zi in period i using the water balance equation.
Step 3: To ensure downstream safety, confirm that qi is less than q*, the maximum discharge of the verification sample as regulated by the conventional operation rules. If this is true, proceed to Step 4; otherwise, set qi = q*.
Step 4: If qi is less than q(zi), the reservoir discharge capacity when the water level is zi, proceed to Step 5; otherwise, set qi = q(zi).
Step 5: If i < T, where T is the verification sample period count, let i = i + 1 and return to Step 2; otherwise, end the operation.
The real-time operation procedure is illustrated in Figure 1.
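The five-step loop can be sketched in code as follows. Everything here, including the toy level-storage curve, the stand-in DT rules, and the numbers, is hypothetical; only the structure (Steps 1-5 and the water balance update) follows the procedure above.

```python
def real_time_operation(inflows, initial_volume, dt, dt_rules, q_star,
                        discharge_capacity, level_of):
    """Sketch of the Figure 1 procedure; all callables are assumed
    interfaces standing in for the DT-based rules and reservoir curves."""
    v = initial_volume
    schedule = []
    for i, inflow in enumerate(inflows):       # Steps 1 and 5: i = 1..T
        z = level_of(v)
        q = dt_rules(i, z)                     # Step 2: DT-based discharge
        q = min(q, q_star)                     # Step 3: downstream safety
        q = min(q, discharge_capacity(z))      # Step 4: physical capacity
        v += (inflow - q) * dt                 # water balance equation
        schedule.append((q, level_of(v)))
    return schedule

# Hypothetical stand-ins and values:
print(real_time_operation(
    inflows=[800.0, 1500.0, 1200.0, 600.0],    # m³/s per 6-h period
    initial_volume=2.0e8, dt=6 * 3600,         # m³, s
    dt_rules=lambda i, z: 400.0 if z < 42.0 else 900.0,
    q_star=1000.0,
    discharge_capacity=lambda z: 50.0 * max(z - 35.0, 0.0),
    level_of=lambda v: 35.0 + v / 2.5e7,       # toy level-storage curve
))
```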
Study Area
The Rizhao Reservoir was used as a case study for the application of the proposed DT-based rules. The reservoir is located 16 km west of Donggang District, Rizhao City, Shandong Province, China, in the upper and middle reaches of the Futuan River. Construction of the Rizhao Reservoir began in October 1958 and was completed in June 1959. It is a large type-II reservoir with multi-year regulation used mainly to provide flood control and irrigation in combination with aquaculture, power generation, water supply, and other secondary objectives. The climate of the Rizhao Reservoir basin generally exhibits the characteristics of humid and semi-humid regions. Other basic parameters of the reservoir are shown in Table 1. To ensure safety downstream of the Rizhao Reservoir, two control discharges and two high-volume discharge states have been established: when the water level Z ≤ 43.46 m, the control discharge is 1000 m³/s; when 43.46 m < Z ≤ 43.79 m, the control discharge is 1900 m³/s; when 43.79 m < Z ≤ 44.02 m, the spillway sluices are completely opened; and when Z > 44.02 m, the spillway sluices and the north water release tunnel are both completely opened.
Flood Control Operation Dataset for the Rizhao Reservoir
Based on the historical operation data, 44 floods in 1970, 1974, 1975, 1976, 1998, and 2001-2021 were used as operation data, all of which required discharge through the spillway sluices. As the Rizhao Reservoir has high regulation performance and ensures downstream safety, the flood volume plays a major role in flood operation. According to the climate characteristics of the reservoir basin, precipitation is unevenly distributed throughout the year. The flood season lasts from June to September, during which approximately 80% of annual precipitation is received; approximately 60% of annual precipitation falls in July and August alone. The non-flood season lasts from November to April of the following year, during which little precipitation is received. Considering the above characteristics, the flood occurrence time, reservoir water level, cumulative net rainfall, and discharge were taken as the attributes of the flood control operation dataset; the first three were defined as conditional attributes and the last as the decision attribute.
Construction of DT-Based Rules for the Rizhao Reservoir
Forty floods occurring in 1970, 1974, 1975, 1976, 1998, and 2001-2018 were included in the training sample, while four floods occurring in 2019-2021 were used as verification samples. First, the flood occurrence times, initial water levels, cumulative net rainfall, and discharges of the 40 floods constituting the training sample were sorted and classified for use as the flood control operation dataset. When the information gain rate was at its maximum, the classifications of flood occurrence time, initial water level, cumulative net rainfall, and discharge were as shown in Tables 2 and 3. Owing to the uneven distribution of precipitation in the Rizhao Reservoir basin, its hydrological year was divided into three stages: June and September, July to August, and October to May of the following year, as shown in Table 2. Finally, the flood control operation dataset was input into the C4.5 algorithm to generate the DT-based rules for Rizhao Reservoir operation, as shown in Figure 2. It can be seen in Figure 2 that (1) for the same initial water level and cumulative net rainfall, the discharge in the flood season is greater than that in the non-flood season, and (2) the higher the initial water level and cumulative net rainfall, the greater the discharge.
Results and Discussion
The discharges and water levels of the four verification samples were obtained according to the real-time operation procedure depicted in Figure 1 and are plotted together with the measured inflows, discharges, and water levels in Figures 3-6. The maximum discharge and water level are listed in Table 4, in which ① represents the flood regulation results obtained using the DT-based rules, ② represents the measured values, and ③ represents the flood regulation results based on the conventional operation rules. The relative errors of the maximum discharge and maximum water level reported in Table 4 were determined, respectively, by

$RE_q = \frac{|q_{maxDT} - q_{max}|}{q_{max}} \times 100\%$ and $RE_z = \frac{|z_{maxDT} - z_{max}|}{z_{max}} \times 100\%$,

where q_maxDT is the maximum discharge according to the DT-based rules; z_maxDT is the maximum water level according to the DT-based rules; q_max is the measured maximum discharge or the maximum discharge determined by the conventional operation rules; and z_max is the measured maximum water level or the maximum water level determined by the conventional operation rules.
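For illustration, the relative-error computation reads as follows; the reference value of 2880 m³/s is back-calculated from the reported 58.33% and is therefore a hypothetical number, not a figure stated in the paper.

```python
def relative_error(dt_value, reference):
    """Relative error (%) of a DT-based maximum against a reference
    (measured or conventional-rule) maximum."""
    return abs(dt_value - reference) / reference * 100.0

# 12 Aug 2019 flood: q_maxDT = 1200 m³/s versus a hypothetical
# conventional-rule maximum of 2880 m³/s reproduces the 58.33%:
print(round(relative_error(1200.0, 2880.0), 2))  # -> 58.33
```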
The results shown in Table 4 are discussed below for each verification sample.
(1) For the 12 Aug 2019 flood, the maximum discharge of the operating scheme formulated using the conventional operation rules was the largest, followed by that of the operating scheme formulated using the DT-based rules, while the measured maximum discharge was the smallest; the relative error between ① and ② was small (5.0%), whereas the relative error between ① and ③ was large, reaching 58.33%. The measured maximum water level was the largest, followed by the maximum water level of the operating scheme formulated using the DT-based rules, while that of the operating scheme formulated using the conventional operation rules was the smallest; the relative errors between ① and ② and between ① and ③ were both small (0.3% and 0.48%, respectively). It can be observed that the operating scheme formulated using the conventional operation rules was the best in terms of reservoir safety because its maximum water level was the smallest. However, this condition was the most unsafe downstream because the corresponding maximum discharge was the largest. The measured operating scheme was the best in terms of downstream safety because its maximum discharge was the smallest. However, this condition exhibited the highest maximum water level, indicating that it was the worst operating scheme in terms of reservoir safety. The maximum discharge and maximum water level provided by the operating scheme formulated using the DT-based rules were between those of the measured values and of the operating scheme formulated using the conventional operation rules. At the same time, the maximum discharge of 1200 m³/s provided by the operating scheme formulated using the DT-based rules remained below the controlled discharge of 1900 m³/s, indicating that it realized a suitable compromise and is therefore feasible. (2) For the 22 July 2020 flood, the maximum discharge of the operating scheme formulated using the conventional operation rules was the largest, followed by the measured maximum discharge, while the maximum discharge of the operating scheme formulated using the DT-based rules was the smallest; the relative error between ① and ② was small (24.23%), whereas the relative error between ① and ③ was large, reaching 137.53%. The measured maximum water level was the largest, followed by the maximum water level of the operating scheme formulated using the conventional operation rules, while that of the operating scheme formulated using the DT-based rules was the smallest; the relative errors between ① and ② and between ① and ③ were both small (0.36% and 0.07%, respectively). It can be observed that the operating scheme formulated using the DT-based rules was the best in terms of both reservoir safety and downstream safety because its water level and discharge were simultaneously the smallest. At the same time, the discharge of 421 m³/s dictated by the operating scheme formulated using the DT-based rules was less than the control discharge of 1000 m³/s, confirming that this operating scheme was indeed the best among the three evaluated for this flood. (3) For the 13 Aug 2021 flood, the maximum discharge of the operating scheme formulated using the conventional operation rules was the largest, followed by the measured maximum discharge, while the maximum discharge of the operating scheme formulated using the DT-based rules was the smallest; the relative error between ① and ② was small (4.34%), whereas the relative error between ① and ③ was large, reaching 72.41%. The maximum water level of the operating scheme formulated using the
DT-based rules was the largest, followed by the measured maximum water level, while that of the operating scheme formulated using the conventional operation rules was the smallest; the relative errors between ① and ② and between ① and ③ were both small (0.02% and 0.56%, respectively). It can be observed that the operating scheme formulated using the conventional operation rules was the best in terms of reservoir safety because its maximum water level was the smallest. However, this condition was the most unsafe downstream because the corresponding maximum discharge was the largest. The operating scheme formulated using the DT-based rules was the best in terms of downstream safety because its maximum discharge was the smallest. However, this condition exhibited the highest maximum water level, indicating that it was the worst operating scheme in terms of reservoir safety. The difference between the maximum water level provided by the operating scheme formulated using the DT-based rules and the measured maximum water level is very small, i.e., 0.01 m, and the maximum discharge of 580 m³/s is below the controlled discharge of 1000 m³/s, indicating that it is a feasible scheme.
(4) For the 26 Aug 2021 flood, the maximum discharge of the operating scheme formulated using the conventional operation rules was the largest, followed by the measured maximum discharge, while the maximum discharge of the operating scheme formulated using the DT-based rules was the smallest; the relative error between ① and ② was small (0.32%), whereas the relative error between ① and ③ was large, reaching 218.46%. The measured maximum water level was the largest, followed by the maximum water level of the operating scheme formulated using the DT-based rules, while that of the operating scheme formulated using the conventional operation rules was the smallest; the relative errors between ① and ② and between ① and ③ were both small (0.09% and 0.14%, respectively). It can be observed that the operating scheme formulated using the conventional operation rules was the best in terms of reservoir safety because its maximum water level was the smallest. However, this condition was the most unsafe downstream because the corresponding maximum discharge was the largest. The operating scheme formulated using the DT-based rules was the best in terms of downstream safety because its maximum discharge was the smallest.
The maximum water level provided by the operating scheme formulated using the DT-based rules was between the measured value and that of the operating scheme formulated using the conventional operation rules. At the same time, the maximum discharge of 314 m³/s provided by the operating scheme formulated using the DT-based rules remained below the controlled discharge of 1000 m³/s, indicating that it is a feasible scheme.
In summary, the operating scheme formulated using DT-based rules was shown to be feasible and, in some cases, better than the actual operating scheme and a scheme formulated using conventional operation rules.
Conclusions
In this paper, the DT algorithm was applied to formulate reservoir flood control operation rules that fully consider the influence of reservoir management experience, climate factors, and the subsurface conditions of the watershed on discharge, yielding a fast and effective operating scheme that responds to various inflow scenarios in different hydrological periods. The following conclusions were obtained from this study: (1) The C4.5 algorithm was used to construct DT-based flood control operation rules for a reservoir. This algorithm has the advantages of easy implementation and strong operability and fully considers the influence of the climate and underlying surface conditions of the watershed, as well as the operating experience of management, in the process of constructing an operating scheme. (2) As can be seen from the results of the four verification samples, the maximum discharges of the operating schemes formulated using the DT-based rules for flood numbers 22 July 2020, 13 Aug 2021, and 26 Aug 2021 are the smallest; the maximum discharge for flood number 12 Aug 2019 is smaller than that of the operating scheme formulated using the conventional operation rules and only 5% larger than the measured maximum discharge. The maximum water levels of the operating schemes formulated using the DT-based rules for flood numbers 12 Aug 2019 and 26 Aug 2021 are between the measured values and those of the operating scheme formulated using the conventional operation rules; the maximum water level for flood number 22 July 2020 is the smallest, and the maximum water level for flood number 13 Aug 2021 is only 0.02% larger than the measured maximum value. To sum up, the operating scheme formulated using the DT-based rules is feasible and, in some cases, superior to the actual operating scheme and to an operating scheme based on conventional operation rules. Compared with optimization algorithms, DT-based rules have the advantages of strong and convenient adaptability, allowing decision makers to guide real-time reservoir operation. Therefore, the DT-based method for constructing reservoir flood control operation rules proposed in this paper can provide practical guidance for the real-time operation of reservoirs.
Figure 1. The flowchart of the real-time reservoir operation procedure using DT-based rules.
Table 1. Basic parameters of the Rizhao Reservoir.
Table 2. Classification of cumulative net rainfall.
Table 3. Classification of all attributes. Note: Z_limit represents the flood limited water level.
Table 4. Maximum discharges and water levels according to operating schemes. 12 Aug 2019 is the flood number, which represents that the rainfall of the flood began on 12 August 2019. | 2021-12-23T16:14:53.638Z | 2021-12-20T00:00:00.000 | {
"year": 2021,
"sha1": "60b5b76bcb5ba70727900ef9f6df7a5d1a5ad09a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/13/24/3654/pdf?version=1639989052",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "748577400eb8fc9cd77c8e8b27aadca30943ec81",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
56145354 | pes2o/s2orc | v3-fos-license | Correlation between Vitamin D Receptor and Monocyte Chemotactic Protein-1 Polymorphisms and Spontaneous Bacterial Peritonitis in Decompensated Liver Disease
AIM: In this study, we intended to investigate whether MCP-1 and vitamin D receptor (VDR) polymorphisms are associated with an increased incidence of spontaneous bacterial peritonitis (SBP) in cirrhotic patients. Additionally, we aimed to assess the levels of vitamin D, MCP-1 and other biochemical markers in such patients. MATERIALS AND METHODS: A prospective case control study was performed on sixty (60) patients with post-hepatitis C virus decompensated liver disease, divided into Group I (n=30), patients with SBP, and Group II (n=30), patients without SBP, plus a control group of 30 healthy volunteers. Serum MCP-1 and VDR polymorphisms were estimated by real time PCR and serum vitamin D by ELISA for the patient and control groups; the ascitic fluid level of MCP-1 was assessed by real time PCR for the patient groups; and quantitative gene expression of MCP-1 in serum and ascitic fluid was performed for the patient groups. RESULTS: Our study revealed that the MCP-1 and VDR heterozygote genotypes, AG in MCP-1 and Ff in VDR, were mostly linked to SBP patients. Conversely, one homozygous genotype was linked to cirrhotic patients (GG in MCP-1 and ff in VDR), while the other was linked to the healthy controls (AA in MCP-1 and FF in VDR).
INTRODUCTION
[…] have been identified as predisposing factors for SBP [5]. There is evidence that the most powerful predictive factor for SBP is an ascitic fluid total protein level ≤ 1 g/dL, which reflects a low complement concentration and a decreased opsonisation capacity [6].
Vitamin D, the vitamin D receptor peptide and vitamin D polymorphisms are involved in many infectious diseases. Vitamin D insufficiency may lead to an increased incidence of infections in cirrhotic patients, especially spontaneous bacterial peritonitis [7].
Leukocytes, infections and inflammation are linked with the presence of monocyte chemotactic protein-1 (MCP-1). MCP-1 belongs to the CC chemokine superfamily and plays a critical role in the recruitment and activation of leukocytes during acute inflammation. Activated monocytes and fibroblasts may generate MCP-1 upon lipopolysaccharide (LPS) or cytokine stimulation [8].
In this study, we intended to investigate whether MCP-1 and vitamin D receptor polymorphisms are associated with increased incidence of SBP in cirrhotic patients. Additionally, we aimed to assess the levels of vitamin D, MCP-1 and other biochemical markers in such patients.
PATIENTS AND METHODS
Study Design: Prospective case control study.
Study Setting: The included patients were recruited from the Tropical Medicine Department, Ain Shams University Hospital, in the period from January 2013 to October 2013. The study protocol was performed in accordance with the ethical guidelines of the 1975 Declaration of Helsinki.
Sample size: Using statistical methods, sixty (60) patients were required to achieve an alpha error of 5% with a test power of 10%. This study included sixty (60) patients with post-hepatitis C virus decompensated liver disease. They were categorized into two groups according to the presence of spontaneous bacterial peritonitis (SBP) as follows: Group I (n=30) included patients with SBP; Group II (n=30) included patients without SBP. A control group of 30 healthy volunteers was also included in our study.
Patients in group I were diagnosed as having SBP according to Rimola et al, 2000 [9]. Patients in group II were those with ascites related to decompensated liver disease who had no clinical symptoms or signs indicating SBP. Written informed consent to participate in the study was obtained from all participants before enrollment. Those with evidence of infection at any site other than SBP were excluded. Patients with alcoholic liver cirrhosis, Wilson disease, hemochromatosis, glycogen storage disease, and malignant or tuberculous ascites were also excluded from this study. All patients were subjected to history taking and thorough clinical examination. Laboratory investigations were performed, including CBC, liver profile, prothrombin time (PT) and kidney functions; serum MCP-1 and VDR polymorphisms were assessed by real time PCR and the serum level of vitamin D by ELISA for the patient and control groups; and quantitative gene expression of MCP-1 in serum and ascitic fluid was assessed for the patient groups.
The following parameters were assessed in ascitic fluid samples: total protein; serum-ascites albumin gradient (SAAG) [10]; absolute polymorphonuclear leukocyte count (APLC); and the ascitic fluid level of MCP-1 by real time PCR for the patient groups.
Estimation of MCP-1 and vitamin D

MCP-1 polymorphisms
Genomic DNA was prepared from venous blood samples on EDTA using the Innu PREP blood DNA mini kit (Analytik Jena, Germany) following the manufacturer's instructions. The identification of the polymorphism was carried out using PCR, followed by a restriction fragment length polymorphism (RFLP) assay, using a PvuII site, which is introduced by the presence of the G nucleotide. The regulatory region of the MCP-1 gene (from -2746 to -1817) was amplified by polymerase chain reaction (PCR) using the following specific primers: Forward: 5′-CCGAGATGTTCCCAGCACAG-3′ and Reverse: 5′-CTGCTTTGCTTGTGCCTCTT-3′ [11]. PCR was performed using 10× buffer (10 mM Tris-HCl pH 9, 2.0 mM MgCl2, 50 mM KCl), 200 µM dNTPs, 2.5 pmoles of each primer, 5 μL of DNA, 0.5 U Taq polymerase (Amersham Pharmacia Biotech, Piscataway, NJ, USA) and ddH2O up to a final volume of 40 μL. The following thermal profile was run: 95℃ for 40 sec, 56℃ for 30 sec, and 72℃ for 4 min. After a final extension of 10 min at 72℃, 7 μL of the PCR products were resolved in 2% agarose gels stained with ethidium bromide, after dilution in blue juice buffer, to check for the expected 930-bp band. After checking, 8 μL of the PCR products were digested with 10 U of PvuII in 10× buffer and H2O up to a final volume of 20 µL at 37℃ for 2 hr. The resulting products were separated by gel electrophoresis in 1.5% agarose gels containing ethidium bromide at a final concentration of 0.5 µg/mL. Samples showing only a 930 bp band were assigned as A/A, samples showing two bands of 708 and 222 bp were considered G/G, and samples showing three bands at 930, 708 and 222 bp were typed A/G.

Vitamin D receptor (VDR) polymorphisms
DNA was used in the PCR amplification of sequences containing previously described VDR restriction-fragment-length polymorphisms defined by the restriction endonuclease FokI. The primer sequence used in this study was as follows: Forward: 5'-AGCTGGCCCTGGCACTGACTCTGCTCT-3' and Reverse: 5'-ATGGAAACACCTTGCTTCTTCTCCCTC-3'. The cycling profile involved denaturation at 94℃ for 15 sec, annealing at 55℃ for 30 sec, and extension at 72℃ for 30 sec, for 35 cycles. Final extension was continued at 72℃ for 5 min. The amplification procedure was carried out in a PCR thermal cycler (Thermo Scientific, Fenland). PCR products were digested overnight with restriction endonuclease in accordance with the manufacturer's instructions (Roche Molecular Biochemicals, Indianapolis, IN, USA). Digested products were analyzed by electrophoresis in a 2% agarose gel and ethidium bromide staining [12].

Quantitative assessment of MCP-1 gene expression by Real time PCR
A. RNA extraction from blood and ascitic fluid samples: using the SV Total RNA Isolation system (Promega, USA) according to the manufacturer's instructions.
B. cDNA synthesis: 5 µg of the extracted RNA was reverse transcribed into cDNA using an RT-PCR kit (Stratagene, USA).
C. Real-time quantitative PCR (qPCR) using SYBR Green I: qPCR amplification and analysis were performed using an Applied Biosystems instrument with software version 3.1 (StepOne™, USA). The qPCR assay with the primer sets was optimized at the annealing temperature. All cDNA samples, including the previously prepared samples, an internal control (Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) gene expression as housekeeping gene), and a non-template control (water, to confirm the absence of DNA contamination in the reaction mixture), were run in duplicate. The cDNA was subsequently amplified with the SYBR Green I PCR Master Kit (Fermentas), using 1 μM of both primers specific for each target gene. The MCP-1 gene was amplified using the same primer sequence used for the MCP-1 polymorphism assessment. The GAPDH gene was amplified using the following specific primers: Forward: 5'-CGCTCTCTGCTCCTCCTGTT-3' and Reverse: 5'-CCATGGTGTCTGAGCGATGT-3' [13].

Quantitative assessment of vitamin D serum levels
ELISA kits were supplied by R&D Systems (Minneapolis, USA) to assess serum levels of vitamin D (ng/mL) in all studied groups. The techniques were performed according to the manufacturer's references.
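The band-pattern-to-genotype mapping described above is mechanical enough to express in a few lines. The following Python helper is purely illustrative (not part of the authors' workflow) and encodes only the well-specified MCP-1 PvuII calling rule from the text:

```python
def call_mcp1_genotype(bands_bp: set[int]) -> str:
    """Assign an MCP-1 PvuII RFLP genotype from the observed band sizes (bp).

    Per the text: an intact 930 bp product indicates the A allele (no PvuII
    site), while digestion into 708 + 222 bp fragments indicates the G allele.
    """
    if bands_bp == {930}:
        return "A/A"
    if bands_bp == {708, 222}:
        return "G/G"
    if bands_bp == {930, 708, 222}:
        return "A/G"
    raise ValueError(f"Unexpected band pattern: {sorted(bands_bp)}")

assert call_mcp1_genotype({930}) == "A/A"
assert call_mcp1_genotype({930, 708, 222}) == "A/G"
```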
Statistical analysis
Results were expressed as means ± standard deviations. One-way ANOVA and Tukey's multiple comparison post hoc tests were performed. P < 0.05 was considered significant.
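A minimal sketch of this analysis pipeline in Python (the authors' actual software is not stated; the group values below are placeholders drawn around the vitamin D means reported later, and the control mean is an assumption):

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Placeholder vitamin D levels (ng/mL); SBP and cirrhotic means follow the text.
sbp       = rng.normal(8, 3.4, 30)     # Group I
cirrhotic = rng.normal(15, 5.4, 30)    # Group II
control   = rng.normal(30, 6.0, 30)    # healthy volunteers (assumed mean/SD)

f_stat, p_value = f_oneway(sbp, cirrhotic, control)
print(f"One-way ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

values = np.concatenate([sbp, cirrhotic, control])
groups = ["SBP"] * 30 + ["cirrhotic"] * 30 + ["control"] * 30
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```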
RESULTS

Baseline demographic and clinical characteristics
The demographic and clinical characteristics of the studied groups are presented in Table 1. As outlined in Table 1, all patients were Egyptian Arabs, predominantly middle-aged males. They had decompensated liver disease with or without SBP, and the healthy controls were age- and sex-matched. There was no statistically significant difference between the two patient groups regarding the clinical characteristics.

Genotypes distribution and alleles frequencies of MCP-1 polymorphisms
Figure 1 illustrates the MCP-1 polymorphism genotype distribution and allele frequencies in all studied groups. The genotype distribution of the healthy controls did not depart from Hardy-Weinberg equilibrium, and the AA genotype was dominant (53.33%). In contrast, the GG genotype was dominant in the cirrhotic patients (66.66%) and the AG genotype in the SBP group (73.33%), each differing significantly (P≤0.05) from the controls and from each other. Likewise, a significant association of the G allele frequency was observed for both patient groups (cirrhotic 76.66% and SBP 53.33%) when compared with the healthy volunteers and with each other (P≤0.05). Conversely, the A allele (73.33%) was predominant in the healthy controls.

PCR products for the MCP-1 gene (930 bp) are shown in Figure 2A before cutting with the restriction enzyme for the different groups, and after cutting (Figure 2B): the A/A genotype shows a band at 930 bp, the A/G genotype (Figure 2C) bands at 930 bp, 708 bp and 222 bp, and the G/G genotype bands at 708 bp and 222 bp.

VDR polymorphisms genotypes distribution and alleles frequencies
The VDR polymorphism genotype distribution and allele frequencies are shown in Figure 3. The FF genotype was dominant in the healthy controls (86.66%). On the other hand, the ff genotype was dominant in the cirrhotic group (56.66%) and the Ff genotype in the SBP group (43.33%), each differing significantly (P≤0.05) from the controls and from each other. In addition, a significant association of the f allele frequency was observed for both patient groups (cirrhotic 68.33% and SBP 61.66%) when compared with the healthy volunteers and with each other (P≤0.05). Conversely, the healthy control group was associated with the F allele (88.33%).
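The Hardy-Weinberg check mentioned above can be reproduced with a simple chi-square goodness-of-fit test. A sketch with illustrative genotype counts follows; the paper reports percentages only, so the counts below are back-calculated assuming n = 30 controls and are an approximation:

```python
from scipy.stats import chisquare

# Illustrative control-group genotype counts (n = 30), back-calculated from
# the reported AA genotype frequency (53.33%) and A allele frequency (73.33%).
n_AA, n_AG, n_GG = 16, 12, 2
n = n_AA + n_AG + n_GG

p = (2 * n_AA + n_AG) / (2 * n)          # A allele frequency
q = 1 - p                                # G allele frequency
expected = [p * p * n, 2 * p * q * n, q * q * n]

# ddof=1 because the allele frequency is estimated from the same data.
chi2, pval = chisquare([n_AA, n_AG, n_GG], f_exp=expected, ddof=1)
print(f"A allele frequency = {p:.3f}; HWE chi-square P = {pval:.3f}")
```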
PCR products digested with FokI are shown in Figure 4. After treatment with the enzyme, bands were observed as FF homozygotes (266 bp), Ff heterozygotes (193 bp), and ff homozygotes (73 bp), according to the restriction pattern (Figure 4).
Vitamin D, MCP-1 and albumin levels
As expected, the vitamin D level was normal in the healthy control group (III), as shown in Figure 5A. In contrast, owing to liver disease, there was a significant reduction in both the cirrhotic (II) and SBP (I) groups (15±5.4 and 8±3.4 ng/mL, respectively; P≤0.05 when compared with controls).
MCP-1 gene expression was significantly elevated in the serum and ascitic fluid samples of both the cirrhotic and SBP groups relative to the serum level of the control group (P≤0.05), as shown in Figure 5B. Figure 5C shows a significant (P≤0.05) reduction in albumin levels in all patient groups compared with the healthy controls.
DISCUSSION
The MCP-1 polymorphism results clarified the correlation between SBP, dominated by the AG genotype, and cirrhotic patients, dominated by the GG genotype, whereas the AA genotype dominated in healthy controls. Accordingly, the G allele frequency was significantly higher in both SBP and cirrhotic patients, acting as a risk factor, in contrast to the healthy controls carrying the A allele as a protective factor.
Similarly, the VDR polymorphism analysis demonstrated an association between SBP, dominated by the Ff genotype, and cirrhotic patients, dominated by the ff genotype, whereas the FF genotype dominated in healthy controls. Consequently, the f allele frequency was significantly higher in both SBP and cirrhotic patients, acting as a risk factor, in contrast to the healthy controls carrying the F allele as a protective factor.
The two polymorphisms agree in that the heterozygote genotypes, AG in MCP-1 and Ff in VDR, were mostly linked to SBP patients. On the other hand, one homozygous genotype was linked to cirrhosis (GG in MCP-1 and ff in VDR), while the other was linked to the healthy controls (AA in MCP-1 and FF in VDR). The protective alleles A and F were present in healthy controls, while the G and f alleles were considered risk factors.
The literature was extensively surveyed; our MCP-1 results are in agreement with Gäbele et al [14], who reported that carriers of the G-allele polymorphism were more frequent among patients with alcohol-induced cirrhosis than among heavy drinkers without evidence of liver damage. In vitro, stimulated monocytes from G-allele-carrying subjects produced more MCP-1 than cells from AA homozygous subjects [15], and G-allele carriers were significantly more frequent among HCV patients with more advanced fibrosis and severe inflammation [16].
To the best of our knowledge, this is the first study conducted on SBP in cirrhotic patients to examine its association with VDR polymorphism. Similar confirmatory results were observed for hepatitis B, tuberculosis and leprosy patients [12,17,18]; all share the finding that the VDR Ff heterozygote genotype was linked with the examined disease (SBP in our study). As expected, vitamin D levels decreased in both cirrhotic and SBP patients because of liver disease [19], and possibly also because of VDR polymorphism. Zhang et al [7] concluded that vitamin D insufficiency was universal among cirrhotic patients with ascites, and that the situation was more severe with more serious cirrhosis. Recently, decreased vitamin D levels were reported by Trépo et al [20] to be associated with increased liver damage and mortality in alcoholic liver disease.
In the present study, the mean level of MCP-1 gene expression in serum was significantly higher in patients with SBP than in control subjects. A study by Girón-González et al in 2001 [21] found a nonsignificant increase in the mean serum MCP-1 value in patients with SBP compared with control subjects. They attributed their results to activated immune and inflammatory reactions in these patients, with consequent elevation of proinflammatory cytokines.
In our study, there was a significant increase in the mean level of relative MCP-1 gene expression in both the serum and ascitic fluid of SBP patients compared with cirrhotic ascitic patients without SBP. This agrees with previous research [14,15,20] suggesting that MCP-1 plays a pathophysiological role during the development and course of SBP.
Similarly, Kim et al (2007) [22] obtained the same results and explained the role of MCP-1 in SBP: the immune system is stimulated by bacterial invasion, and MCP-1 acts as a chemotactic factor for monocytes and macrophages, which therefore migrate into the ascitic fluid. These monocytes and macrophages release TNF-α and other cytokines, which in turn induce the expression of adhesion molecules on endothelial cells, thereby mediating a systemic reaction to the infection.
Girón-González et al [21] found that MCP-1 levels in ascites were significantly higher than in serum, suggesting a chemotactic gradient towards the peritoneal cavity even in the absence of infection. This gradient could drive the chemotaxis of monocytes/macrophages and thus might also modify the systemic response to the infection.
In conclusion, the MCP-1 and VDR heterozygote genotypes, AG in MCP-1 and Ff in VDR, were mostly linked to SBP patients. Conversely, one homozygous genotype was linked to cirrhosis (GG in MCP-1 and ff in VDR), while the other was linked to the healthy controls (AA in MCP-1 and FF in VDR). The A and F alleles may be considered protective factors in healthy controls, while the G and f alleles may be considered risk factors. Serum vitamin D was significantly lower, and MCP-1 (in both serum and ascitic fluid) significantly higher, in the SBP group in comparison with the cirrhotic and control groups. A higher level of MCP-1 could be an early predictor of SBP, and vitamin D deficiency in cirrhotic patients may be among the risk factors for SBP. | 2019-03-12T13:04:32.351Z | 2015-11-21T00:00:00.000 | {
"year": 2015,
"sha1": "8ec7b941b4389983146b86227422e3f8f3bfb7ad",
"oa_license": "CCBYNC",
"oa_url": "http://www.ghrnet.org/index.php/joghr/article/download/1102/1610",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d40456a05b873d165e96d8d3f4a6f5a65a3ba3eb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26984278 | pes2o/s2orc | v3-fos-license | General rules for bosonic bunching in multimode interferometers
We perform a comprehensive set of experiments that characterize bosonic bunching of up to 3 photons in interferometers of up to 16 modes. Our experiments verify two rules that govern bosonic bunching. The first rule, obtained recently in [1,2], predicts the average behavior of the bunching probability and is known as the bosonic birthday paradox. The second rule is new, and establishes an n!-factor quantum enhancement for the probability that all n bosons bunch in a single output mode, with respect to the case of distinguishable bosons. Besides its fundamental importance in phenomena such as Bose-Einstein condensation, bosonic bunching can be exploited in applications such as linear optical quantum computing and quantum-enhanced metrology.
The so-called birthday paradox is the realization that a small number of independent random events (in this case, birthdays) may result in surprisingly large coincidence probabilities. Recently a theoretical analysis was made of the analogue situation for bosons [1]: how many non-interacting bosons distributed randomly in m modes are required for a large overlap probability? The answer requires a quantification of the characteristic bunching exhibited by bosons, responsible for phenomena such as Bose-Einstein condensation. Here we report experiments which characterize bosonic bunching of up to three photons evolving in linear-optical chips with as many as 16 modes. Besides verifying the predictions associated with the bosonic birthday paradox, our experiments also confirm a new, sharper bosonic bunching law that we prove. Our results provide a comprehensive picture of bosonic coalescence in multimode chips, which may have applications in quantum communication, metrology and quantum computation.
In the usual formulation of the birthday paradox, one is asked to estimate how large a (random) group of people must be so that there is a better than even chance that at least two people share a birthday (Fig. 1 a). It is called a paradox because intuition leads most people to overestimate the number of random independent events required for a reasonable probability of coincidence (in this case, the correct answer is 23). Recently, an analysis was made of the analogue situation in quantum mechanics, which dictates that indistinguishable, non-interacting particles will have their statistical behaviour ruled by their bosonic (Fig. 1 b) or fermionic nature. The birthday paradox for fermions is known as the Pauli exclusion principle: no two fermions can occupy the same state. Bosons, on the other hand, tend to bunch together more than classical particles do.
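To make the classical baseline concrete, the coincidence probability for a group of size n is 1 minus the probability that all birthdays differ; a quick Python check confirms the stated answer of 23:

```python
def birthday_coincidence(n: int, days: int = 365) -> float:
    """Probability that at least two of n people share a birthday."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (days - k) / days
    return 1.0 - p_all_distinct

assert birthday_coincidence(22) < 0.5 < birthday_coincidence(23)
print(f"P(coincidence, n=23) = {birthday_coincidence(23):.3f}")  # ~0.507
```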
Solving the bosonic birthday problem [1,2] requires us to quantify the bosonic bunching behaviour responsible for fundamental phenomena such as Bose-Einstein condensation. This is a signature feature of ensembles of bosonic particles, which below a certain temperature tend to macroscopically populate the same (lowest energy) state, and has been observed in a large variety of physical systems [3][4][5]. In quantum optics, a well-known bosonic bunching example is the Hong-Ou-Mandel two-photon coalescence [6] observed in balanced beam-splitters as well as in a variety of linear optical interferometers [7][8][9][10][11][12][13][14]. In such a process, two indistinguishable photons impinging on the input ports of a balanced beam-splitter will exit from the same output port, while distinguishable photons have a non-zero probability of exiting from different ports. Besides its importance for tests on the foundations of quantum mechanics [15], bosonic bunching is useful in applications such as quantum-enhanced metrology [16] and photonic quantum computation [17].
In this Letter we experimentally characterize the bosonic bunching behavior of indistinguishable photons as they interfere in randomly chosen interferometers of different sizes. We will compare the results with those corresponding to classical, distinguishable photons, and see how the bosonic nature of indistinguishable photons results in more coincidences in the birthday problem than classical probability theory predicts.
For our experiments, we fabricated integrated optical interferometers in a borosilicate glass by femtosecond laser waveguide writing [18,19]. This technique consists in the direct inscription of waveguides in the volume of the transparent substrate, using the nonlinear absorption of focused femtosecond pulses to induce a permanent and localized increase in the refractive index. Single photons may jump between waveguides by evanescent coupling in regions where waveguides are brought close together; precise control of the coupling between the waveguides and of the photon path length, enabled by a 3D waveguide design [11], provides arbitrary interferometers with different topologies (Fig. 1 c). The input state is a Fock state of two or three individual photons obtained by parametric down-conversion (Fig. 1 d). Controllable delays between the input photons are used to change the regime from classical distinguishability to quantum, bosonic indistinguishability. We denote by $p_b^{(q)}$ and $p_b^{(c)}$ the bunching probabilities respectively of quantum (i.e. indistinguishable) and classical (i.e. distinguishable) photons after each interferometer. We note that these probabilities depend both on the interferometer's design and on the input state used. A bunching event involves, by definition, the overlap of at least two photons in a single output mode. The classical bunching probability $p_b^{(c)}$ is obtained from single-photon experiments that characterize the transition probabilities between each input/output combination. To measure $p_b^{(q)}$, we set up experiments with n input photons (each entering a different mode), and detected rates of n-fold coincidences of photons coming out in n different modes of each chip. Each experimental run was done in identical conditions and for the same time interval, varying only the delays that make the particles distinguishable or not, so giving us an estimate of the ratio $t \equiv (1 - p_b^{(q)})/(1 - p_b^{(c)})$. Together with our measured $p_b^{(c)}$, this allowed us to estimate the bunching probability for indistinguishable photons $p_b^{(q)}$. We summarize the experimental results for a number of different photonic chips in Figs. 2 a-g (two-photon experiments) and Fig. 3 (three-photon experiments). The results are in good agreement with theory, taking into account the partial indistinguishability of the photon source [20]. The shaded regions indicate the average bunching behaviour in uniformly sampled unitaries. For all the interferometers we used, we find that indistinguishable photons display a higher "birthday" coincidence rate than distinguishable photons do ($p_b^{(q)} > p_b^{(c)}$); this is known to be true for averages [2]. Furthermore, $p_b^{(q)}$ falls as m increases, as predicted in [1,2]. This latter result is somewhat counter-intuitive, given the bunching behaviour of bosons; in fact, it was shown that both $p_b^{(q)}$ and the bunching probability associated with the classical birthday paradox decay with the same asymptotic behaviour as m increases [2].
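Under the reading above — that the measured ratio of no-bunching coincidence rates estimates $t = (1 - p_b^{(q)})/(1 - p_b^{(c)})$, which is a reconstruction of an extraction-damaged formula — recovering the quantum bunching probability is one line of algebra. A small Python sketch with placeholder numbers:

```python
def quantum_bunching_probability(t: float, p_c: float) -> float:
    """Infer p_b^(q) from the no-bunching rate ratio t = (1 - p_q)/(1 - p_c)
    and the classically measured bunching probability p_c."""
    return 1.0 - t * (1.0 - p_c)

# Placeholder values for illustration only.
t_measured = 0.80   # ratio of n-fold coincidence rates (quantum / classical)
p_c = 0.30          # classical bunching probability from single-photon data
print(f"p_b^(q) = {quantum_bunching_probability(t_measured, p_c):.3f}")
```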
Our results allow for direct comparisons between indistinguishable bosons and classical particles in simple versions of the birthday paradox. For example, our random 7-mode chip spread 3 indistinguishable photons so that they bunched with probability $p_b^{(q)}$ […]. A completely opposite behaviour is expected for fermions, since the Pauli exclusion principle forbids coincident "birthdays". Two-particle fermionic statistics may be simulated by exploiting the symmetry of two-photon wave-functions in an additional degree of freedom [21,22]. Here, we injected the interferometers with two photons, in an anti-symmetric polarization-entangled state, in two different input ports [22,23]. The results are shown in Fig. 2 h, where a suppression of the bunching probability can be observed for the case of simulated indistinguishable fermions.
Having experimentally verified the predictions associated with the bosonic birthday paradox, we now turn back to theory to prove a result that reveals a stronger contrast between the bunching behaviours of bosons and classical particles. This can be done by focusing on the probability of full bunching, i.e. that all n bosons leave the interferometer in the same mode. Despite becoming exponentially rare as n increases [24], these events reveal a distinctive quantum/classical signature. Consider n individual bosons entering an m-mode linear interferometer described by an $m \times m$ unitary $U$. Let $g_k$ denote the occupation number of input mode k. Let us denote the probabilities that all bosons leave the interferometer in mode j by $q_c(j)$ (distinguishable bosons) and $q_q(j)$ (indistinguishable bosons). Then the ratio $r_{fb} = q_q(j)/q_c(j) = n!/\prod_k g_k!$, independently of U, m and j. The proof of this statement is reported in the Methods Section. Note that the quantum enhancement in full-bunching probabilities is as high as n! when at most one boson is injected into each input mode, as in our photonic experiments.
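This law is easy to verify numerically: compute the full-bunching probabilities from the permanent formula of the Methods Section for a random unitary, and check that their ratio equals n!. The sketch below uses a brute-force permanent (fine for small n) and the standard QR-based Haar-random unitary construction, which is a textbook recipe rather than anything taken from the paper:

```python
import numpy as np
from itertools import permutations
from math import factorial, prod, isclose

def permanent(M: np.ndarray) -> complex:
    """Brute-force permanent; fine for the small matrices used here."""
    n = M.shape[0]
    return sum(prod(M[i, s[i]] for i in range(n)) for s in permutations(range(n)))

def haar_unitary(m: int, rng) -> np.ndarray:
    """Haar-random m x m unitary via QR decomposition (standard recipe)."""
    z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
n, m, j = 3, 7, 2                  # 3 photons, 7 modes, full bunching in mode j
inputs = [0, 1, 2]                 # one photon per input mode (all g_k = 1)
U = haar_unitary(m, rng)

A = U[np.ix_([j] * n, inputs)]     # amplitude matrix for all photons -> mode j
q_quantum = abs(permanent(A)) ** 2 / factorial(n)      # Eq. (1) with h_j = n
q_classical = prod(abs(U[j, k]) ** 2 for k in inputs)  # independent particles

assert isclose(q_quantum / q_classical, factorial(n))
print(f"ratio = {q_quantum / q_classical:.6f} (n! = {factorial(n)})")
```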
We estimated the quantum/classical full-bunching probability ratio $r_{fb}$ by introducing delays to change the distinguishability regime, and performing photon counting measurements in selected output ports, using fiber beam-splitters and multiple single-photon detectors. In Fig. 4 (blue data) we plot the (full-)bunching ratio for all two-photon experiments referred to in Fig. 2, and find good agreement with the predicted quantum enhancement factor of 2! = 2. Note that in two-photon experiments every bunching event is also a full-bunching event, which means that when n = 2 the ratio $r_{fb} = r_b = 2$, independently of the number m of modes in the interferometer.
We have also measured three-photon full-bunching probabilities in random interferometers with m = 3, 5, 7 modes. Perfectly indistinguishable photons would result in the predicted 3! = 6-fold quantum enhancement of the full-bunching probabilities. The partial indistinguishability of our three injected photons reduces this quantum enhancement to a factor 3.59 ± 0.15. The results can be seen in Fig. 4 (red data), showing good agreement with the predicted value. Again, we can interpret these experimental results in terms of the birthday problem. Our random 7-mode chip scattered the 3 indistinguishable photons in such a way that they all left in a single given mode with probability $p_b^{(q)}$ […]; three classical (distinguishable) particles, however, will all have been born on, say, a Sunday with a much lower probability $p^{(c)}$ […].

Figure 3. Blue line: the classical "birthday" coincidence probability. Shaded areas correspond to the interval $[\bar{p}_b - 1.5\sigma; \bar{p}_b + 1.5\sigma]$ from a numerical sampling over 10000 uniformly random unitaries, $\bar{p}_b$ being the average bunching probability and $\sigma$ its standard deviation. Red area: simulation taking into account one partially distinguishable photon with indistinguishability parameter α = 0.63 ± 0.03 (see Supplementary Information and Refs. [10,11]). Green area: three distinguishable photons. Error bars in the experimental data are due to the Poissonian statistics of the measured events, and where not visible are smaller than the symbol. Inset: numerical simulation of the effect of photon distinguishability on the bunching probability $p_b$. Grey area: perfectly indistinguishable photons (α = 1).

In conclusion, our experiments characterize the bunching behaviour of up to three photons evolving in integrated multimode circuits. Our results verify the predictions of the bosonic birthday paradox and also show that photons follow a new, sharper bosonic bunching law that we have derived. Besides its fundamental importance in the description of bosonic quantum systems, the bunching behaviour of bosons can be exploited in contexts ranging from quantum computation to quantum metrology [25].
Methods
Experimental details. Photons are produced in the pulsed regime at 785 nm by exploiting the parametric down-conversion process, pumping a 2 mm long BBO crystal with a 392.5 nm wavelength pump field. Spectral filtering by 3 nm interferential filters, coupling into single-mode fibers, and propagation through different delay lines are performed before coupling into the chips. For the three-photon measurements, one of the four generated photons acts as the trigger for coincidence detection, while the other three are coupled into the chip. The output modes are detected using multimode fibers and single-photon avalanche photodiodes. Coincidences between different detectors allow us to reconstruct the probability of obtaining a given output state. The full-bunching contributions are measured by splitting the desired output mode into two (three) equal parts with fiber beam-splitters and measuring two-fold (three-fold) coincidences.
The measurements for the m = 2, 8, 12, 16-mode chips with two photons and for the fermionic case were performed with a continuous wave parametric down conversion source in a BBO crystal at 810 nm.
Proof of full-bunching law for bosons. The input state of n indistinguishable bosons distributed in m modes is a Fock state $|G\rangle = |g_1 g_2 \ldots g_m\rangle$, with well-defined mode occupation numbers $g_i$. The bosons will be distributed randomly among the m modes by the action of an m-mode linear interferometer described by an $m \times m$ unitary matrix $U$, which induces a unitary transformation $U_F$ on the Fock space. As described in [1], the probability amplitude associated with input $|G\rangle$ and output $|H\rangle = |h_1 h_2 \ldots h_m\rangle$ is given by

$$\langle H | U_F | G \rangle = \frac{\mathrm{per}(U_{G,H})}{\sqrt{\prod_i g_i! \, \prod_j h_j!}}, \qquad (1)$$

where $U_{G,H}$ is the matrix obtained by repeating $g_i$ times the $i$th row of $U$, and $h_j$ times its $j$th column [26], and per(A) denotes the permanent of matrix A [27]. Recall that $|U_{i,j}|^2$ is the probability that a single boson entering mode j will exit in mode i. Then a simple counting argument gives the probability $p_{G,H}$ that distinguishable bosons will enter the interferometer with occupation numbers $g_1 g_2 \ldots g_m$ and leave with occupation numbers $h_1 h_2 \ldots h_m$:

$$p_{G,H} = \frac{\mathrm{per}(|U_{G,H}|^2)}{\prod_j h_j!}, \qquad (2)$$

where $|U_{G,H}|^2$ is the matrix obtained by taking the absolute value squared of each corresponding element of $U_{G,H}$. Let us now introduce an alternative, convenient way of representing the input occupation numbers. Define an n-tuple of integers $r_i$ (each between 1 and m) so that the first $g_1$ integers are 1, followed by a sequence of $g_2$ 2's, and so on until we have $g_m$ m's. As an example, input occupation numbers $g_1 = 2$, $g_2 = 1$, $g_3 = 0$, $g_4 = 3$ would give $r = (1, 1, 2, 4, 4, 4)$. Using Eq. (1) we can evaluate the probability $q_q(j)$ that the n indistinguishable bosons will all exit in mode j:

$$q_q(j) = \frac{|\mathrm{per}(A)|^2}{n! \prod_k g_k!},$$

where A is an $n \times n$ matrix with elements $A_{i,k} = U_{j, r_k}$. Since all rows of A are equal, per(A) is a sum of n! identical terms, each equal to $\prod_k U_{j, r_k}$. Hence

$$q_q(j) = \frac{n!}{\prod_k g_k!} \prod_k |U_{j, r_k}|^2.$$

Using Eq. (2), we can calculate the probability $q_c(j)$ that n distinguishable bosons will leave the interferometer in mode j: $q_c(j) = \mathrm{per}(B)/n!$, where B has elements $B_{i,k} = |U_{j, r_k}|^2$, so that $q_c(j) = \prod_k |U_{j, r_k}|^2$. Our new bosonic full-bunching law regards the value of the quantum/classical full-bunching ratio, which we can now calculate to be $r_{fb} = q_q(j)/q_c(j) = n!/\prod_k g_k!$.

Additional information. Correspondence and requests for materials should be addressed to F.S., E.G. and R.O. | 2013-09-26T08:38:14.000Z | 2013-05-14T00:00:00.000 | {
"year": 2013,
"sha1": "895e0f745ae74626d215b5c3f31f54c57044a594",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1305.3188",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9fe819e445665a8c9d89497f7f3ef83d33fb7912",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
52810961 | pes2o/s2orc | v3-fos-license | Cognitive‐motor interference during goal‐directed upper‐limb movements
Abstract Research and clinical practice have focused on effects of a cognitive dual‐task on highly automated motor tasks such as walking or maintaining balance. Despite potential importance for daily life performance, there are only a few small studies on dual‐task effects on upper‐limb motor control. We therefore developed a protocol for assessing cognitive‐motor interference (CMI) during upper‐limb motor control and used it to evaluate dual‐task effects in 57 healthy individuals and two highly prevalent neurological disorders associated with deficits of cognitive and motor processing (57 patients with Parkinson's disease [PD], 57 stroke patients). Performance was evaluated in cognitive and motor domains under single‐ and dual‐task conditions. Patterns of CMI were explored to evaluate overall attentional capacity and attention allocation. As expected, patients with neurological deficits showed different patterns of CMI compared to healthy individuals, depending on diagnosis (PD or stroke) and severity of cognitive and/or motor symptoms. Healthy individuals experienced CMI especially under challenging conditions of the motor task. CMI was greater in PD patients, presumably due to insufficient attentional capacity in relation to increased cognitive involvement in motor control. Although no general increase of CMI was observed in stroke patients, correlation analyses suggested that especially patients with severe motor dysfunction experienced CMI. Clinical ratings of cognitive and motor function were weakly associated with CMI, suggesting that CMI reflects a different construct than these unidimensional clinical tests. It remains to be investigated whether CMI is an indicator of difficulties with day‐to‐day activities.
| INTRODUCTION
Unidimensional clinical tests for cognitive function and motor function may underestimate impairments of daily life activities. These activities typically require adequate interaction with the environment and often involve the simultaneous performance of two or more tasks (such as walking and talking, or writing while talking on the phone). Competing attentional demands can lead to a decrement in performance, especially when the attentional demand of one or both tasks is high or attentional capacity is reduced. Interference may thus be disproportionately great in neurological conditions that are associated with deficits of motor and/or cognitive processing, such as Parkinson's disease (PD; Kelly, Eusterbrock, & Shumway-Cook, 2012), multiple sclerosis (Leone, Patti, & Feys, 2015), Alzheimer's disease (Camicioli, Howieson, Lehman, & Kaye, 1997), and stroke (Plummer et al., 2013).
The relative change in performance associated with "dual-tasking" is referred to as dual-task interference or the dual-task effect (DTE; Plummer & Eskes, 2015). In research and clinical practice, the DTE is often quantified only in the motor domain (e.g., decrease in walking speed) as an index of automaticity of motor control, without considering (changes in) performance on the cognitive dual task. To better understand cognitive-motor interference (CMI) and to be able to evaluate changes in response to treatment, it is critical to assess performance in both the cognitive and motor domains under single- and dual-task conditions (Plummer & Eskes, 2015; Rochester, Galna, Lord, & Burn, 2014). Evaluation of DTEs in both domains not only provides insight into cognitive and motor function separately, but also contributes to understanding of CMI in terms of attentional capacity (i.e., total DTE in both domains) as well as attention allocation (i.e., task prioritization) (Plummer & Eskes, 2015; Plummer, Villalobos, Vayda, Moser, & Johnson, 2014; Plummer et al., 2013).
To date, research and clinical practice have mainly focused on the effects of a cognitive dual-task (e.g., counting backwards or word naming) on highly automated motor tasks such as walking or maintaining balance (for reviews see Amboni, Barone, and Hausdorff (2013); Plummer et al. (2013)). Despite the potential importance for daily life performance, there are only a few small studies on DTEs during upper-limb motor control, which is assumed to be more cognitively driven and thus less automated than gross motor activities such as walking (Alberts et al., 2008;Broeder et al., 2014;Frankemolle et al., 2010;Houwink, Steenbergen, Prange, Buurke, & Geurts, 2013;Mills et al., 2015;Pradhan, Scherer, Matsuoka, & Kelly, 2011;Van Impe, Coxon, Goble, Wenderoth, & Swinnen, 2011).
In this study, we therefore developed a protocol for evaluating patterns of CMI during simultaneous performance of a cognitive task and an upper-limb motor task. We used it to evaluate DTEs in both the motor and cognitive domain in healthy individuals and in two highly prevalent neurological conditions associated with deficits of cognitive and motor processing (PD and stroke). These distinct patient groups were chosen as a generalized "proof of concept" because they were expected to show increased levels of CMI and because the considerable variation in severity of cognitive and motor impairments within these patient groups would allow evaluation of the association between the severity of cognitive and/ or motor impairments and (patterns of) CMI.
The cognitive task consisted of the auditory Stroop task (Cohen & Martin, 1975), a time-critical task requiring continuous attention, which has previously proved successful in eliciting CMI even in healthy individuals (e.g., Weerdesteyn, Schillings, Van Galen, & Duysens, 2003). The motor task involved goal-directed upper-limb movements to control a virtual mouse presented on a LED TV and to collect virtual pieces of cheese (targets) as fast as possible while avoiding a virtual cat (obstacle). Single-task performances as well as DTEs in both the cognitive and motor domain were compared between healthy individuals, PD patients with varying degree of cognitive and motor symptoms, and chronic stroke patients with reduced function of the upper extremity. Patterns of CMI were explored to evaluate overall attentional capacity and attention allocation.
Our primary hypothesis was that CMI would be greater in both PD and stroke patients compared to age-matched controls due to increased cognitive involvement in motor control, reduced attentional capacity, and/or deficits in attention allocation. We also hypothesized that a higher motor-task complexity (i.e., catching targets while avoiding obstacles, compared to catching targets only) would have a detrimental effect on dual-task performance within each group. It was anticipated that CMI would be greater in more severely affected patients, and that attention allocation would be a reflection of their cognitive and/or motor abilities.
| Participants
For this cross-sectional study we recruited 57 patients with PD fulfilling the UK PD Brain Bank criteria (Gibb & Lees, 1988) and 57 chronic stroke patients (>8 weeks poststroke) with reduced function of the upper extremity as determined by the Fugl-Meyer Upper Extremity Scale (FM-UE; Fugl-Meyer, Jääskö, Leyman, Olsson, & Steglind, 1975; see Table 1 for patient characteristics). Patients were recruited from the outpatient clinics of the Department of Neurology and the Department of Rehabilitation Medicine of the Leiden University Medical Center and from a list of patients who were discharged from the Rijnlands Rehabilitation Center between January 2013 and June 2014. Patients were excluded if they had disorders of the central nervous system or other conditions that could affect motor function of the upper extremity in addition to PD or stroke. All patients were allowed to take their routine medications at the time of the experiment. Fifty-seven healthy controls (23 women, 34 men; mean ± SD age: 63.8 ± 7.6 years), who were sex-matched and age-matched (±3 years) at group level to the patients, were recruited both through advertisements and from a database of volunteers who had participated in previous studies. Controls had normal or corrected-to-normal vision and hearing, had no apparent cognitive disorders or deficits, and had no history of disorders affecting the function of the upper extremities. Written informed consent was obtained according to the Declaration of Helsinki. The ethical committee of the Leiden University Medical Center approved the study protocol.
Cognitive function was evaluated in PD patients using the SCales for Outcomes in PArkinson's disease-COGnition (SCOPA-COG; Marinus et al., 2003) and in stroke patients using the Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005). The severity of motor symptoms in PD patients was measured using the Hoehn and Yahr scale (Hoehn & Yahr, 1967) and section III of the Movement Disorder Society version of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS-III; Goetz et al., 2008). The severity of upper-limb motor symptoms in stroke patients was measured using the FM-UE (Fugl-Meyer et al., 1975). In controls, hand dominance was assessed using a Dutch version of the Edinburgh Handedness Questionnaire (Oldfield, 1971).
| Cognitive task
The auditory Stroop task (Cohen & Martin, 1975) was used as cognitive task. The words "high" and "low", spoken by a woman's voice in either a high pitch or a low pitch, were presented to the participants with an interstimulus interval of 2 s. Participants were instructed to verbally indicate the pitch of the word they heard (ignoring the actual word presented) by responding "high" or "low" as accurately and as quickly as possible. Participants were allowed to correct their response before the next stimulus occurred. The stimuli (50% congruent and 50% incongruent, ordered randomly) were presented via a headset (Trust 15480 Comfortfit) and were recorded together with the responses using Moo0 Voice Recorder (version 1.4.3. www.moo0.com). The single-task cognitive condition consisted of 11 stimuli (total duration: 30 s). During the dual-task conditions, duration of the cognitive task was equal to that of the motor task (i.e., from start to finish of the motor task).
| Motor task
Participants sat in a chair or in their own wheelchair placed circa 1.5 m in front of a 60" LED TV (Sharp LC-60LE652E, Sharp Electronics Europe Ltd., Usbridge, UK). Movements of the arms and trunk were recorded using a Microsoft Kinect ™ v2 sensor that was mounted above the LED TV. Based on depth data obtained with an infrared laser transmitter and an infrared camera, the Kinect for Windows software development kit (SDK 2.0, www.microsoft.com) provided real-time 3D-coordinates of the wrist, elbow, shoulder, head, and trunk at a sampling rate of 30 Hz. D-flow software (Motekforce Link, Amsterdam, The Netherlands; Geijtenbeek, Steenbrink, Otten, & Even-Zohar, 2011) expanded with a data fusion component (NCF, Noldus, Wageningen, The Netherlands) was used for controlling the experiment and data storage. Participants performed unsupported goal-directed movements in the frontal plane ( Figure 1a) to control the horizontal and vertical movements of a virtual gray mouse, presented against a background of virtual wood on the LED TV, to collect virtual pieces of yellow cheese (targets) as fast as possible while avoiding a virtual black-and-white cat (obstacle; present in the high-difficulty level only). Patients performed the task with their (most) affected arm. Controls were randomly assigned to perform the task with either their dominant arm (n = 29) or nondominant arm (n = 28).
Each condition of the motor task consisted of two series of 24 targets. Targets were evenly distributed over eight positions within the individually determined reachable workspace area (see Figure 1b) and were presented one at a time in a pseudorandom order (i.e., three blocks of eight targets; each target position was presented once within a block, in random order, to ensure that the eight target positions were evenly distributed within each condition; two targets within the same quadrant were always separated by at least one target in a different quadrant; there was no extra pause between these blocks of eight trials). The center of the LED TV corresponded to the center of the participant's reachable workspace area and all positions and movements of the virtual objects were scaled such that the upper and lower edges of the LED TV corresponded to the extremes of the participant's reachable workspace area. Hence, for all participants the targets were presented at the exact same positions on the LED TV, but the associated movement distance depended on the individually determined reachable workspace area. The horizontal and vertical positions of the virtual mouse on the LED TV were determined by the measured horizontal and vertical position of the wrist in the frontal plane (relative to the center of the participant's reachable workspace area). A first-order filter (τ = 0.05 s) was applied to the wrist position signal to minimize the visual effects of high-frequency measurement noise.
Prior to the start of each series of 24 targets, the participant moved the virtual mouse toward a virtual start button in the center of the LED TV. After a 5-s countdown the first target appeared and the start button disappeared. A target was considered "caught" if the center of the virtual mouse was within 0.02 m from the center of the virtual cheese for 0.1 s. As soon as a target was caught, or if a target was not caught within 5 s after appearance, the target disappeared and the next target appeared. The participant thus moved the virtual mouse from one target to the next without returning to a "home position" in between.
To evaluate whether a higher complexity of the motor task would affect dual-task performance, two difficulty levels of the motor task were introduced: catching targets (i.e., without obstacles; "low difficulty") and catching targets while avoiding obstacles ("high difficulty"). In the high-difficulty conditions, Schematic representation of distribution of targets and obstacles (•) over the individually determined reachable workspace area. The center of the LED TV corresponded to the center of the participant's reachable workspace area (⊗). All positions of the virtual objects were scaled such that the upper and lower edges of the LED TV corresponded to the extremes of the participant's reachable workspace area. Targets were presented one at a time in a pseudorandom order. In the high-difficulty conditions one-third of the targets suddenly changed into an obstacle and the target appeared at a nearby location within the same quadrant eight out of the 24 targets per series (i.e., 16 out of 48 targets per condition) suddenly changed into an obstacle and the target appeared at a nearby location within the same quadrant (see Figure 1b). The obstacle (i.e., a virtual cat) appeared as soon as the mouse was within a specific distance from the target (depending on movement velocity so that the time available for obstacle avoidance was circa 0.8 s for all participants). If an obstacle was hit, that is, if the center of the virtual mouse was within 0.03 m from the center of the virtual cat, both the obstacle and target disappeared and the next target appeared. The obstacles were presented in a pseudorandom order (i.e., once for each of the target positions ( Figure 1b); evenly distributed between the first and second half of each series; two obstacles were always separated by at least one target without obstacle). Events in the motor task (e.g., start, appearance of target/obstacle, catch) were never accompanied by sound to avoid interference with the cognitive task under dual-task conditions.
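The position mapping, smoothing, and catch logic described above translate directly into code. Below is an illustrative Python sketch: the parameter values (30 Hz sampling, 0.05 s filter time constant, 0.02 m catch radius, 0.1 s dwell, 5 s timeout) are from the text, while the structure of the update loop is an assumption:

```python
from dataclasses import dataclass

DT = 1 / 30            # Kinect sampling interval (s)
TAU = 0.05             # first-order filter time constant (s)
CATCH_RADIUS = 0.02    # m: mouse-to-cheese distance for a catch
CATCH_DWELL = 0.1      # s the mouse must stay within the radius
TIMEOUT = 5.0          # s before an uncaught target disappears

@dataclass
class TargetState:
    x: float
    y: float
    elapsed: float = 0.0
    dwell: float = 0.0

def step(filtered, wrist, target: TargetState):
    """One 30 Hz update: smooth the wrist position and test the catch rule."""
    alpha = DT / (TAU + DT)                       # discrete first-order filter
    fx = filtered[0] + alpha * (wrist[0] - filtered[0])
    fy = filtered[1] + alpha * (wrist[1] - filtered[1])

    dist = ((fx - target.x) ** 2 + (fy - target.y) ** 2) ** 0.5
    target.dwell = target.dwell + DT if dist < CATCH_RADIUS else 0.0
    target.elapsed += DT

    caught = target.dwell >= CATCH_DWELL
    timed_out = target.elapsed >= TIMEOUT
    return (fx, fy), caught, timed_out
```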
| Procedure
Participants performed the following conditions: (a) single cognitive task; (b) single low-difficulty motor task, that is, without obstacles; (c) single high-difficulty motor task, that is, with obstacles; (d) dual task: cognitive task and low-difficulty motor task simultaneously; and (e) dual task: cognitive task and high-difficulty motor task simultaneously. During dual-task conditions, participants were instructed to perform both tasks to their best ability.
Prior to each of the five conditions, participants performed a short practice (four targets). The order of single-task versus dual-task conditions as well as the order of low-versus high-difficulty levels within the motor task were randomized across participants. Patients who experienced limited physical capacity or complained of fatigue (four PD patients, 10 stroke patients) performed only one series per condition (i.e., 24 instead of 48 targets) to reduce the risk that not all conditions could be completed. After completing all conditions, participants rated the perceived "fun" and "difficulty" of the cognitive and motor task on 11-point numeric rating scales (0: none, 10: maximum possible).
A subgroup of 12 PD patients, 12 stroke patients, and 12 healthy controls repeated the test after 1 week at the same hour of the day in order to determine test-retest reliability. Methodological details and results of this analysis are presented in Supporting Information Appendix S1A.
| Data processing
Data were processed using MATLAB (The Mathworks Inc., Natick, MA, USA; version R2016a). Performance on the cognitive task (P C , in %s⁻¹) was calculated as the percentage of correct answers (determined from the sound recordings) divided by the average response time of correct responses (determined from the sound recordings using a custom-made algorithm). Performance on the motor task (P M , in %s⁻¹) was calculated as the percentage of collected targets divided by the average "catch time" (i.e., the time in seconds between target appearance and catch).
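A minimal sketch of this performance measure (in Python rather than the authors' MATLAB; the array layout and names are assumptions):

```python
import numpy as np

def performance(success, time_s):
    """P (in %/s): percentage of successful trials divided by the mean
    time (s) of the successful trials only.

    success : boolean array, one entry per trial (correct answer / caught target)
    time_s  : response or catch time per trial, in seconds (ignored for failures)
    """
    success = np.asarray(success, dtype=bool)
    pct = 100.0 * success.mean()
    mean_time = np.asarray(time_s, dtype=float)[success].mean()
    return pct / mean_time

# Example: 20 of 24 targets caught, mean catch time 1.6 s -> P_M ≈ 52.1 %/s
p_m = performance([True] * 20 + [False] * 4, [1.6] * 20 + [np.nan] * 4)
```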
Dual-task effect (DTE) was calculated as:

DTE = (dual-task performance − single-task performance) / single-task performance × 100%  (1)

separately for the cognitive task (DTE C ) and motor task (DTE M ), and separately for conditions involving the low-difficulty and high-difficulty motor task. Negative DTE values indicate performance deterioration, or dual-task cost, while positive DTE values indicate an improvement, or dual-task benefit (Plummer & Eskes, 2015). DTE total was calculated as the average of DTE C and DTE M to provide an overall index of CMI. Priority was calculated as DTE M − DTE C , with positive values indicating motor priority and negative values indicating cognitive priority.
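A worked sketch of Equation (1) and the derived measures, with illustrative numbers (variable names are assumptions):

```python
def dte(dual, single):
    """Dual-task effect (Equation 1), in %: negative = cost, positive = benefit."""
    return 100.0 * (dual - single) / single

# Worked example for one participant and one difficulty level
p_c_single, p_c_dual = 40.0, 32.0     # cognitive performance, %/s
p_m_single, p_m_dual = 50.0, 45.0     # motor performance, %/s

dte_c = dte(p_c_dual, p_c_single)     # -20.0% -> dual-task cost, cognitive
dte_m = dte(p_m_dual, p_m_single)     # -10.0% -> dual-task cost, motor
dte_total = (dte_c + dte_m) / 2.0     # -15.0% -> overall index of CMI
priority = dte_m - dte_c              # +10.0% -> motor priority
```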
Based on the values of DTE C and DTE M , participants were classified according to the following patterns of CMI (see Figure 2; based on Plummer et al., 2013, 2014; Plummer & Eskes, 2015): (a) mutual interference, insufficient attentional resources; (b) capacity sharing with primary allocation to one task, insufficient attentional resources; (c) over-allocation of attention to one task; (d) no interference, sufficient attentional resources. Threshold values for interference and facilitation were set at −5% and +5%.
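The classification rule with the ±5% thresholds can be sketched as follows; the mapping of mixed cases follows the verbal definitions of patterns (a)-(d) above, and the function name and labels are illustrative:

```python
def cmi_pattern(dte_c, dte_m, thr=5.0):
    """Map a (DTE_C, DTE_M) pair onto one of the four CMI patterns."""
    worse_c, worse_m = dte_c < -thr, dte_m < -thr
    better_c, better_m = dte_c > thr, dte_m > thr
    if worse_c and worse_m:
        return "mutual interference"                       # pattern (a)
    if (worse_c and better_m) or (worse_m and better_c):
        return "over-allocation of attention to one task"  # pattern (c)
    if worse_c or worse_m:
        return "capacity sharing, primary allocation"      # pattern (b)
    return "no interference / facilitation"                # pattern (d)

print(cmi_pattern(-12.0, -8.0))   # -> mutual interference
```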
| Statistical analysis
In total, 49 PD patients, 45 stroke patients, and 56 controls (i.e., 150 out of the 171 originally included participants) were included in the group comparisons and analysis of CMI patterns. The remaining participants were excluded from all statistical analyses for the following reasons. Five PD patients and one stroke patient were unable to complete one or more tasks due to fatigue (caused by the larger study protocol of which this experiment was part). P C could not be evaluated in one PD patient with severe speech problems and in another PD patient due to technical issues. One control participant was unable to perform the cognitive task. P M could not be evaluated in 11 stroke patients who had a very limited reachable workspace area (<0.2 m²). DTE M could not be calculated in one PD patient due to a single-task P M of 0 %s⁻¹.
All statistical analyses were performed using IBM® SPSS® Statistics 23.0 (IBM Corp., Armonk NY). Normality curves were inspected and Kolmogorov-Smirnov tests were used to assess whether data were normally distributed. In total, six outliers were observed for DTE, which were attributable to very low baseline values (distributed over one PD patient, three stroke patients, and one control participant; equally distributed over cognitive/motor tasks and low-/high-difficulty levels). To prevent these outliers from having a disproportionate impact on the statistical analysis of this variable, they were replaced by the mean minus two standard deviations of the remainder of the group (Field, 2009). Statistical analyses were conducted to compare either PD patients versus controls or stroke patients versus controls. Group differences in single-task performance in both the cognitive and motor domain were first evaluated. Single-task P C was compared between groups (PD patients vs. controls, stroke patients vs. controls) using independent t tests. Single-task P M was submitted to mixed analyses of variance (ANOVAs) with group (separate analyses for comparing PD vs. controls, or stroke vs. controls) as between-subject factor and difficulty (low vs. high) as within-subject factor. To test our hypotheses that CMI would be greater in patients compared to controls, and that a higher complexity of the motor task would be detrimental to dual-task performance, DTE was submitted to mixed ANOVAs with group (PD vs. controls or stroke vs. controls) as between-subject factor and with task (cognitive vs. motor) and motor-task difficulty (low vs. high) as within-subject factors. In order to explore whether DTE results were influenced by single-task performance, which is in the denominator of Equation (1), we repeated the analysis of DTE using a linear mixed model with single-task performance as a covariate. In a similar way, we explored whether single-task P M and DTE results were influenced by the individually determined reachable workspace area (results are presented in Supporting Information Appendix S1B). Effect sizes were quantified as Pearson's r for independent t tests and as partial eta squared (η p 2) for ANOVAs. Significance was set at p < 0.05. For ANOVAs, significant interaction effects were analyzed using simple effects analyses, which yielded the effect of one independent variable at individual levels of the other independent variable (Field, 2009).
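The outlier-replacement step can be sketched as follows (the analyses themselves were run in SPSS; this Python fragment only illustrates the rule, and it assumes "the remainder of the group" means the other group members' values on the same measure):

```python
import numpy as np

def replace_outlier(values, outlier_idx):
    """Replace one outlying DTE value by mean - 2*SD of the remaining
    group members (Field, 2009), limiting its leverage on the analysis."""
    values = np.asarray(values, dtype=float).copy()
    rest = np.delete(values, outlier_idx)
    values[outlier_idx] = rest.mean() - 2.0 * rest.std(ddof=1)
    return values

# Example: one extreme dual-task cost pulled back toward the group
group_dte = [-12.0, -8.0, -15.0, -240.0, -10.0, -6.0]
print(replace_outlier(group_dte, 3))   # the -240.0 becomes ≈ -17.2
```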
To explore whether patterns of CMI differed between patients and controls, we used chi-square tests to compare the overall frequency distribution of CMI patterns for PD patients versus controls and for stroke patients versus controls, separately for each difficulty level of the motor task. Effect size was quantified as Cramér's V.
Within each patient group, we aimed to determine whether CMI was greater in more severely affected patients. To this end, we first calculated a "combined clinical severity score" (CCSS) for each patient from the clinical ratings of cognitive function and motor function. Clinical ratings were converted to Z-scores (for PD patients) or rankings (for stroke patients) and averaged over the two domains, such that lower CCSS values reflected more severely affected patients. We evaluated whether CCSS was associated with overall dual-task interference (DTE total ) using Pearson's correlation coefficient for PD patients and Spearman's correlation coefficient for stroke patients. We subsequently evaluated whether attention allocation reflected the cognitive and/or motor abilities. Partial correlation analyses were used within each patient group to assess the unique contribution of impairments in the cognitive domain (correcting for clinical ratings of motor function) and impairments in the motor domain (correcting for clinical ratings of cognitive function) to dual-task effects (DTE total , DTE C , and DTE M ) and Priority. Specifically, the SCOPA-COG score and MDS-UPDRS-III score were used within the PD group as clinical ratings of cognitive function and motor function, respectively, and Pearson's correlation coefficient was used for partial correlations. The MoCA score and FM-UE score were used within the stroke group as clinical ratings of cognitive function and motor function, respectively, and Spearman's correlation coefficient was used for partial correlations. Within each group, we also explored whether attention allocation was related to perceived "fun" and "difficulty" of the tasks (methodological details and results of this analysis are presented in Supporting Information Appendix S1C).
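A minimal sketch of the CCSS computation; the sign flip for MDS-UPDRS-III (where higher scores indicate worse motor function) is our assumption about how the scores were oriented so that lower CCSS reflects more severe impairment:

```python
import numpy as np
from scipy.stats import rankdata, zscore

def ccss_pd(scopa_cog, mds_updrs_iii):
    """PD: z-scores averaged over the cognitive and motor domains.
    MDS-UPDRS-III is negated so that, as for SCOPA-COG, higher = better."""
    return (zscore(np.asarray(scopa_cog, float)) +
            zscore(-np.asarray(mds_updrs_iii, float))) / 2.0

def ccss_stroke(moca, fm_ue):
    """Stroke: within-group rankings averaged over the two domains
    (higher MoCA and FM-UE scores both indicate better function)."""
    return (rankdata(moca) + rankdata(fm_ue)) / 2.0
```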
| RESULTS
Significant results for group comparisons of single-task performance, dual-task effects, and patterns of CMI are presented in Table 2. Results of associated post hoc analyses are described in the following sections. Correlation coefficients between dual-task effects and clinical tests are presented in Table 3.

F I G U R E 2 Patterns of CMI (based on Plummer et al., 2013, 2014; Plummer & Eskes, 2015): (a) both tasks deteriorate ("mutual interference"), indicating insufficient attentional resources; (b) deteriorated performance on one of the tasks but not the other ("capacity sharing with primary allocation to one task"), indicating that one of the tasks is prioritized in an attempt to preserve performance in this domain when attentional resources are insufficient; (c) improvement on one task at the cost of deteriorated performance on the other task ("over-allocation of attention to one task"), which may be, but is not necessarily, due to insufficient attentional resources; (d) no interference or even facilitation, indicating sufficient attentional resources. Dotted lines at −5% and +5% indicate threshold values for interference and facilitation
| Single-task performance
Single-task P C was not significantly different between PD patients and controls (Figure 3a), whereas single-task P M was significantly lower in PD patients compared to controls (Figure 3b). Both single-task P C and single-task P M were lower in stroke patients compared to controls (Figure 3a,b). In all three groups, P M was lower for the high-difficulty compared to the low-difficulty level of the motor task (p < 0.001). This difficulty effect was more pronounced for controls (η p 2 = 0.69) than for PD patients (η p 2 = 0.46) and stroke patients (η p 2 = 0.32).
| Dual-task effects
Parkinson's disease patients experienced more interference (i.e., more negative values of DTE) than controls (main effect of group; Figure 3c). DTE was not different between stroke patients and controls (main effect of group, p = 0.81; Figure 3c). There were no significant interactions between group and task or motor-task difficulty. Follow-up analyses on the interaction between task and motor-task difficulty yielded largely similar results for analyses based on PD/controls and analyses based on stroke/controls. Specifically, interference on the cognitive task markedly increased for the high-difficulty compared to the low-difficulty level of the motor task (i.e., more negative DTE C when obstacles were introduced; PD/controls: p < 0.001, η p 2 = 0.45; stroke/controls: p < 0.001, η p 2 = 0.35), while interference on the motor task tended to decrease (i.e., slightly less negative DTE M ; PD/controls: p < 0.001, η p 2 = 0.13; stroke/controls: p = 0.05, η p 2 = 0.04). The high-difficulty motor task was prioritized over the cognitive task (i.e., DTE C more negative than DTE M ; PD/controls: p < 0.001, η p 2 = 0.29; stroke/controls: p < 0.001, η p 2 = 0.19), whereas the cognitive task tended to be prioritized over the low-difficulty motor task (i.e., DTE C less negative than DTE M ; PD/controls: p = 0.02, η p 2 = 0.06; stroke/controls: p = 0.06, η p 2 = 0.04).

T A B L E 2 Significant statistical results for group comparisons of single- and dual-task performance and patterns of CMI
| Patterns of CMI
The frequency distribution of participants over the four patterns of CMI was significantly different between PD patients and controls for the low-difficulty level of the motor task (χ²(3) = 16.44, p < 0.001, V = 0.40). This difference can easily be appreciated from Figure 4a,b: 67% of the PD patients fell within the "mutual interference" category (i.e., the lower left quadrant), compared to only 29% of controls. This difference between PD patients and controls failed to reach significance for the high-difficulty level of the motor task (χ²(3) = 7.12, p = 0.07, V = 0.26).
The frequency distribution of participants over the four patterns of CMI was similar between stroke patients and controls for the low-difficulty level of the motor task (χ²(3) = 0.37, p = 0.96, V = 0.06), but differed between these groups for the high-difficulty level of the motor task (χ²(3) = 8.02, p = 0.04, V = 0.28). From Figure 4d,f, it can be appreciated that stroke patients more often fell within the "mutual interference" category (42% of stroke patients vs. 32% of controls) or the "over-allocation" category (38% of stroke patients vs. 23% of controls), while controls more often fell within the "capacity sharing" category (11% of stroke patients vs. 34% of controls).
| Correlations with clinical tests
Significant positive correlations were observed between CCSS and DTE total within both patient groups (Table 3). More severely affected patients (i.e., patients with lower values of CCSS) thus experienced more CMI under dual-task conditions (reflected by more negative values of DTE total ) than less affected patients.
Associations with impairments in either domain (cognitive, motor) can also be appreciated from Table 3. For PD patients, reduced cognitive function (i.e., lower score on SCOPA-COG) was associated with more deterioration of the cognitive task under dual-task conditions (i.e., more negative DTE C ). Impaired motor function (i.e., higher score on the MDS-UPDRS-III) was not associated with any DTE measure. For stroke patients, reduced cognitive function (i.e., lower score on MoCA) was not associated with any DTE measure, whereas impaired motor function (i.e., lower score on the FM-UE) was associated with more dual-task interference (i.e., more negative DTE total ). For both groups, no significant associations were observed between Priority and clinical ratings of cognitive or motor function.

F I G U R E 3 Results for (a) single-task cognitive performance P C ; (b) single-task motor performance P M ; (c) dual-task effects in each domain (cognitive: DTE C ; motor: DTE M ) complemented by the overall dual-task effect (DTE total ). Individual data points are presented. Bars represent mean values and error bars represent standard errors. **p < 0.01. a Statistical results for DTE are presented in Table 2 and described in the text
| DISCUSSION
To our knowledge, this is the first study that systematically evaluated patterns of CMI during upper-limb motor control in a large sample of healthy individuals and two highly prevalent neurological conditions associated with deficits of cognitive and motor processing (PD and stroke).
As expected, healthy individuals experienced CMI during simultaneous performance of a cognitive task and a goal-directed upper-limb motor task, especially under challenging high-difficulty conditions of the motor task. Interference on the cognitive task markedly increased when obstacles were introduced in the motor task (i.e., more negative values of DTE C ), whereas interference on the motor task slightly decreased (i.e., less negative values of DTE M ). The high-difficulty motor task thus demanded, and was allocated, more attention than the low-difficulty motor task, albeit at the cost of a deterioration of cognitive task performance (illustrated in Figure 4 by a shift toward the left side in all groups, most clearly observed in the control group). The low-difficulty motor task was associated with less interference on the cognitive task, without a clear prioritization of one of the tasks (at group level).
In accordance with our hypotheses, patients with neurological deficits showed different patterns of CMI compared to healthy individuals, depending on diagnosis (PD or stroke) and severity of cognitive and/or motor symptoms. PD patients experienced greater CMI than controls, with the majority of patients showing interference in both the cognitive and the motor domain. Attentional demand thus exceeded capacity (i.e., attentional resources were insufficient) in the majority of PD patients. In contrast to our expectations, stroke patients in general did not experience greater CMI than controls. Substantial heterogeneity within this patient group (in terms of lesion location and severity of cognitive and motor impairments) may have played a role in this regard. Indeed, the patterns of CMI were more variable within the group of stroke patients compared to the control group, especially with regard to DTE M (i.e., larger dispersion along the y-axis of Figure 4).
In both patient groups, the correlation between CCSS and DTE total indicated that CMI was greater in more severely affected patients. Differences between the two patient groups became apparent concerning the unique contributions of impairments in the cognitive and motor domain. Within the group of PD patients, the degree of interference during dual-task conditions appeared more related to cognitive function than to motor function. This finding potentially illustrates the impact of cognitive impairments on daily life activities in PD patients (Leroi, McDonald, Pantula, & Harbishettar, 2012; Rosenthal et al., 2010), who depend on cortical executive control even for routine tasks due to basal ganglia dysfunction (Redgrave et al., 2010). In contrast, within the group of stroke patients the degree of interference during dual-task conditions appeared to be more related to motor function than to cognitive function. This suggests that especially stroke patients with severe motor dysfunction experience CMI due to increased cognitive involvement in motor control (in line with Houwink et al., 2013). Although circa 50% of the included stroke patients fulfilled the criteria for mild cognitive impairment (i.e., MoCA score <26), the impact of these relatively mild cognitive symptoms seems limited.

F I G U R E 4 Patterns of CMI for controls (a, d), PD patients (b, e) and stroke patients (c, f), with separate plots for the low-difficulty (a-c) and high-difficulty (d-f) level of the motor task. Each circle represents one patient. Based on values of DTE C and DTE M , circles are color-coded according to the four main patterns of CMI presented in Figure 2: black = mutual interference; dark gray = capacity sharing with primary allocation to one task; light gray = over-allocation of attention to one task; white = no interference, or facilitation. Dotted lines at −5% and +5% indicate threshold values for interference and facilitation
Previous studies, which have mainly focused on CMI during highly automated gross motor activities such as walking or maintaining balance, have revealed that healthy individuals typically show a reduction of walking speed or an increase of variability measures (indicating reduced stability) while dual-tasking. Stronger effects have been reported in elderly subjects, in subjects with mild cognitive impairment, in PD patients, and in subacute and chronic stroke patients with globally intact cognition (for reviews see Kelly et al., 2012; Amboni et al., 2013; Plummer et al., 2013). Although no definitive strategy regarding attention allocation and task prioritization has been identified, it should be noted that most studies reported interference in gait or balance, while DTEs in the cognitive domain were more variable (Plummer et al., 2013; Rochester et al., 2014; Smulders, van Swigchem, de Swart, Geurts, & Weerdesteyn, 2012). Similar findings of interference in the motor domain (with variable DTEs in the cognitive domain) have been reported in a few small studies involving upper-limb motor tasks such as writing (Broeder et al., 2014), circle drawing (Houwink et al., 2013), and isometric force matching (Alberts et al., 2008; Frankemolle et al., 2010). In line with these previous studies we also observed interference in the motor domain, which was accompanied by interference in the cognitive domain to a greater or lesser extent depending on task difficulty. Our results suggest that the motor task was allocated more attention when its difficulty was increased, at the expense of increased interference in the cognitive domain. The large variation in patterns of CMI within each group (Figure 4), however, points to considerable interindividual differences in attentional capacity and attention allocation.
Our results further suggest that healthy individuals were flexible in their attention allocation (see also Supporting Information Appendix S1C): they tended to prioritize the more "fun" task when task complexity allowed (i.e., with the low-difficulty motor task), whereas they prioritized the motor task under more challenging conditions (i.e., with obstacles, high-difficulty motor task), perhaps to preserve at least a "minimally acceptable level of performance". Patients with neurological deficits seemed less flexible in their strategy: performance in dual-task conditions appeared more related to their cognitive and/or motor abilities than to fun ratings for the respective tasks. Attention allocation, however, was not simply reflective of cognitive or motor abilities. Together, these findings underscore that the mediators of dual-task interference are more complex than cognitive and motor abilities combined with "a core motivation to minimize danger and maximize pleasure" (Williams, 2006): cognitive reserve, compensatory abilities, personality, affect, and expertise may also play a role (Yogev-Seligmann, Rotem-Galili, Dickstein, Giladi, & Hausdorff, 2012). The self-selected strategy for task prioritization may thus differ between individuals, between different combinations of dual-tasks (e.g., when difficulty of the motor task is increased), and even between measurement sessions (which may result in low test-retest reliability for DTE measures, see Supporting Information Appendix S1A).
Compared to previous works, our study has some important advantages. Firstly, our study includes a large(r) number of participants with a varying degree of cognitive and motor impairments, which allowed us to not only compare CMI between patients and controls, but also to explore the associations between DTEs and clinical tests of cognitive and motor function. Secondly, our study provides insight into the relationship between DTEs in the cognitive and motor domain (on group level as well as on individual level), revealing different patterns of CMI between groups and between individuals. Thirdly, speed-accuracy trade-off is taken into account in quantifying cognitive performance. A limitation of this study is that the upper-limb motor task could not be performed in severely affected patients with a very limited reachable workspace area (<0.2 m²) because measurement errors were relatively large compared to the small amounts of voluntary movement, especially when the arm was held close to the trunk (as is often the case in severely affected stroke patients). This may have biased our results toward an underestimation of CMI in stroke patients. Before drawing general conclusions from this study, several other considerations should be taken into account as well. Firstly, our study was not intended to find the specific brain areas involved in CMI. This would require a more homogeneous stroke population in terms of location of the lesion. Secondly, additional analyses presented in Supporting Information Appendix S1B showed that our findings, which were obtained in patients with reachable workspace >0.2 m², were not attributable to or distorted by individual differences in reachable workspace area (and the associated differences in movement distance between the targets) or individual differences in single-task performance. When evaluating CMI in individual patients, however, it should be taken into account that a small deterioration or improvement of performance under dual-task conditions can lead to disproportionally large DTE values in patients with low single-task performance. For example, the greater variation of CMI patterns within the group of stroke patients (Figure 4) is partly due to low single-task performance in the cognitive and/or motor domain. Changes in absolute measures of single- and dual-task performance (e.g., dual-task P M in %s⁻¹) should therefore be considered in addition to relative measures of dual-task performance (DTE in %; as recommended by Agmon, Kelly, Logsdon, Nguyen, & Belza, 2015; Plummer & Eskes, 2015). Thirdly, the DTE measures in this cross-sectional study provided useful insight into processes underlying CMI: they were sensitive to different levels of task complexity, different neurological conditions, and different levels of disease severity. Unfortunately, test-retest reliability of the DTE measures appeared to be insufficient for use in longitudinal studies (see Supporting Information Appendix S1A). Fourthly, the present study focused on a gross measure of upper-limb motor control (i.e., percentage of collected targets divided by the average "catch time"), but the collected motion data also allow for a more detailed analysis of motor function (e.g., quantifying the relative contribution of arm vs. trunk movements in stroke patients and evaluating changes in their relative contribution in response to treatment (van Kordelaar et al., 2012)).
Finally, current time-consuming steps in postprocessing (e.g., the manual scoring of responses on the cognitive test and manual removal of "non-responses" that was required in some cases) need to be further automated for implementation in the clinical setting.
Within the patient groups only weak associations between clinical ratings of cognitive or motor function and DTE measures were observed, suggesting that DTE measures reflect a different construct than the unidimensional clinical tests. It remains to be investigated whether these DTE measures are a better indicator of difficulties with daily life activities that require adequate interaction with the environment and/or involve the simultaneous performance of two or more tasks. Our current findings underscore the added value of DTE measures in both the cognitive and motor domain, as they provide insight into overall attentional capacity as well as attention allocation in patients with neurological deficits. It may tentatively be suggested that dual-task training (if possible using increasing levels of task complexity) provides opportunities for improving upper-limb motor control in daily life.
In conclusion, our findings show that healthy individuals experienced CMI during simultaneous performance of a cognitive task and a goal-directed upper-limb motor task, especially under challenging conditions of the motor task. CMI was greater in PD patients, presumably due to insufficient attentional capacity in relation to increased cognitive involvement in motor control. Although no general increase of CMI was observed in chronic stroke patients, our results suggest that especially stroke patients with severe motor dysfunction experience CMI due to increased cognitive involvement in motor control.
"year": 2018,
"sha1": "860444fe1d7c384435ea928304b41d2768683943",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ejn.14168",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "860444fe1d7c384435ea928304b41d2768683943",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Policing Commercial Sex in 1970s France: Regulating the Racialized Sexual Order
Based on multi-sited archival research, this article examines the racialized regulation of commercial sex in 1970s France, and whether and how this was intertwined with the protection of a racialized, gendered, and class-based sexual order. In doing so, this article contributes to a contextualized and historicized analysis of the construction of race and colour-blindness in French legislation and law enforcement. During and after the Algerian War, colonial anxieties about sexual threats posed by North African male labour migrants in the French metropole played a role in the discussion on commercial sex and motivated politicians, policymakers and journalists to argue for its selective tolerance. The author argues that the indirect legislation on commercial sex granted discretionary power to the police to protect the sexual order through colourblind justifications. This enabled law enforcement to implement and enforce universalist legislation ‘from below’ in a racially particularistic way.
Introduction
'Fortunately, there are these houses [brothels]; without these you would be raped! […] We need these [prostitutes sic] for all these men!'

Police officer (Goutte d'Or, 1977)

The above quote illustrates the attitude of the French police toward the regulation of commercial sex in the 1970s. At that time in France, pimping and soliciting, including the operation of brothels, were criminalized. Yet, in response to a question from a white female reporter from the local newspaper, a white police officer operating in the working-class neighbourhood of La Goutte d'Or in Paris exclaimed that the existence of brothels should be tolerated. The policeman was implicitly referring to the activities of single male migrant workers from the former French colonies in North Africa. The French authorities and wider society were concerned about the presence of these single men from the African continent. Anxieties expressed about the deviant and excessive sexuality of racialized men were used to justify such anti-immigrant rhetoric (Shepard, 2012, 2018). At the time, the French government was enforcing a relatively open immigration regime because the economy required cheap, temporary, male migrant labour (Weil, 1995; Sayad, 1980; Sayad, 1997). In this context, prostitution became a central discursive and regulatory issue that mitigated and articulated these anxieties. This paper examines the importance of race in the interplay between gender and class in the control of commercial sex in France in the 1970s by looking at the policing and regulation of commercial sex for postcolonial North African migrants.
It does this by investigating how the regulation of commercial sex was also a form of regulation of interracialized intimacies. Thompson (2009) has argued that interracialized intimacies are not necessarily just regulated through prohibition but also through indirect forms of control. This article draws upon insights from scholars on gender, intimacy and colonialism who have argued that the regulation of interracialized sex and intimacy has been integral to constructing and protecting racial hierarchies within the colonial sexual order (Stoler, 2010; Stoler, 1989; Stoler, 1995; Ray, 2015). Thus, to manage interracialized heterosexual contact, the colonial administration regulated prostitution to ensure that colonized men did not have access to commercial sex with white women (Howell, 2000; Razack, 1998; Staszak, 2014; Taraud, 2003). Research on the regulation of interracialized intimacies should be attuned to such indirect types of regulation that aim to make possible some forms of intimacy while preventing or restricting others.
Before proceeding, a note on terminology is warranted. I use the concept 'interracialized intimacies' to underline the processes of racialization involved in the designation of certain intimacies as interracial, based on Jin Haritaworn's (2007) understanding of interraciality. Moreover, I will use the term 'prostitute' (in quotation marks) in direct quotes and use 'prostitution' to reflect the historical discourse because this was the term in use at the time. However, I use 'commercial sex', 'sex worker' and 'sex buyer' as preferred current terminology.
Looking at the regulation of commercial sex contributes more generally to research on the construction of race in French society and law. In the United States, the field of critical race studies has looked at the law's involvement in constructing and perpetuating racial economic and social hierarchies (Möschel, 2011). This scholarship has, in particular, demonstrated that the contemporary paradigm of colour-blindness that normalizes the (selective) denial of race contributes to racial inequalities by obscuring racial logics (Delgado and Stefancic, 2017; Bell, 2005). Despite differences between the US and Europe, researchers on race have highlighted the urgency of critical race studies and the study of colour-blindness in continental Europe (Staiano, 2015; Moschel, 2007; Beaman and Petts, 2020). The French case is particularly interesting because of the political, academic, and societal rejection of research into race and racialization as 'un-French', despite the long tradition of anticolonial and decolonial resistance and scholarship across the (former) French colonial field (Fanon, 2002; Fanon, 1952; Césaire, 1989).
However, recently, interest in critical race studies and a so-called colonial turn in France have been gaining ground (Mack, 2021; Saada, 2014; Fassin and Fassin, 2013). Critical researchers on race in France have argued that French republican universalism has always coexisted with colonialism: the ideal of republican universalism was made possible through geographical particularism that renders the explicit mentioning of race invisible (Stovall and Van den Abbeele, 2003; Cooper, 2009; Zevounou, 2021b). In a recent special issue of La Revue des Droits de l'Homme [The Human Rights Review] on race and law, Zevounou (2021a, 2021b) argues for the need to uncover the specific historical and contemporary mechanisms that have led to the silencing of race in French law and in the administrative apparatus in order to help understand the experiences of formerly colonized people living in the metropole. This paper contributes to this call: it gives a contextualized and historicized analysis of the construction of race in French law and law enforcement by looking at the regulation of indirectly targeted postcolonial migrants in the wake of decolonization.
In so doing, this paper also speaks to critical race feminist scholarship on commercial sex. Researchers have argued that the regulation of commercial sex can reveal underlying concerns about the social order (Mainsant, 2013a; Phoenix, 2009). More specifically, scholars have used a critical race (feminist) perspective to show how the regulation of the social order is dependent on the intersections of race, gender, class and migration status of people selling sex (McClintock, 1992; Benoit et al., 2019b; Benoit et al., 2019a; Butler, 2015). In contemporary discussions, this line of argument takes a position on commercial sex as either exploitation or work, based on contesting concerns about sex, women's bodies, morality and labour/exploitation (Scoular, 2010). This scholarship, however, does not sufficiently explore the intersectional identities of those buying sex and the impact of these characteristics on regulation. Some work does look at sex buyers, but it focuses on their experiences and/or their objectification of people selling sex rather than on a critical race perspective on the law (Coy et al., 2019; Coy, 2008; Ondrášek et al., 2018). However, this article contributes to this area of research by specifically exploring how law enforcement has targeted sex buyers through regulation of the social order.
The sex buyers upon which this paper focuses were postcolonial labour migrants from North Africa, who lived and worked in France after the political decolonization of the former French territories and protectorates in North Africa. I will demonstrate how racialized and gendered sexual anxieties about migrant workers played a role in the police regulation of commercial sex and how this played out through the racialization of legislation and its enforcement. In order to do so, I first set out the archival research methodology on which this paper is based. Then, I show how the discursive problematization of North African migrant workers in France as sexual threats played a role in ongoing discussions about commercial sex. I demonstrate how this in turn impacted upon the regulation of commercial sex and, specifically, the actions of the police. By looking at the societal and political discursive context, I argue that the outcome of the police's approach was to reproduce sexual hierarchies that regarded prostitution as a solution to the problematization of North African migrant men's presence in France and as a way of indirectly regulating interracialized intimacies.
Methodology
This article is based on original archival research conducted for my PhD study of the regulation of interracialized intimacies in France, which was ethically approved by the Ethics Committee of Juridical and Criminological Research of the VU University of Amsterdam on 16 July 2018. I consulted both state and non-state archives that dealt with immigration and prostitution in France from 1956 to 1979. Specifically, I consulted the Ministry of Justice archives at the national archives and administrative police archives in the departmental archives of the Bouches-du-Rhône department. Parts of these archives were consulted with special permission under Article 213-2 of the Code du Patrimoine.
I conducted my examination of these state archives based on a keyword search on 'prostitution' and criminalized acts concerning prostitution, such as 'soliciting', 'procurement', 'attacks against morality'. I also used different keywords referring to migrants from North Africa: namely, 'Algerians', 'North Africans', 'migrant workers', 'French Muslims', 'Tunisians' and 'Moroccans'. Based on these results, I then searched the inventories compiled by archivists on certain state services, ministries, or themes to obtain a comprehensive overview of the structure of the archive, which helps trace the interconnections between prostitution and migration.
These archives give an insight into state practices on law enforcement on prostitution in the context of postcolonial migration. The national archives provide an understanding of the broader regulation of the presence of North African migrants in the metropole. The archives of the Ministry of Justice shine a light on the higher decision-making processes of policing prostitution, whereas the administrative police archives of the department of the Bouches-du-Rhône contribute to an understanding of enforcement by local police. At that time, the Bouches-du-Rhône department had a relatively high concentration of North African migrants and migrant workers, given that its main city of Marseille functioned as the central port between France and North Africa (Londres, 2013).
To analyse societal and state discourse on prostitution, I also consulted the Institut National de l'Audiovisuel: the audio-visual archive of France that stores all television broadcasts aired on French television. Television emerged in the 1960s and 70s as the dominant medium in France with a civic and educational mission (Cohen & Levy, 2007). The national agency charged with radio and television, until 1974 the Radiodiffusion-Télévision Française, was under the authority of the Minister of Information. A law of 7 August 1974 reformed and liberalized television, placing it outside of strict state control but still remaining 'less liberal' than in other European democracies (Blum, 1984). These broadcasts are therefore illustrative of state-sanctioned discourse on prostitution and immigration.
Furthermore, I also consulted private and media archives to retrieve media coverage on the two above-intertwined topics. I consulted the private archives of Father Roger de La Pommeraye, an abolitionist priest who worked at the organization Les Amicales du Nid. The latter was a prominent Catholic abolitionist organization that offered support to women selling sex who wanted to leave their situation. These archives give an overview of written media coverage on sex work in the 1960s and 1970s in the mainstream and abolitionist press. Moreover, I consulted the private archives of professor and deputy Mayor of Marseille Jean Chélini, who had a prominent voice in the Marseille area on issues of immigration. His archives include a collection of articles classified under 'the polemic on North African immigration after the murder of a bus driver in Marseille 1973'. I also examined the online archives of the local neighbourhood newspaper Goutte d'Or of the working-class immigrant neighbourhood of the same name in Paris, which aimed to represent the perspective of its residents on local issues. I also consulted feminist media at the Bibliothèque Marguerite Durand [Marguerite Durand Library]. Even though these archives are not exhaustive, they enable an analysis of a range of media that was discussing the interconnections between prostitution and North African migration from different perspectives.
To understand the racial formations in France and elsewhere, Ann Laura Stoler has made a case for the necessity to 'ask who and what are made into "problems", how certain narratives are made "easy to think", and what "common sense" such formulations have fostered and continue to serve' (Stoler, 2011). This is because the workings of race and racism are made implicit rather than explicit in the logic and rhetoric of colourblind universalism in France (Beaman and Petts, 2020). Connecting this to regulation and legislation, I build on work of the legal historian R A Gordon who has argued that the 'power of legal regimes' lies not necessarily in the disciplining of violators, but rather in 'its capacity to persuade that the world described in its image and categories is the only attainable world' (Gordon, 1984: 109). Therefore, I look at the regulation of prostitution not as (only) imposing legislation, but rather, as presenting it as the 'natural order of things'.
To this end, I build upon critical interventions in archival studies to approach the archives not as historical facts but as creators of the narratives they conserve. Subaltern studies scholars, and postcolonial theory scholars more generally (Chakrabarty, 2015; Chakrabarty, 2000; Guha, 1983), have called for interrogation of the archive as a technology of power that creates rather than reflects forms of knowledge. Following these interventions, researchers have argued for reading the archives 'against the grain' to uncover the unsaid, or a 'reading along the grain' to trace rationales that structure colonial governance, rather than a more conventional 'mining' of the archives (Stoler, 2002; Burton, 2006). In order to do this, the tracing of inconsistencies, silences, visibilities, assumptions, and self-evidences is essential to an understanding of the underlying governmental rationales and to uncover the construction and force of race.
Besides tracing racialization in the archives, archival research on sex and commercial sex should be critical of the identification of the object of study in the archive. The law and law enforcement transform commercial sex into the problem of prostitution (Mainsant, 2013b; Scoular, 2015). What is defined and understood as 'prostitution' is dependent on the politicized and moralistic meaning of the term, which makes it impossible to uncover the truth about prostitution and the lives and work of sex workers (Gilfoyle, 1994). In this article, I am not so much looking into the nature of commercial sex and its lived experiences, but rather at how forms of commercial sex are made into either problems or solutions.
Migrant Workers and Sex Workers
Before investigating legislation and law enforcement, I will first set out the societal context in which the policing of prostitution took place at the time. Whereas the histories of the reconfigurations of sexual and gender norms are often treated separately from the histories of immigration in Europe, this paper recognizes that these developments were entangled (Shield, 2017; Gordon, 2012). Spearheaded by the societal upheaval of May '68, different societal groups, such as the feminist and gay movements, renegotiated gender and sexual norms over the course of the 1970s (Ross, 2008; Fishman, 2017). At the same time, the authorities started to increase immigration control to limit labour migration. The oil crisis of the 1970s and the subsequent economic downturn led to widespread unemployment, including of migrant workers, and this in turn fuelled a backlash against immigration.
The historian Todd Shepard (2018) demonstrated that media outlets, politicians and policymakers articulated anti-immigration sentiments through discursive constructions that smeared North African men as threats to French women and, accordingly, the French population. During the Algerian war of independence, discourse on Algerians (re)activated gendered and sexualized imaginaries that had served to legitimize colonial hierarchies in the colonial context (Shepard, 2017; André, 2016). These stereotypes framed North African men as sexually deviant: both as hyper-masculine threats and also as overly 'feminine' men who transgressed heteronormative gender roles through homo-social behaviour (Blanchard, 2008; Blanchard, 2012). These discursive constructions were thus colonial continuities that outlived the Algerian war and became increasingly mainstream in the 1970s.
Various societal groups and political figures from the left of the political spectrum criticized the framing of sexually dangerous North African men based on arguments on so-called sexual misery. The sexual revolution had opened up the discussion on recreational sex, especially among those on the (far) left, making them more attuned to arguments on sexual misery. As a result, they raised concerns about the unfulfilled so-called sexual needs of North African migrant workers, as explained in the book La plus haute des solitudes. The Moroccan writer and researcher Ben Jelloun (1977) argued that North African male migrant workers were victims of an alienating migration experience resulting in loneliness and sexual frustration. He explicitly situated his argument in an antiracist discourse because he understood sexual exclusion to be a 'colonial wound'. In line with this argument, some left-wing commentators criticized white French women for refusing to sleep with Arab men, calling the refusal a form of sexual racism (Shepard, 2017; Shepard, 2012; Bourg, 2009). The argument on sexual misery is thus explicitly anti-racist but at the same time constructs a male entitlement to sex.
Arguments on sexual misery also functioned to justify the problematization of North African male migrants' presence in the French metropole. This was, for example, illustrated in a television broadcast that aired in 1975 which discussed 'The Problem of Racism in the South' in the aftermath of racist attacks against Algerians in the South of France (Mise au Point, 1975). The broadcast stated that 'Arabs are systematically singled out as perpetrators of all sexual assaults committed against women […] We think they live amongst men and are therefore in a state of sexual misery' (Mise au Point, 1975). The discourse on both sexual misery and sexual violence reiterated both the sexual exclusion and sexual deviancy of North African single men.
In this context, prostitution emerged as an issue that linked concerns about migrant workers to anxieties about sexual norms and morality. Politicians and commentators discussed female prostitution both as a consequence of and as a solution to the sexual misery of North African men and the risks posed by their sexual aggression. Generally, the provision of commercial sex for migrant workers was considered the lowest and most inhumane form of sex work. Media and commentators referred to it as the 'prostitution de la misère' [prostitution of misery], alluding to both migrant workers and sex workers living in misery. In media outlets, cheap brothels that had, or were considered to have, a majority of North African clientele and where women had a daily 'passage' [number of clients] of 150 men were called 'maisons d'abattage' [slaughterhouses]. This was a term used in the 18th century to designate fast and cheap brothels and was widely used in the colonial context in North Africa (Taraud, 2003). In discussions of commercial sex, the media labelled certain neighbourhoods with many migrant residents, such as La Goutte d'Or in Paris, as the ultimate hotbeds of prostitution. These brothels and neighbourhoods exemplified the problem of prostitution in France in its most exploitative form.
Whereas politicians, journalists, the abolitionist movement, and other actors continuously underlined the migrant identity of sex buyers, they did not systematically discuss or even mention the migrant status of a proportion of the sex workers. The majority of sex workers in the metropole in the 1960s and 1970s were white French, or other Europeans. 1 During the Algerian war (1956-1962), police, politicians and commentators focused on the suppression of North African sex workers because they worried that Algerian networks of prostitution in metropolitan France were funding the Algerian nationalist movement Front de Liberation National [National Liberation Front] (FLN) (André, 2016; André, 2017; Gobin, 2017). After Algerian independence, however, the existence of North African sex workers was no longer discussed in public discourse and politics, even though police reports still mentioned the existence of North African sex workers. 2 Authorities and social commentators were not concerned with the marginalized position of such individuals as they were considered disposable. Instead, discussions about sex work focused on North African migrant men and white women.
Moreover, white sex workers themselves referred to the discourse on North African men and interracialized sex to affirm their own belonging to French society (Shepard, 2018). During the sex workers' protests of 1975, a group of 150 sex workers organized to claim their rights and protest about arbitrary arrests, corruption, harassment, and prosecution (Mathieu, 1999; Aroney, 2018). This offered some space in societal debates for sex workers, who claimed their agency by arguing that they could reject North African clients. An article published in an abolitionist magazine of the time noted the 'racist attitudes of prostitutes' but also asserted that they were no more racist than the rest of the French population (Leconte-Souchet, 1977). Moreover, sex workers contextualized their refusal of North African clients: they argued that they had to turn away these men because having such a clientele would deter French clients who pay more (Mathieu, 2003; Shepard, 2018). The rejection of North African clientele was both symbolic and economic. White sex workers could thus claim respectability by not engaging in interracialized commercial sex while at the same time being criticized for sexual racism if they did not do so. Having set out the discursive connection between migrant workers and sex workers, I will now turn to legislation and law enforcement.
Policing Prostitution
The law did not criminalize sex work directly but criminalized soliciting and procuring prostitution. The implementation of the so-called Marthe Richard law in 1946 abolished the state administration of prostitution. It criminalized all sorts of procuring under the penal code, including soliciting and keeping brothels, abolished the administrative registry of sex workers and established 're-education centres for prostitutes'. The legislation was brought in just after the end of World War II because its proponents connected the administration of prostitution with the Vichy government and the recent German occupiers and therefore argued for the necessity of reorganizing the moral order (Corbin, 1990). The brigade des moeurs [vice squad], renamed the brigade de repression du proxénétisme [brigade for the repression of pimping] in 1975, was and still is mandated with the enforcement of laws that are related to 'public morality'. Even though sex work was not a direct offence, the legislative and regulatory framework of the sex industry was abolitionist, i.e., aimed at eradicating prostitution.
The brigade des moeurs had a relatively wide discretionary power to carry out its mandate (Mainsant, 2012), giving it the ability to target specific populations and activities. Critical race researchers on policing have argued that policing practices constitute a form of 'internal colonialism' to the extent that they involve the exertion of power over racialized populations with the aim of control (Gutiérrez, 2004). With particular relevance to France, they have argued that the police treat North African migrants as internal enemies, employing colonial techniques of policing that outlived independence (Rigouste, 2014; Prakash, 2013). Especially during the Algerian war, the French police in the metropole employed extensive violence and torture against Algerians (House, 2004; House and MacMaster, 2006). In the abovementioned television broadcast about 'The Problems of Racism in the South', two prosecutors stated that 'most policemen have a hatred towards North Africans' and that some police were racist (Mise au Point, 1975). It follows therefore that research on law enforcement should explore the racialized enforcement of broad legislation through the discretionary power of the police.
Even though legislation gives an impression of rational and orderly regulation of commercial sex, legal regimes are characterized by the discretionary power of local police who determine who is policed and when (Agustín, 2008; Mainsant, 2013b). I reviewed trimestral reports on the 'fight against procuring' in the Bouches-du-Rhône department, sent between 1972 and 1978 to the director of the National Police of the Ministry of the Interior. In these reports, the police attributed the presence of sex workers to the population of North Africans:

'It is to be feared that the increase in the floating population, due to the infrastructure of the industrial area of Fos-Sur-Mer, and the large North African cell of the Berre region will have the effect of maintaining or even raising the figures for prostitution and pimping.' 3

Whereas the reports contained detailed statistics of the numbers of arrests, cases of syphilis, and other quantifiable indicators concerning the policing of commercial sex, no numerical proof was given for the a priori assumed relationship between the occurrence of sex work and the presence of North African immigrants. This finding indicates the selective problematization of North Africans as sex buyers.
Furthermore, the selective problematization was racialized: concerns were mostly if not exclusively directed at North African migrants. The semesterly activity reports show that the police were explicitly worried about North African migrants in their efforts to enforce the legislation concerning prostitution. In the reports, police identified commercial sex for North African migrants under the header 'difficulties stemming from particular problems'. Both police powers of discretion and their discourse on suspicion enable racial profiling (Delgado, 2018; Longazel, 2013). The selective focus is based on the discursive construction of migrant workers as sex buyers, rather than on suspicion of the criminal behaviour of sex workers. Even though police at times used the category 'foreigner' in the reports, they used 'North African' in many other passages while not mentioning any other racialized groups of sex buyers specifically. Police action thus focused on North African male migrants as a specific racialized group, even when the legislation did not. This formed the basis of particularistic enforcement, as I demonstrate below.
The regulation of prostitution in the context of North African migrants should be understood within a longer history of the regulation of prostitution under French colonialism. Across the French colonial field, French colonial administrations regulated prostitution to protect colonial hierarchies within the system of state-administered prostitution (Taraud, 2003; Shepard, 2018). Making commercial sex available to fulfil white men's so-called sexual needs while at the same time discouraging formal unions (such as marriage) was one of the ways through which the colonial order was upheld (Staszak, 2014). Moreover, the regulation of prostitution allowed the colonial government to ensure that non-white men did not go to white sex workers (Staszak, 2014). Even after abolishing state administration of prostitution in 1946, the French administration still controlled and regulated military brothels to ensure that colonial troops did not have interracialized transactional sex (Taraud, 2003). Regulation of prostitution in the colonial field thus revolved around protecting gendered racial hierarchies. With the presence of former colonial migrants in the metropole, racialized regulation was also implemented in the postcolonial metropolitan context.
Selective Tolerance
In the metropole in the 1970s, police did not enforce the legislation on prostitution universally. Instead, they selectively tolerated soliciting and brothels in neighbourhoods where many North African workers lived and visited, based on the selective problematization of a racialized group of sex buyers. Local residents' letters of complaint illustrate the selective nature of this tolerance: residents of La Goutte d'Or in Paris complained to local politicians about the selective tolerance of brothels for North Africans, calling on the authorities to address the problem (Goutte d'Or 1979a). Similarly, a residents' group of the neighbourhoods of the city centre of Marseille complained to the police about their tolerance towards brothels with a North African client base, demanding increased police surveillance. 4 Selective tolerance was spatial: neighbourhoods racialized as 'North African' were spaces in the city where commercial sex was available, establishing 'zones of degeneracy' with different 'social and legal conventions' (Razack, 1998). This reinforced the public image of these districts as dangerous and morally transgressive spaces, marked by the presence of racialized men and sex workers. News outlets, for example, often referred to La Goutte d'Or as 'La Casbah' or 'La Medina', colonial terms used to signify unruly 'indigenous' neighbourhoods. The local neighbourhood newspaper Goutte d'Or (1979a: 19) accused police of 'having tolerated these activities, if not having nurtured them, in order to deform the reputation of the neighbourhood that is very connected to immigration'. Selective tolerance thus contributed to framing the neighbourhoods where North African men lived and visited as morally transgressive, unruly, dislocated colonial spaces within the French postcolonial metropolitan city.
The vilification of certain neighbourhoods as dangerous in turn legitimized the policing of migrants: the Prefect of the Bouches-du-Rhône, for example, wrote back to the residents' committee of a district in Marseille that the authorities would reinforce the fight against brothels and 'increase surveillance of North Africans living in and frequenting this area'. 5 In practice, however, as I will illustrate below, the police continued to tolerate brothels. The Goutte d'Or journal (1979b: 1) argued that the 'excuse' of 'increasing security in the neighbourhood' enabled the imposition of stop-and-search practices aimed at expelling migrants from French territory. The tolerance of prostitution thus enabled police to increase surveillance and control of migrants.
The spatial dimension of selective tolerance also contributed to marking out these places as dangerous to white women. In one illustrative example from research on 'public opinion on North Africans' carried out by the national police, the authors presented anxieties about possible sexual aggression by North African migrants towards white women as common knowledge concerning North African men's presence in the metropole: [There is] a psychosis of fear in the neighbourhoods close to the 'casbahs' […] usually celibate or having left their wives behind, North Africans display sexual aggression in the form of rape, indecent assault or, more often than not, obscene verbal provocation, which […] creates a feeling of fear among French women who have to go through or visit the 'Arab neighbourhoods'. 6 This shows how North Africans were considered a particular problem in police strategies against prostitution and procurement because of anxieties about sexual violence and sexual needs.
These anxieties about sexual violence motivated police to tolerate procuring and soliciting and, selectively, certain brothels. They shaped the stance on prostitution of both high-ranking and local officers. In 1956 the Director of the judicial police service had already argued that the legislation concerning prostitution necessitated tolerance: [P]rostitution then becomes a lesser evil because it avoids more serious crimes, such as rapes and assaults on young girls or children, as we had to lament in some cities with a high concentration of North African or foreign workforce. 7 This reasoning continued throughout the 1970s. The Préfet ['head'] of the police of the Bouches-du-Rhône wrote in a semi-annual report in 1972 that it was not desirable to repress prostitution in its entirety in neighbourhoods where many single foreign workers lived, because 'the majority represent a potential danger of sexual violence'. 8 Police thus considered North African and other migrant workers sexual threats to women, and even children, for which the solution was to tolerate commercial sex.
Whereas police thought of North African men as sex buyers, a particular issue of concern that necessitated selective tolerance, they did not in a similar way selectively tolerate North African female sex workers. The police categorized North African women separately but did not explicitly count North African sex workers in crime statistics for soliciting. For example, in the last quarter of 1972, the police reported the rise of a 'new form of prostitution' in 'North African neighbourhoods', which they described as Moroccan and Algerian female 'tourists working as prostitutes'. They noted that these women were unknown to them. The following year, the police force undertook targeted action and surveillance of young North African sex workers to expel them from French territory.
Given that law enforcement selectively tolerated brothels and soliciting for North African migrant male sex buyers, it is telling that the authorities at the same time expelled North African female sex workers. As I have argued, politicians, media outlets and commentators did not discuss the presence of North African female sex workers in the societal discourse on commercial sex. In fact, police could have decided to selectively tolerate North African female sex workers to respond to North African men's sexual needs. Instead, they expelled North African women who worked as sex workers in order to banish 'undesirable migrants'. This marks a discontinuity with the colonial context, in which the administration had attempted to ensure that North African men had access to North African sex workers. Law enforcement in the postcolonial metropolitan context was instead concerned with protecting a sexual order that viewed white sex workers as a societal sacrifice, as I illustrate below.
The Sexual Order
To understand how police tolerance of commercial sex indirectly regulated interracialized intimacies, it is important to explore how the police argument on the necessity of prostitution was part of a wider discourse in French society. Politicians, journalists and public commentators also considered the availability of commercial sex a solution to the presence of North African migrant workers, and specifically to the sexual violence allegedly perpetrated by North Africans. For example, the deputy Mayor of Marseille, Professor Chélini, argued in an interview on the 'problem of North African immigration' in the left-leaning newspaper Le Provençal (1973) that reopening brothels was 'a delicate issue. Personally, I would give priority to a married man to immigrate to France. […] However, for single men, the question is there. The issue is moral, but also social. We cannot evade the question.' He argued that matrimony between French women and Algerian men was not a viable possibility, stating that such marriages were few in number and almost always failed. Chélini did not explain why this should be so, assuming that the reasons were self-evident. He proposed family migration and commercial sex to respond to the so-called sexual misery of North African men, but not mixed marriage. The arguments for the 'necessity of prostitution' revolved around interracialized intimacies because their proponents did not consider consensual and non-commercial relationships between migrant men and French women a possibility.
Abolitionist groups, both Catholic and feminist, denounced arguments on the necessity of prostitution. Les Amicales du Nid, the prominent Catholic abolitionist movement, campaigned to abolish prostitution. Its members saw both North African men and sex workers as victims of their economic, social, and psychological situation, but also framed sex workers as victims of North African men. 9 They invoked the failure of relationships between French women and North African men as one of the causes of women 'falling into prostitution'. For example, a course for Christian social workers, given at the Christian university by the head of Les Amicales du Nid, discussed different scenarios of 'women falling into prostitution'. In these scenarios, cohabitation and marriage with North African men were presented as a gateway into prostitution, 10 thus perpetuating the notion that prostitution was caused by migrant men.
At the same time, from a position of Catholic, leftist working-class solidarity, Catholic abolitionist groups underlined the necessity of alleviating the solitude of migrants. Articles in their magazine Femmes et monde argued that prostitution was not the fault of North African migrants because these men were themselves victims of solitude and racism (Leconte-Souchet, 1977). For example, the magazine featured an interview with Ben Jelloun as an expert on the issue, in which he argued that prostitution could relieve only sexual needs, not affective needs (Delorme, 1977). As a solution, Ben Jelloun proposed family migration, thereby encouraging endogamous marriage as the dominant sexual and domestic paradigm.
Some feminists criticized the conflation of sexual misery with sexual violence and the proposed necessity of prostitution as its solution. The second-wave feminist movement of the 1970s was concerned with themes that touched upon sexuality and bodily integrity but was divided on the issue of commercial sex: some argued for abolition, others supported sex workers' rights. Abolitionist feminists, indeed, rejected arguments on sexual misery and sexual racism. For example, a feminist wrote the following in the feminist magazine Choisir: Also, in the name of an ambiguous humanism, to say the least, we suddenly think of migrant workers. However, could we not bring their families instead of granting them five minutes of sex per week and organizing these meetings of the rootless and the excluded? (Go for a walk on rue St Denis [a street infamous for sex work]!) (1980). The author criticized the toleration of prostitution as a means of preventing sexual crime by North Africans against white women, and thus as a means of catering to the sexual needs of migrants. Instead, she made the case for the possibility of family reunification for all migrants. Yet she still did not envisage non-commercial and consensual interracialized intimacies as a possible way to respond to the problematization of North African men's presence.
Thus, the various commentators, activists and politicians promoted either commercial sex or racialized endogamy through the call for family reunification. These discussions constructed and perpetuated a sexual order that placed French sex workers and North African migrant men at the lower end of the gender, race, and class hierarchies. These hierarchies distinguished sex workers from 'respectable' French women and girls. This is crystallized in the argument made by the Prefect of the department of the Rhône during a press conference in 1972. He stated that North African men should have sexual relations with 'prostitutes' rather than running the risk that they rape 'our girls'. 11 Discursively, sex workers were characterized as 'sacrifices': the 'prostitute' was to be sacrificed for the sake of respectable French women and, by extension, of French public morality.
The notion of prostitution as sacrifice stands in a long historical tradition. A footnote added to Thomas Aquinas' work compared the necessity of prostitution to the necessity of 'the sewer in the palace' (Ditmore, 2006). Sex work was the lesser-evil solution to protect the public order of heteronormative society: this constructed a social order that needed both sex workers and migrant workers for their labour while legally and discursively excluding them from state protection and care. This is illustrated in a television programme entitled 'Is Prostitution Necessary?' aired in February 1976, in which a sex worker appeared as a guest. When she argued that no one cares about 'the death of a prostitute', the host intervened to relativize the marginalization of sex workers (De Vive Voix, 1976). He claimed that if a migrant worker gets run over by a car, no one cares either. Notably, other guests on the television show did not react. The message therefore seems clear: sex workers and North African men sit together at the lowest end of the social hierarchy within the sexual order. Migrant workers were building the palace; sex workers were holding it up from below, in the sewer.
The discussions on the necessity of sex work constructed and reproduced a sexual order and political economy that required sex workers' labour to meet the so-called sexual needs of dehumanized migrant workers, in a migration context that did not want durable family migration from the African continent. This sexual political economy served French interests: cheap migrant labour was needed for low-paid menial work, while, at the same time, the French administration deterred the men who provided this labour from settling and making a life in France (Silverman, 2002; Weil, 1995).
The sexual order was patriarchal and heteronormative: stereotypes of Arab men as both rapists and victims of sexual misery presented male sexuality as uncontrollable and innate. This perpetuated the understanding that women's bodies were either at men's disposal to alleviate this pressure or under constant threat from male sexuality. Moreover, arguments for the tolerance of prostitution assumed that women's sexuality was subordinate to men's. The construction and protection of the sexual order contributed to foreclosing other, more durable and productive forms of interracialized intimacy, such as marriage. The sexual order and political economy were thus racialized, heteronormative and gendered, serving French capitalist interests.
Racialization and Colour-Blindness
In tolerating the criminalized acts of procurement, soliciting and the running of brothels for North African workers as sex buyers, law enforcement was primarily concerned with the protection of the sexual order. In taking this approach, the particularistic enforcement of the legislation revolved around the regulation of interracialized intimacies and was therefore racial. Yet the semi-annual activity reports of the administrative police describe police action and the underlying motivations in a seemingly universalist way: There is a large colony of immigrant workers of North African origin in Marseille, mainly located in the so-called 'cage' district. The expansion of these neighbourhoods is quite sensitive, mainly towards La Canebière.
Tolerance in prostitution seems difficult to apply because of the principle of equality of citizens in criminal law; a principle to which the population of Marseille is particularly sensitive. Indeed, many protest letters were sent to the Service asking why such establishments are not the subject of legal proceedings and demanding an explanation for such discrimination.
The only possible tolerance is practical and meets the state of necessity: the workforce does not allow the conduct of large-scale operations. The Section is therefore obliged to make choices based on the notion of public disorder or social or human considerations. 12 This passage illustrates how law enforcement was concerned with protecting the so-called 'public order' rather than upholding the French legal principle of equality. The police approach to anti-procuring and anti-soliciting took the meanings of 'public order' and 'social or human considerations' for granted, without explaining what they entailed or why they should be that way. Instead, it was implied that the authorities considered the selective tolerance of procuring, soliciting and the operation of brothels a solution to the problem of the presence of single North African migrant men in certain neighbourhoods of the French metropole.
Even though selective tolerance was not in line with republican legislation that, on paper, called for universal law enforcement, it was justified through particularistic arguments on public order. In the context of empire, the language of universalism was contingent on a geographical particularism that allowed for racial differentiation (Zevounou, 2021b). In the postcolonial context, former colonial subjects had moved into European France as migrants, making geographical particularism obsolete in the metropole. The legislation in question did not stipulate the selective tolerance of prostitution for racialized groups of sex buyers, as this would constitute discrimination. Police tolerance for racialized groups was instead justified ex post facto as a 'practical decision' necessary for public order. This negated the saliency of race while simultaneously articulating and justifying racialized concerns in colour-blind language.
The legislative framework enabled the authorities to regulate the racialized, class-based, and gendered sexual order because it allowed for the police's selective tolerance, even though this was not formulated in law. Mainsant (2013a) argues that law is made 'from below' in the regulation of sex work, which means that 'law in the book' differs from 'law in action'. Police action gives meaning to the legislation through discretionary surveillance and discretionary tolerance (Mainsant, 2012). This paper has argued that police attributed meaning to anti-procurement and anti-soliciting legislation in a racialized way to regulate interracialized intimacies: an approach that was thus not contrary to the law but a function stemming from it.
Conclusion
In this article, I have shown how the police approach to commercial sex during the period in question was invested in protecting the racialized sexual order. Societal and political discourse on labour migration from North Africa constructed and perpetuated a sexual order that encouraged commercial sex over consensual and non-transactional forms of interracialized intimacy. Against this background, I have argued that police enforced universalist legislation in a racially particularistic way to ensure the availability of commercial sex for North African labour migrants. The selective tolerance of procurement, soliciting, and the running of brothels was made possible through discretionary police power and racialized knowledge of North African migrants as sex buyers. This demonstrates that seemingly colour-blind legislation could be enforced in a racialized way to indirectly regulate interracialized intimacies.
The insights from this paper show that paying attention to racialized categorizations and problematizations of sex buyers can help us understand the racialized regulation of commercial sex. Whereas research on sex work and prostitution tends to focus on the impact of the race and migration status of sex workers on legislation and law enforcement, this paper has built upon insights from the colonial context to argue that looking at workers and buyers together shows that commercial sex is also about the regulation of interracialized intimacies. By making commercial sex not only a permissible intimacy but a preferable intimate relationship, the regulation of commercial sex was one of the ways in which the authorities could control interracialized intimacies. To understand the regulation of interracialized intimacies, attention to tolerance can help reveal which intimate relationships are favoured over others, and how. Pursuing this line of inquiry sheds light on how the regulation of commercial sex is concerned with intersections of race, gender and migration status in more complex ways.
This article has given a historically specific analysis of the construction of race and racialized logics in French law and law enforcement. It contributes to critical race studies in Europe in general, and France in particular, by drawing on insights from feminist scholarship on the colonial context to show how the regulation of commercial sex was integral to the protection of the racial sexual order. As colonial migrants moved to the French metropole before and after the independence of the former colonies, the French authorities and the white French community increasingly articulated racial concerns through sexual anxieties. These anxieties were mitigated through police action on commercial sex. The specificity of constructions of race in universalist and colour-blind legislation in France reveals the saliency of colonial continuities in the postcolonial context. This shows how historical and context-specific particularities can help us more fully understand the workings of colour-blindness from both a local and a global perspective.
"year": 2022,
"sha1": "18535131b5c436b485aaac9f8faa689c7d076ef4",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/09646639221094754",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "c4817c595b71ae7243a70d948b0e759218ea21fa",
"s2fieldsofstudy": [
"History",
"Sociology",
"Law"
],
"extfieldsofstudy": []
} |
Assessing the Risks of Topically Applied dsRNA-Based Products to Non-target Arthropods
RNA interference (RNAi) is a powerful technology that offers new opportunities for pest control through silencing of genes that are essential for the survival of arthropod pests. The approach relies on the sequence-specificity of the applied double-stranded (ds) RNA, which can be designed to have a very narrow spectrum of both the target gene product (RNA) and the target organism, thus allowing highly targeted pest control. Successful RNAi has been reported from a number of arthropod species belonging to various orders. Pest control may be achieved by applying dsRNA as foliar sprays. One of the main concerns related to the use of dsRNA is adverse environmental effects, particularly on valued non-target species. Arthropods form an important part of the biodiversity in agricultural landscapes and contribute important ecosystem services. Consequently, environmental risk assessment (ERA) for potential impacts that plant protection products may have on valued non-target arthropods is legally required prior to their placement on the market. We describe how problem formulation can be used to set the context and to develop plausible pathways on how the application of dsRNA-based products could harm valued non-target arthropod species, such as those contributing to biological pest control. The current knowledge regarding the exposure to and the hazard posed by dsRNA in spray products for non-target arthropods is reviewed, and suggestions are provided on how to select the most suitable test species and to conduct laboratory-based toxicity studies that provide robust, reliable and interpretable results to support the ERA.
INTRODUCTION
RNA interference (RNAi) is a mechanism of gene silencing present in most eukaryotic organisms to regulate gene expression (Hannon, 2002). The silencing effect can be triggered by double-stranded RNA (dsRNA), is RNA sequence-specific, and makes use of the core RNAi machinery to degrade complementary RNA molecules. RNAi thus provides a tool that can be designed to affect and control insect pests in a highly specific manner by targeting genes that are essential for the survival of the species (Xue et al., 2012; Burand and Hunter, 2013; Zhang et al., 2017; Liu et al., 2020). In an agricultural context the technology may also be deployed to increase the sensitivity of pests or vectors to chemical insecticides (e.g., Killiny et al., 2014; Bona et al., 2016) or to protect beneficial species from viral diseases (Vogel et al., 2019).
For application as a pest control tool, the active dsRNA molecule has to enter and affect the target pest. This can be achieved in two main ways. First, dsRNA can be produced in planta, which requires genetic engineering (GE) of the plant. The first product of that kind was approved by US regulators in June 2017 1 . This particular GE maize event (MON87411) produces a dsRNA targeting the Snf7 protein in the Western Corn Rootworm, Diabrotica virgifera virgifera (Coleoptera: Chrysomelidae), which is crucial for the transport of transmembrane proteins. Suppression of the Snf7 gene leads to increased larval mortality and consequently to reduced root damage (Bolognesi et al., 2012). The RNAi trait is combined with the Cry3Bb1 protein for improved target pest control and resistance management (Levine et al., 2015; Head et al., 2017). Second, the dsRNA molecules can be applied externally, for example in irrigation water or through trunk injections (Hunter et al., 2012; Li et al., 2015a; Niu et al., 2018; Kunte et al., 2020), in food baits (Zhou et al., 2008; Zhang et al., 2010), by using delivery systems such as micro-organisms, viruses or nanocarriers (Vogel et al., 2019; Kunte et al., 2020), or topically as spray applications (San Miguel and Scott, 2016).
Two major challenges have been identified for implementing RNAi-based technology in pest control. First, the target organisms have to ingest intact and biologically active dsRNA molecules in order to trigger an RNAi response. While RNAi has been observed in a number of insect species belonging to various orders, the effectiveness of dietary RNAi (derived from ingested dsRNA) is less clear (Baum and Roberts, 2014). Second, there is evidence that resistance develops not against a specific dsRNA molecule but against components of the dsRNA uptake machinery in the intestinal tract or of the dsRNA processing machinery. For example, Khajuria et al. (2018) demonstrated for D. v. virgifera that resistance to dsRNA targeting Snf7 was due to the fact that cellular uptake was prevented.
Despite those challenges, effective dsRNA-based spray products that cause specific toxic effects on selected arthropod pest species are expected within the next few years (Hogervorst et al., 2018; Taning et al., 2020), and our perspective will focus on this method of application.
ENVIRONMENTAL RISK ASSESSMENT
As pesticides, dsRNA-based sprays are regulated stressors that have to pass an environmental risk assessment (ERA) before being commercially released, to ensure that their use causes no unacceptable harm to the environment. Given the novel mode of action, the regulatory and data requirements are being discussed internationally (Auer and Frederick, 2009; US EPA, 2014; Roberts et al., 2015).
Early in the ERA, in a step called "Problem Formulation," the protection goals set by environmental policy need to be identified, and operational protection goals and plausible pathways on how the stressor of concern could harm those protection goals (i.e., pathways to harm) are defined (Raybould, 2006; Gray, 2012; Craig et al., 2017; Raybould et al., 2019). Based on these "Pathways to Harm," testable risk hypotheses can be derived, existing relevant information is collected, and required data are identified. The aim of this process is to ensure that any decision taken is made in a traceable and transparent manner. While experience has been gained with applying problem formulation to the ERA of GE plants, the concept is equally applicable to other stressors, including dsRNA-based pesticides (Devos et al., 2019; Raybould and Burns, 2020).
For plant protection products such as dsRNA-based sprays, "biodiversity" is an important environmental protection goal, which is found in the policies of most jurisdictions. However, this term is very general, and thus specific (operational) protection goals need to be defined that can then be addressed in the scientific risk assessment. Such operational protection goals delineate the components of the environment that are valued and should be protected, including details on the location, the exact time period, and the maximum tolerable impact (Nienstedt et al., 2012; Sanvido et al., 2012; Devos et al., 2015). In this respect, it has been proposed to categorize biodiversity into categories of valued ecosystem services (the "ecosystem service concept") as defined, for example, in the Millennium Ecosystem Assessment (Millennium Ecosystem Assessment [MEA], 2005; Gilioli et al., 2014; Devos et al., 2015; European Food Safety Authority Scientific Committee, 2016; Maltby et al., 2017a,b). In the case of arthropods this includes regulating services (e.g., biological pest control, pollination), cultural services (e.g., protected species), and supporting services (e.g., arthropods that contribute to nutrient cycling).
Once the components of the environment to be protected are identified, plausible pathways to harm can be constructed. In Figure 1 such pathways to harm are defined for the protection goal "biological pest control" provided by predators and parasitoids, which may be affected by the application of a dsRNA-based spray. For a spray product to cause harm to the protection goal, a line of events or steps has to occur. If one can conclude with high certainty that one or more of the steps are unlikely to happen, the pathway is interrupted, allowing the conclusion that the risk to biological control is negligible (Raybould et al., 2019). The different steps can thus be tested or assessed in the ERA to characterize the risk. In principle, the steps relate either to exposure, the likelihood that non-target species actually ingest sufficient amounts of biologically active dsRNA, or to hazard, which relates to the sensitivity of the non-target species to dietary RNAi. These two aspects of the risk equation will be discussed in the following sections.
EXPOSURE OF NON-TARGET ARTHROPODS TO dsRNA IN SPRAY PRODUCTS
Non-target arthropod species could be directly exposed to dsRNA in spray products when consuming treated plant material in the field, or outside the field in the case of spray-drift, through contact with soil and water or through topical application, and indirectly when feeding on arthropods that have been exposed. While the plant cuticle and the cell walls limit the uptake of spray-applied dsRNA into plants, there is some evidence for uptake and transport of bioactive dsRNA in the vascular system (Koch et al., 2016), which can be further enhanced by high-pressure spraying (Dalakouras et al., 2016) or particular carriers (Mitter et al., 2017).

FIGURE 1 | Plausible pathways to harm. Steps on how the application of a dsRNA-based spray insecticide could cause harm to the protection goal of "biological pest control" by affecting arthropod natural enemies (predators and parasitoids).
In general, the stability of naked dsRNA in the environment is very low. Degradation of dsRNA within 2 days has been reported for soil and aquatic environments (Dubelman et al., 2014; Fischer et al., 2016, 2017; Bachman et al., 2020), although partial adsorption to soil particles will also play a role (Parker et al., 2019). Degradation appears to be affected neither by dose (Dubelman et al., 2014) nor by the length or structure of the dsRNA molecule (Fischer et al., 2016). There is some indication that degradation of dsRNA molecules is reduced on plant surfaces (Tenllado et al., 2004; San Miguel and Scott, 2016). The persistence of dsRNA in formulated spray products is difficult to predict, since the active ingredient is likely to be stabilized to prevent abiotic and biotic degradation. For example, Mitter et al. (2017) recently demonstrated that pathogen-specific dsRNA targeting plant viruses could be detected for more than 30 days after application when loaded on layered double hydroxide clay nanosheets. Thus, the formulation in which the molecule is applied has to be considered in the exposure assessment (Bachman et al., 2020).
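To put such persistence estimates in perspective, exposure assessments commonly approximate environmental degradation with first-order decay. The following minimal Python sketch illustrates this calculation; note that first-order kinetics is an assumption made here for illustration, not a finding of the studies cited above, and the half-life (DT50) values are hypothetical, chosen only to contrast naked dsRNA with a stabilized formulation.

    import math

    def residual_fraction(t_days, dt50_days):
        """Fraction of applied dsRNA remaining after t_days,
        assuming simple first-order (exponential) decay."""
        k = math.log(2) / dt50_days  # rate constant derived from the half-life
        return math.exp(-k * t_days)

    # Hypothetical half-lives: ~0.5 d for naked dsRNA in soil vs. ~10 d for a
    # stabilized formulation (illustrative values only).
    for label, dt50 in [("naked dsRNA", 0.5), ("stabilized formulation", 10.0)]:
        print(label, [round(residual_fraction(t, dt50), 3) for t in (1, 2, 7, 30)])

Under these illustrative assumptions, naked dsRNA is largely degraded within 2 days, whereas a stabilized formulation would still retain a substantial fraction after a week, which is why the formulation has to enter the exposure assessment.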
The routes and duration of non-target organism exposure to dsRNA in sprayed products will depend on a number of factors, including: (1) application rate of the active ingredient, (2) application timing, (3) application method, (4) number of applications, (5) off-site movement of applied dsRNA, and (6) stability and persistence of exogenously applied dsRNA following application (US EPA, 2014).
For predators and parasitoids we have identified three main routes of exposure (Figure 1). The first, and most likely, route is indirect, through their prey or hosts. Herbivores can be covered by the spray or ingest the dsRNA when feeding on treated plants. It remains to be confirmed, however, that dsRNA ingested by a herbivore is still biologically active when passed on to the next trophic level. To our knowledge, cross-species transfer of biologically active dsRNA has only been reported in one study, i.e., between honey bees (Apis mellifera, Hymenoptera: Apidae) and parasitic mites, Varroa destructor (Acari: Varroidae) (Garbian et al., 2012). The second potential route of exposure of natural enemies is through the insects' integument. There is some evidence that topically applied dsRNA can penetrate the insect's body wall, i.e., via the inter-segmental membranes, and cause an RNAi response. The first case of this nature was reported for Aedes aegypti (Diptera: Culicidae) by Pridgeon et al. (2008). Penetration has also been demonstrated for larvae of Ostrinia furnacalis (Lepidoptera: Crambidae) using fluorescent dsRNA, albeit at very high concentrations of 0.5 µl of 0.5 µg/µl fluorescently labeled dsRNA per larva (Wang et al., 2011). It is difficult in such topical application studies, however, to rule out that the dsRNA molecules entered the body through the spiracles rather than through the integument. There is evidence that the penetration efficiency can be enhanced by altering the formulation in which the dsRNA is applied. For example, in the case of the soybean aphid Aphis glycines (Hemiptera: Aphididae), penetration efficiency was significantly enhanced using a nanocarrier in combination with an amphiphilic periphery detergent to increase the attachment of the droplets to the insect cuticle (Zheng et al., 2019). In a recent study, Niu et al. (2019) observed the uptake of dsRNA topically applied to Acyrthosiphon pisum (Hemiptera: Aphididae) within 12 min. As a third route of exposure, insects might also ingest the molecule during grooming after being covered by dsRNA from a spray application. While some predators also feed on green plant tissue when prey is scarce (Lundgren, 2009), we regard this route of exposure as negligible.
Dietary uptake of dsRNA does not necessarily mean that the molecule is still biologically active. Extra-oral digestion is known from many predatory arthropods, including spiders, lacewing larvae and predatory bugs (Cohen, 1998; Zhu et al., 2016; Walter et al., 2017). According to Cohen (1995), at least 79% of predaceous land-dwelling arthropods use extra-oral digestion. For example, it has been demonstrated for the plant bug Lygus lineolaris (Hemiptera: Miridae) that dsRNA molecules are completely digested to monomers by endonucleases in the saliva prior to ingestion (Allen and Walker, 2012).
HAZARD POSED BY dsRNA
In principle, ingested dsRNA can pose a hazard to a non-target arthropod in two ways, i.e., sequence-specific and sequence-unspecific. Two mechanisms have been suggested as causes of sequence-unspecific effects of ingested dsRNA: first, the induction of a general immune response, since RNAi is a component of the innate antiviral immune response, and second, a saturation of the RNAi machinery, i.e., the dsRNA processing enzymes (Dillin, 2003; Christiaens et al., 2018a). While saturation of the RNAi machinery has been observed in animals (mice and cell cultures) at high doses (US EPA, 2014), it has not yet been reported in arthropods (Miller et al., 2012; Christiaens et al., 2018a). DsRNA-triggered general immune responses, e.g., the upregulation of dsRNAase, have been observed in honey bees (Apis mellifera, Hymenoptera: Apidae) (Flenniken and Andino, 2013; Brutscher et al., 2017), bumble bees (Bombus terrestris, Hymenoptera: Apidae) (Piot et al., 2015), and the silkworm, Bombyx mori. There is evidence from feeding studies that high doses of dsRNA can boost a sequence-unspecific response in ladybird beetles (Coleoptera: Coccinellidae) (Haller et al., 2019). But comparable doses (of the same construct) did not cause such effects in other arthropod species studied (Pan et al., 2016; Vélez et al., 2016). Sequence-unspecific effects have also been observed for dsGFP in honey bees, A. mellifera, in feeding and injection studies (Jarosch and Moritz, 2012; Nunes et al., 2013). In summary, while there is no evidence that dsRNA can cause a saturation of the RNAi machinery in arthropods, high doses of dsRNA may affect the fitness of non-target arthropod species in a sequence-unspecific way through a stimulation of the immune system. Consequently, from an ERA perspective, non- and off-target effects of the dsRNA that are sequence-specific are of much more concern and will be the focus of the following description.
After ingestion of dsRNA molecules, a successful RNAi response depends on a variety of factors that will be discussed below, including: stability of dsRNA in the gut (affected by gut pH and nucleases), dsRNA length and concentration, the target gene, the arthropod species, and the life stage exposed (Katoch et al., 2013; Scott et al., 2013; Davis-Vogel et al., 2018; Cooper et al., 2019; Kunte et al., 2020).
Once an insect has ingested dsRNA and the molecule has been taken up by the cells, the endonuclease Dicer cuts the molecule into short interfering RNAs (siRNAs) of 20-25 bp in length that are integrated into the RNA-induced silencing complex (RISC) (Hannon, 2002). Subsequently, RISC facilitates the targeting and the endonucleolytic attack on mRNAs with sequence identity to the dsRNA (Hannon, 2002). The prerequisite for a successful RNAi response is thus sequence identity between at least some of the siRNAs derived from the dsRNA and the target mRNA of the insect pest (Scott et al., 2013). Consequently, the length of the dsRNA affects the effectiveness of the RNAi response, as longer molecules yield larger populations of overlapping siRNA molecules ranging in size and sequence (Baum et al., 2007; Bolognesi et al., 2012; Miller et al., 2012; Li et al., 2015b; Nandety et al., 2015). An injection study with Tribolium castaneum (Coleoptera: Tenebrionidae) suggests that the size of the dsRNA molecule also affects the duration of the RNAi response, even though the mechanism involved remains unclear (Miller et al., 2012). There is evidence that contiguous sequence matches of ≥21 nt between the dsRNA and the target gene are necessary for dsRNA to be biologically active in insects (Bachman et al., 2013, 2016; Roberts et al., 2015), and it has been reported that even a single 21 nt sequence match can induce effects (Bolognesi et al., 2012). It has to be noted, however, that RNAi has been demonstrated to occur at sequence lengths as short as 15 bp (Powell et al., 2017). Still uncertain is the extent of sequence mismatch that has to be present in order to prevent silencing by dsRNA-derived siRNAs. Because siRNA molecules can inhibit translation of transcripts even when mismatches occur, the threshold for concern about non-target effects could be less than 100% sequence identity (Scott et al., 2013). To provide evidence that any observed effect is due to specific gene silencing, it is necessary to support the feeding assays by determining transcript levels with RT-qPCR. This, however, poses the challenge of identifying suitable reference or housekeeping genes to calculate relative transcript levels. Furthermore, the effect of RNAi on the protein may not be well correlated with the level of transcript suppression (Scott et al., 2013).
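As an illustration of the relative quantification mentioned above, one widely used calculation, the 2^-ddCt method, normalizes the quantification cycle (Ct) of the target gene to that of a reference (housekeeping) gene in both treated and control insects. The sketch below is a minimal Python version with hypothetical Ct values; the studies cited here do not prescribe this particular method.

    def relative_expression(ct_target_treated, ct_ref_treated,
                            ct_target_control, ct_ref_control):
        """Relative transcript level of the target gene in treated vs. control
        insects, normalized to a reference gene (2^-ddCt method)."""
        d_ct_treated = ct_target_treated - ct_ref_treated  # normalize treated sample
        d_ct_control = ct_target_control - ct_ref_control  # normalize control sample
        dd_ct = d_ct_treated - d_ct_control
        return 2 ** (-dd_ct)

    # Hypothetical Ct values: the target amplifies two cycles later in treated
    # insects while the reference gene is unchanged -> ~25% residual expression.
    print(relative_expression(26.0, 18.0, 24.0, 18.0))  # 0.25

Because the calculation hinges on the reference gene being unaffected by the treatment, the choice of housekeeping gene mentioned above directly determines how trustworthy such relative transcript levels are.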
While functional RNAi has been reported from a number of insect species belonging to various orders, the impact of dietary RNAi is more limited (Baum and Roberts, 2014). Many insects have been found to be susceptible to dietary RNAi (Belles, 2010), but large differences in sensitivity have been reported across taxa (Whangbo and Hunter, 2008; Terenius et al., 2011; Cooper et al., 2019). For example, feeding studies in which solutions containing dsRNA were provided demonstrated that many Coleoptera show an LC50 at dsRNA concentrations of 1-10 ppb, while effects are seen in Diptera at 10-500 ppm, and in Lepidoptera/Hemiptera at >1000 ppm (Baum and Roberts, 2014). It has to be noted, however, that sensitivity to dietary RNAi can vary significantly even among closely related species, as has been demonstrated for sweetpotato weevils, Cylas spp. (Coleoptera: Brentidae) (Prentice et al., 2017). It can even vary between strains/populations of a particular species, as has for example been reported for Locusta migratoria (Orthoptera: Acrididae) (Sugahara et al., 2017) and T. castaneum (Kitzmann et al., 2013; Spit et al., 2017).
Degradation of the dsRNA after ingestion or uptake is a major factor affecting the exposure of non-target species to bioactive dsRNA molecules and thus the effectiveness of RNAi. Gut pH is important as it affects the stability of the ingested dsRNA molecules. Since RNA is most stable at a pH of 4.0-5.0, the slightly acidic midguts of Coleoptera and Hemiptera (pH around 5) support dsRNA stability. In contrast, stability is low in the alkaline guts of Orthoptera, Diptera and Hymenoptera, and in particular in the highly alkaline guts of Lepidoptera (pH > 8.0) (Cooper et al., 2019). In addition, dsRNA can be degraded by nucleases in the insect gut, as has for example been reported for Bombyx mori (Lepidoptera: Bombycidae) (Arimatsu et al., 2007; Liu et al., 2012, 2013) and the desert locust, Schistocerca gregaria (Orthoptera: Acrididae) (Wynant et al., 2014). Degradation of dsRNA in the gut also explains the relatively low sensitivity of Cylas puncticollis to dietary RNAi when compared to the closely related C. brunneus (both Coleoptera: Brentidae) (Prentice et al., 2017). After uptake, dsRNA can be degraded by nucleases in the haemolymph, as has for example been reported for Manduca sexta (Lepidoptera: Sphingidae) (Garbutt et al., 2013) and A. pisum (Christiaens et al., 2014).
To enhance the stability of the ingested dsRNA, to prevent degradation by nucleases and to enhance cellular uptake, various carriers have successfully been deployed (Yu et al., 2013; Christiaens et al., 2018b; Vogel et al., 2019; Kunte et al., 2020). These include lipid-based encapsulations (Whyard et al., 2009; Taning et al., 2016; Lin et al., 2017), cell-penetrating peptides (Gillet et al., 2017), polymers (Zhang et al., 2010; Christiaens et al., 2018a), and other nanoparticles (He et al., 2013; Das et al., 2015). In addition, the RNAi response can be enhanced by co-delivery of nuclease-specific dsRNA (Spit et al., 2017; Cooper et al., 2019). Thus, the formulation in which the dsRNA is provided also has to be considered when judging the hazardous potential of the molecule to non-target species.
SELECTION OF TEST SPECIES FOR NON-TARGET STUDIES
Since not all valued non-target arthropods present in the receiving environment that are potentially exposed to the dsRNA-based product can be tested, surrogate (test) species need to be selected for toxicity studies to support the non-target risk assessment. The following description focuses on the selection of test species to detect sequence-specific effects caused by the particular dsRNA molecule under consideration.
Non-target testing of chemical pesticides has a long history in Europe. At the initial stage, only two species are tested under worst-case exposure conditions, i.e., the predatory mite Typhlodromus pyri (Acari: Phytoseiidae) and the parasitic wasp Aphidius rhopalosiphi (Hymenoptera: Braconidae) (Candolfi et al., 2001). The two species were selected as indicators since sensitivity analyses revealed that they are the most sensitive species to most classes of pesticides (Candolfi et al., 1999; Vogt, 2000).
Consequently, by testing those species, predictions of effects on other non-target arthropods can be made with high confidence (Candolfi et al., 1999). Only if adverse effects above a certain threshold are detected for those species, and unacceptable risk can thus not be excluded, are additional tests with other beneficial species indicated. These include Orius laevigatus (Hemiptera: Anthocoridae), Chrysoperla carnea (Neuroptera: Chrysopidae), Coccinella septempunctata (Coleoptera: Coccinellidae), and Aleochara bilineata (Coleoptera: Staphylinidae). These species were selected because they are commercially available and amenable to testing in the laboratory, reliable test protocols exist, they provide sufficient phylogenetic and functional diversity, and they are common in agricultural fields (Barrett et al., 1994; Candolfi et al., 2001). In addition to testing predators and parasitoids, most regulatory jurisdictions (e.g., European Commission [EC], 2002) require testing of honey bees (A. mellifera) and soil organisms [Folsomia candida (Collembola: Isotomidae) or Hypoaspis aculeifer (Acari: Gamasidae)], if exposure of the latter is anticipated.
This common set of surrogate test species, however, is not suitable for assessing non-target effects caused by dsRNA-based spray products, because the initial two indicator species were selected for their sensitivity to chemical pesticides and are unlikely to be the most sensitive species for the majority of dsRNA molecules. Consequently, it would be more suitable to apply the non-target risk assessment approach used for GE plants expressing insecticidal proteins, such as Bt crops expressing Cry or VIP proteins from Bacillus thuringiensis. The ERA for GE plants is conducted case by case, and consequently the most appropriate non-target species can be selected for each plant/trait combination. It has been proposed to base the selection of test species for laboratory studies on three main criteria (Romeis et al., 2013): (i) Sensitivity: species should be those most likely to be sensitive to the stressor under consideration, based on the known spectrum of activity, its mode of action, and the phylogenetic relatedness of the test and target species.
(ii) Relevance: species should be representative of valued taxa or functional groups that are most likely to be exposed to the stressor in the field. Organisms that contribute to important ecosystem services and are considered relevant have been identified for a number of field crops (e.g., Meissle et al., 2012; Romeis et al., 2014; Riedel et al., 2016; Li et al., 2017).
(iii) Availability and reliability: suitable life-stages of the test species must be obtainable in sufficient quantity and quality, and validated test protocols must be available that allow consistent detection of adverse effects on ecologically relevant parameters. Lists of above-ground, below-ground, and aquatic species that are available and amenable for testing have been published (e.g., Candolfi et al., 2000; Römbke et al., 2010; Carstens et al., 2012; Romeis et al., 2013; Li et al., 2017).
The above-listed criteria are also key elements of other approaches to test species selection, such as those published by Todd et al. (2008) and Hilbeck et al. (2014).
While criteria (ii) and (iii) are relatively generic or crop-specific, criterion (i) needs to be addressed specifically for each stressor under consideration. To increase the robustness and reliability of the non-target risk assessment, the species most likely to be sensitive (i.e., affected) to a particular dsRNA should be selected. This includes consideration of the gene or gene family that is targeted and of the knowledge about the sensitivity of certain taxa to dietary RNAi in general. The phylogenetic relationship of the non-target organisms to the target pest should also be considered, as there is evidence that, in general, species closely related to the target organism are more likely to be susceptible to the dsRNA than distantly related species (Whyard et al., 2009; Bachman et al., 2013, 2016; US EPA, 2014; Roberts et al., 2015).
Since the RNAi response is sequence-specific, bioinformatics can help predict the species most likely to be affected, which could then be used in feeding studies (Bachman et al., 2013, 2016). However, it has to be recognized that the presence of sequence homologies between the dsRNA molecule and the genome of the non-target species does not necessarily indicate sensitivity of an organism. For example, the springtail Sinella curviseta (Collembola: Entomobryidae) shares a total of six 21 nt long matches with the dsRNA targeting the vATPase A in D. v. virgifera. However, the organism was not adversely affected in laboratory feeding studies (Pan et al., 2016). In cases where species cannot be tested for some reason (because they are rare, protected or difficult to rear), bioinformatics may, however, be the only way to "test" the species (Bachman et al., 2016). Bioinformatics could also help predict off-target effects. However, we currently lack genomic data for most non-target species. It would be useful to have more genome data available for model non-target species that actually play a role in agricultural production systems in order to effectively apply bioinformatics to the non-target organism risk assessment (Casacuberta et al., 2015; Fletcher et al., 2020).
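As a minimal illustration of such a bioinformatic screen, the Python sketch below flags exact shared 21-mers between either strand of a dsRNA trigger and a non-target transcript, following the ≥21 nt contiguous match criterion discussed above. The sequences are hypothetical; a real assessment would screen curated transcriptomes and, given the uncertainty about mismatch tolerance, would also need to consider near matches.

    def shared_kmers(dsrna_sense, transcript, k=21):
        """Return k-mers shared between either strand of a dsRNA trigger and a
        non-target transcript (exact matches only)."""
        comp = str.maketrans("ACGU", "UGCA")
        antisense = dsrna_sense.translate(comp)[::-1]  # reverse complement strand
        trigger_kmers = {
            strand[i:i + k]
            for strand in (dsrna_sense, antisense)
            for i in range(len(strand) - k + 1)
        }
        transcript_kmers = {transcript[i:i + k]
                            for i in range(len(transcript) - k + 1)}
        return trigger_kmers & transcript_kmers

    # Hypothetical sequences; any non-empty result would flag the species for
    # follow-up laboratory feeding studies.
    trigger = "AUGGCUAGCUAGGAUCCAGGAUUCGAACGUAGC"
    nt_mrna = "CCAUGGCUAGCUAGGAUCCAGGAUUCGAACGUAGCAA"
    print(sorted(shared_kmers(trigger, nt_mrna)))

As the S. curviseta example shows, such a match is a trigger for further testing rather than proof of sensitivity.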
DESIGN AND IMPLEMENTATION OF NON-TARGET LABORATORY TOXICITY STUDIES
The established test protocols published by the West Palaearctic Regional Section of the International Organisation for Biological and Integrated Control (IOBC/WPRS; Candolfi et al., 2000) or by the European and Mediterranean Plant Protection Organization (EPPO; https://pp1.eppo.int/standards/side_effects) for early-tier laboratory toxicity studies of chemical insecticides are based on contact toxicity. Those test protocols thus do not allow assessment of the non-target effects of dsRNA, for which oral uptake is the most important route of exposure. The lack of standardized test protocols addressing the oral route of exposure and detecting effects resulting from novel modes of action has recently been pointed out by the Panel on Plant Protection Products and their Residues of the European Food Safety Authority (2015), even though RNAi was not specifically mentioned.
However, experience is available with gut-active insecticidal proteins such as the Cry and VIP proteins from B. thuringiensis. Guidance exists on how to design and perform laboratory feeding studies with such proteins to provide high-quality, reliable and robust data (Romeis et al., 2011; De Schrijver et al., 2016). When designing a non-target laboratory study, the following main criteria should be considered (Romeis et al., 2011): (i) test substance characterization and formulation; (ii) method of delivery; (iii) concentration/dose; (iv) measurement endpoints; (v) test duration; (vi) control treatments; (vii) statistical considerations.
Since the formulation in which the dsRNA is provided has a strong effect on dsRNA uptake and the strength of the RNAi response in arthropods (as discussed above), care should be taken that the test substance is provided in a realistic formulation.
It is generally considered that the toxicity of insecticidal compounds such as chemical insecticides and Cry proteins from Bt increases with the concentration at which they are delivered. Thus, safety is added to the non-target studies by testing unrealistically high concentrations of the stressor of concern, to provide a margin of safety and to account for possible intra- and interspecific variability arising from the use of a surrogate test species. Defining the concentrations to be tested poses some challenges, for different reasons. First, the length of the dsRNA affects its effectiveness in triggering an RNAi response (Bolognesi et al., 2012; Miller et al., 2012); thus the margins of safety may vary between constructs. Second, there is evidence that there is no clear dose-response relationship, but that RNAi is triggered from a specific threshold dose onward and might be maximal at an optimal dose (Turner et al., 2006; Niu et al., 2019). Third, high doses may cause sequence-unspecific effects, as discussed above.
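To make the margin-of-safety logic explicit, one simple way to express it is as the ratio of the highest concentration tested without adverse effects to the expected environmental concentration (EEC). The values below are hypothetical, and, given the threshold-like dose-response of RNAi noted above, such margins should be interpreted more cautiously than for chemical insecticides.

    def margin_of_safety(no_effect_conc_ppm, expected_env_conc_ppm):
        """Ratio of the highest dietary concentration causing no adverse effects
        in the laboratory study to the expected environmental concentration."""
        return no_effect_conc_ppm / expected_env_conc_ppm

    # Hypothetical values: no effects observed at 10 ppm dsRNA in the diet,
    # worst-case field exposure estimated at 0.5 ppm -> 20-fold margin.
    print(margin_of_safety(10.0, 0.5))  # 20.0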
The endpoints to be recorded (lethal and sublethal) need to be selected based on the organism under investigation (and the reliability of the test system) and the gene that is targeted. While lethality is an obvious endpoint, the consideration of sublethal endpoints such as growth or development time is recommended (Roberts et al., 2020). First, they may hint at unexpected off-target effects; second, they may account for the fact that dsRNA is generally slow-acting (Baum and Roberts, 2014) and that the process typically does not reach 100% gene suppression (e.g., Bolognesi et al., 2012; Rangasamy and Siegfried, 2012); and third, they might address the fact that RNAi effects can be transgenerational, i.e., also affecting subsequent generations (Abdellatef et al., 2015). Sublethal endpoints are typically also recorded in the testing of chemical pesticides (e.g., Candolfi et al., 2000) and Bt proteins (De Schrijver et al., 2016; Roberts et al., 2020), even though mortality is the primary endpoint and the results from testing sublethal endpoints are often not reported in regulatory summaries. In any case, it is important to set decision-making criteria for every endpoint that is recorded. The duration of the study needs to be selected so that the measurement endpoints show a response should the test substance have an effect. Given the slow RNAi response, tests probably need to be extended in duration compared with those for Bt Cry proteins (e.g., Bachman et al., 2013, 2016). A key element of every laboratory study is the inclusion of a negative control treatment that allows effects caused by the test system (e.g., the fitness of the test organisms, the suitability of the diet) to be separated from those caused by the test substance. Ideally, the negative control consists of a dsRNA molecule that targets a heterologous sequence absent from the insect's genome and thus does not lead to specific gene silencing in the test species. This would control for any impact caused by a triggering of the RNAi cascade (sequence-unspecific effects). Typical examples that have been used for this purpose include dsRNA targeting the green fluorescent protein (GFP) and β-glucuronidase (GUS). However, there is some evidence that dsGFP causes adverse effects in arthropods when applied orally at very high doses (Nunes et al., 2013; Haller et al., 2019) or when injected (Jarosch and Moritz, 2012).
Positive controls, i.e., the addition of dsRNA molecules that are designed to silence a gene in the test insects can further help to interpret the study results as they provide evidence that the test system can detect a response and that the test species is sensitive to dietary RNAi. Positive controls have for example been deployed by Haller et al. (2019) when testing the effect of dsRNA targeting the vATPase-A of D. v. virgifera in two non-target ladybird beetles (Coleoptera: Coccinellidae). The data confirmed that two species of ladybirds are sensitive to dietary RNAi but that the non-target dsRNA molecule only had a weak effect. Another study using the same test substance in honey bees did not detect any effects in the positive control treatment raising doubts about the sensitivity of honey bees to dietary RNAi in general (Vélez et al., 2016).
CONCLUSION
In order to assess whether dsRNA-based pesticide sprays adversely affect valued non-target species in the agroecosystem, three questions need to be addressed: (1) Are the non-target arthropods exposed to biologically active dsRNA? (2) Do the non-target arthropods possess the RNAi machinery for dsRNA to trigger a response? and (3) are there sufficient sequence matches between the dsRNA molecule under consideration and the genome of the non-target arthropods to cause a sequencespecific effect.
While it is possible to make some generalizations regarding the level of exposure, potential uptake of dsRNA and the sensitivity to dietary RNAi for common non-target species in field crops, some open questions remain. For example it is still unclear to what extent the bioactive dsRNA molecule is transferred through the arthropod foodweb and whether penetration through the arthropod body wall is a relevant route of exposure for non-target species. Furthermore, it would be useful to evaluate whether the risk for certain arthropod taxa can be considered negligible because they digest dsRNA prior to ingestion and are thus unlikely to be exposed.
Concerning the hazard posed by dsRNA, it would be important to evaluate whether there are species or taxa that can be considered safe because they are insensitive to dietary RNAi in general (e.g., because they lack the dsRNA uptake mechanism). Also, uncertainty still exists regarding the sequence mismatches (and number thereof) between the targeted mRNA and the dsRNA that still allows for an RNAi response. There is evidence that genome information can help assess non-target effects. However, bioinformatics information is still lacking for most valued non-target arthropods. This information would help assist to predict non-target effects and select the most suitable (i.e., potentially sensitive) species to conduct feeding studies in the laboratory. Related to this, the power of bioinformatics for predicting non-target effects still needs to be further investigated before this information can be used to draw a conclusion about safety.
Consequently, it is essential to conduct feeding studies to assess whether the ingestion of dsRNA molecules poses a hazard to relevant non-target species. However, when planning the studies to be conducted in the laboratory with dsRNA-based pesticides, it would be necessary to add flexibility to the nontarget risk assessment framework used for chemical pesticides to allow a case-by-case assessment as is done for GE plants. A challenge remains the selection of the most appropriate negative and positive control treatments to ensure a robust interpretation of the study results and to minimize false negative and false positive results.
The main concern, however, is the fact that the carrier to which the dsRNA is bound or the formulation in which it is applied will be of ample importance as it not only affects the level at which non-target arthropods will be exposed, i.e., the stability and distribution of the active compound in the environment and in the insect gut and body, but also the extent of the RNAi response.
While there is a lot to profit from the experience with chemical pesticides and GE plants producing insecticidal proteins, insecticidal sprays based on dsRNA still pose some specific challenges to the non-target risk assessment.
AUTHOR CONTRIBUTIONS
JR and FW wrote and approved the manuscript.
FUNDING
JR and FW were funded by institutional funds. JR received funding from the OECD to participate in the workshop in Paris.
ACKNOWLEDGMENTS
This paper was given at the OECD Conference on Regulation of Externally Applied dsRNA-based Products for Management of Pests which took place at the OECD in Paris, France, on 10-12 April 2019, and which was sponsored by the OECD Co-operative Research Programme: Biological Resource Management for Sustainable Agricultural Systems whose financial support made it possible for JR to participate in the workshop. This manuscript summarizes *..*'s contribution during the OECD Conference on RNAi-based Pesticides, which was sponsored by the OECD Co-operative Research Programme: Biological Resource Management for Sustainable Agricultural Systems whose financial support made it possible for the author to participate in the conference. | 2020-06-05T13:07:24.837Z | 2020-06-04T00:00:00.000 | {
"year": 2020,
"sha1": "d29ddd3649931306065caeedfbd61d5143e0d15c",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2020.00679/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d29ddd3649931306065caeedfbd61d5143e0d15c",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253839450 | pes2o/s2orc | v3-fos-license | HYSTEROSCOPIC MYOMECTOMY
Leiomyomas are the most common pelvic tumors. Submucosal fibroids are a common cause of abnormal bleeding and infertility. Hysteroscopic myomectomy is the definitive management of symptomatic submucosal fibroids, with high efficacy and safety. Several techniques have been introduced over time and will be covered in depth in this manuscript. Advances in optics, fluid management, electrosurgery, smaller diameter scopes, and tissue removal systems, along with improved training have contributed to improving the safety and efficiency of hysteroscopic myomectomy.
Introduction
Direct visualization of the uterine cavity via hysteroscopy for diagnosis and management is essential for the care of all women with abnormal uterine bleeding, infertility, and suspected intra-uterine pathology. The technologic advances in optics, scope diameters, fluid management, tissue removal systems, and electrosurgery have enabled this minimally invasive approach for the conservative management of intra-uterine pathology and has expanded the ability to perform hysteroscopy in the outpatient setting, making many blind procedures less favorable [1]. This manuscript will focus on advances in hysteroscopic myomectomy for the safe and effective management of submucosal myoma.
Leiomyomas
Uterine leiomyomas arise from the myometrial smooth muscle and the fibroblasts. The immature cells of the myometrium are stimulated by the upregulation of the steroid receptors, leading to the growth of leiomyomas [2,3]. Studies have also shown increased aromatase activity in leiomyoma tissues, leading to increased growth and development [4].
The growth or regression rate of myoma size varies significantly. Peddada et al., on a premenopausal women survey, found that the myomas varied widely in their growth rates; they ranged from shrinkage of 89% to growth of 138% per 6 months. The median fibroid growth rate for both black and white women was 9% per 6 months. Solitary myomas appear to grow faster than multiple myomas [5].
Epidemiology
Fibroids are the most common tumor diagnosed in the reproductive organs. However, prevalence cannot be accurately assessed due to the asymptomatic and underdiagnosed nature in many patients. Fibroid tumors were found in 77 of 100 uteri after hysterectomy and 84% of the specimens contained multiple fibroids, supporting the extremely high prevalence in most women [6]. African American women have been shown to have fibroid tumors more frequently than Caucasian women. Studies have found increased levels of aromatase mRNA in the leiomyoma tissue of African American females [7]. The rate of growth and the likelihood of rapid expansion of a fibroid decrease with age in Caucasian women but not in African American women [5].
Family history, as well as obesity, alcohol intake, soybean product consumption, red meat consumption, hypertension, and vitamin D deficiency have been associated with increased prevalence of uterine fibroids [8][9][10][11].
Symptomatology
Many patients with fibroid tumors can be asymptomatic and incidentally diagnosed, and they should be reassured about the benign nature of fibroids in most cases and educated about their trajectory, possible symptoms, treatment options, and outcomes, along with red flags that should prompt additional care.
Approximately 70% of uterine fibroids lead to abnormal uterine bleeding, which is the most common indication for hysteroscopic myomectomy. Submucosal fibroids are the most implicated in abnormal uterine bleeding [12]. Plausible explanations include distortion of the uterine cavity and increase in the endometrial surface area. In addition, contractility of the myometrium can be impaired by the intervening fibroids [13]. Hysteroscopic resection of leiomyomas provides symptomatic relief in 70-99% of cases [14].
Other indications for hysteroscopic myomectomy include subfertility, dysmenorrhea, and pelvic pain [15]. The American Society for Reproductive Medicine currently states that resection should take place for cavity-distorting myomas to improve pregnancy rates and decrease risk of early pregnancy loss [16].
Preoperative Assessment
Patients with uterine fibroids may complain of abnormal uterine bleeding, infertility, or discomfort from compression of other organs or increased abdominal girth.
Submucosal fibroids are more related to abnormal uterine bleeding and infertility, as they are usually symptomatic even before reaching large volumes [17]. In the presence of bleeding, other causes of it should be investigated, from hematological, functional, or neoplastic causes.
The surgical procedure can be conservative or not of the uterine matrix; for this, it will be important to counsel the patient and evaluate the cases well so that the complexity of the conservative surgery, the myomectomy, can be evaluated.
During initial assessment, the history will bring up important information about the issues mentioned above and the desire for future pregnancy, which would lead to consideration of conservative surgery.
The physical examination, especially the bimanual exam, will provide information about the dimensions and presence of other uterine fibroids. Upon exam, the perception of intramural or subserous myoma, as well as submucous myomas, should prompt transvaginal ultrasound, as the classification (LASMAR) is based on the total volume of the nodules [18].
A proper physical examination is essential to rule out other causes for AUB. Vaginal atrophy should be addressed and gross lesions of the cervix and the vagina, such as polyps or a prolapsed myoma, should be evaluated [11]. During the physical examination, patients' tolerance to exam is assessed to aid in decision making of whether they can be candidates for office hysteroscopy.
Patients with AUB should undergo a complete blood count [19]. It is appropriate to evaluate kidney function and consider imaging the kidneys and ureters if fibroids are felt to impact the urinary tract. A pregnancy test should be performed for all reproductiveage women presenting with abnormal bleeding and before any intra-uterine procedures.
Infertility specialists should be part of the evaluation for patients with fibroids who have been trying to conceive.
Methods that investigate the submucous myoma in the uterine cavity are more accurate in relation to the myoma, confirming its presence, number, location, and correlation with the myometrium. These are hysterosalpingography and hysteroscopy.
The methods that allow visualization of the uterine cavity and the entire uterine wall are transvaginal and pelvic ultrasound, hysterosonography, and MRI of the pelvis.
Hysterosalpingography has the advantage, in patients with infertility, of concomitant evaluation of tubal patency and configuration. It signals the presence of myoma, but its location in relation to the myometrium is not efficient.
Hysteroscopy, as a method of direct visualization of the uterine cavity, offers all possible information about the intracavitary portion of the submucous myoma and a good assessment of the portion of the myoma, which is found in the myometrium, intramural portion. Thus, with hysteroscopy, it is possible to classify the submucous myoma and assess the need for other imaging methods. Another important function of hysteroscopy is to rule out other intrauterine causes of bleeding and to carry out an anatomopathological study of the endometrium or of the identified lesions, so it should, whenever possible, be indicated in the investigation ( Figure 1). evaluate kidney function and consider imaging the kidneys and ureters if fibroids to impact the urinary tract. A pregnancy test should be performed for all reprod age women presenting with abnormal bleeding and before any intra-uterine proce Infertility specialists should be part of the evaluation for patients with fibroids wh been trying to conceive.
Methods that investigate the submucous myoma in the uterine cavity are mor rate in relation to the myoma, confirming its presence, number, location, and corr with the myometrium. These are hysterosalpingography and hysteroscopy.
The methods that allow visualization of the uterine cavity and the entire uterin are transvaginal and pelvic ultrasound, hysterosonography, and MRI of the pelvis Hysterosalpingography has the advantage, in patients with infertility, of co tant evaluation of tubal patency and configuration. It signals the presence of myom its location in relation to the myometrium is not efficient.
Hysteroscopy, as a method of direct visualization of the uterine cavity, offers a sible information about the intracavitary portion of the submucous myoma and assessment of the portion of the myoma, which is found in the myometrium, intra portion. Thus, with hysteroscopy, it is possible to classify the submucous myoma a sess the need for other imaging methods. Another important function of hysterosc to rule out other intrauterine causes of bleeding and to carry out an anatomopatho study of the endometrium or of the identified lesions, so it should, whenever possi indicated in the investigation ( Figure 1). Ultrasonography (USG), especially transvaginal ultrasound (TVUS), is the r exam and is usually the first one performed; it has good accuracy, easy access, an cost, but it has a limited role in the presence of a large uterus or multiple nodules, terior acoustic shadowing makes it difficult to evaluate and count them. It is impor the evaluation of the intramural component of the myoma and the free myometrial up to the serosa, but it is operator-dependent ( Figure 2). Ultrasonography (USG), especially transvaginal ultrasound (TVUS), is the routine exam and is usually the first one performed; it has good accuracy, easy access, and low cost, but it has a limited role in the presence of a large uterus or multiple nodules, as posterior acoustic shadowing makes it difficult to evaluate and count them. It is important in the evaluation of the intramural component of the myoma and the free myometrial mantle up to the serosa, but it is operator-dependent ( Figure 2). Hysterosonography, an ultrasound procedure performed with the uterus distended with saline solution for greater contrast and detailing of the uterine cavity, is more accurate than TVUS in identifying the uterine cavity and myometrial mantle ( Figure 3). Magnetic resonance imaging of the pelvis (MRI) is indicated in uteri with a volume greater than 375 cm 3 or with more than four fibroids [20]. With excellent definition regarding the number, location, size of nodules, and proximity to other myomas, it is used to diagnose adenomyosis and adenomyoma, rule out non-fibroids and sarcomas, and to measure the myometrial mantle. The myometrial mantle refers to the distance between Hysterosonography, an ultrasound procedure performed with the uterus distended with saline solution for greater contrast and detailing of the uterine cavity, is more accurate than TVUS in identifying the uterine cavity and myometrial mantle ( Figure 3). Hysterosonography, an ultrasound procedure performed with the uterus distended with saline solution for greater contrast and detailing of the uterine cavity, is more accurate than TVUS in identifying the uterine cavity and myometrial mantle ( Figure 3). Magnetic resonance imaging of the pelvis (MRI) is indicated in uteri with a volume greater than 375 cm 3 or with more than four fibroids [20]. With excellent definition regarding the number, location, size of nodules, and proximity to other myomas, it is used to diagnose adenomyosis and adenomyoma, rule out non-fibroids and sarcomas, and to measure the myometrial mantle. The myometrial mantle refers to the distance between Magnetic resonance imaging of the pelvis (MRI) is indicated in uteri with a volume greater than 375 cm 3 or with more than four fibroids [20]. 
With excellent definition regarding the number, location, size of nodules, and proximity to other myomas, it is used to diagnose adenomyosis and adenomyoma, rule out non-fibroids and sarcomas, and to measure the myometrial mantle. The myometrial mantle refers to the distance between the deepest portion of the myoma in the myometrium and the serosa, being of unique importance in hysteroscopic myomectomy, since confirmation of transmural myoma (the one that reaches the serosa) contraindicates the hysteroscopic approach due to the high probability of uterine perforation during the procedure (Figure 4). the deepest portion of the myoma in the myometrium and the serosa, being of unique importance in hysteroscopic myomectomy, since confirmation of transmural myoma (the one that reaches the serosa) contraindicates the hysteroscopic approach due to the high probability of uterine perforation during the procedure ( Figure 4). OFFICE HYSTEROSCOPY-Significant advances have been introduced to facilitate office hysteroscopy for diagnostic and therapeutic purposes, such as smaller diameter scopes, flexible hysteroscopes, the miniresectoscope, and the tissue removal systems. This can be very valuable for surgical planning and patient education. Small, 1 to 2 cm type 0 submucosal myomas can potentially be removed in the office setting using hysteroscopic scissors or tissue removal systems. A prospective study of patient outcomes after hysteroscopic myomectomy found higher successful completion rates when the fibroids were up to 3 cm in size [21]. However, this size may be difficult for patients undergoing an office procedure. Incision of the pseudocapsule during office hysteroscopy may allow the protrusion of the fibroid into the uterine cavity, improving the likelihood of complete resection during subsequent hysteroscopic myomectomy [21].
Preoperative Classification
As hysteroscopic myomectomy is performed within the uterine cavity (limits of movement and approach), it needs a liquid medium to distend it (risk of intravasation) and, as it often advances into the myometrium (risk of bleeding and intravasation), prior assessment of the difficulty and possibility of hysteroscopic myomectomy is crucial. In addition to the surgeon's experience and the necessary instruments, and the patient's clinical conditions, fibroid classification is essential to minimize risks.
The classification of submucous myoma, standardizing it in levels, allows us to indicate the degree of difficulty and complexity of hysteroscopic myomectomy and the comparison of results. There are currently two main classifications: the ESGE, described by Wansteker et al. in 1993 [22], and the Lasmar-STEP-W, published in 2005 [18] (Tables 1 and 2). OFFICE HYSTEROSCOPY-Significant advances have been introduced to facilitate office hysteroscopy for diagnostic and therapeutic purposes, such as smaller diameter scopes, flexible hysteroscopes, the miniresectoscope, and the tissue removal systems. This can be very valuable for surgical planning and patient education. Small, 1 to 2 cm type 0 submucosal myomas can potentially be removed in the office setting using hysteroscopic scissors or tissue removal systems. A prospective study of patient outcomes after hysteroscopic myomectomy found higher successful completion rates when the fibroids were up to 3 cm in size [21]. However, this size may be difficult for patients undergoing an office procedure. Incision of the pseudocapsule during office hysteroscopy may allow the protrusion of the fibroid into the uterine cavity, improving the likelihood of complete resection during subsequent hysteroscopic myomectomy [21].
Preoperative Classification
As hysteroscopic myomectomy is performed within the uterine cavity (limits of movement and approach), it needs a liquid medium to distend it (risk of intravasation) and, as it often advances into the myometrium (risk of bleeding and intravasation), prior assessment of the difficulty and possibility of hysteroscopic myomectomy is crucial. In addition to the surgeon's experience and the necessary instruments, and the patient's clinical conditions, fibroid classification is essential to minimize risks.
The classification of submucous myoma, standardizing it in levels, allows us to indicate the degree of difficulty and complexity of hysteroscopic myomectomy and the comparison of results. There are currently two main classifications: the ESGE, described by Wansteker et al. in 1993 [22], and the Lasmar-STEP-W, published in 2005 [18] (Tables 1 and 2). The ESGE classification describes submucosal fibroids in three levels: level 0 = completely in the uterine cavity; level 1 = with its largest portion inside the uterine cavity; and level 2 = with its smallest portion in the uterine cavity. The Lasmar classification evaluates five parameters: nodule size, topography, extension of the base in relation to the affected wall, penetration into the myometrium, and affected wall, to signal the possibility, complexity, or impossibility of hysteroscopic surgery.
How to evaluate each parameter of the Lasmar classification: Size of the nodule. (SIZE)-It is the largest diameter of the myoma identified in one of the imaging tests. When the nodule measures up to 2 cm, it receives a score of 0; between 2 and 5 cm receives score 1, and measuring more than 5 cm receives score 2.
Location-(TOPOGRAPHY)-It is determined by the third of the uterine cavity where the myoma is located, with a score of 0 when it is located in the lower third, a score of 1 in the middle, and a score of 2 in the upper third.
Extension of the myoma base in relation to the affected wall (EXTENSION)-When the myoma base affects 1/3 or less of the uterine wall, it receives a score of 0; when the base of the nodule occupies 1/3 to 2/3 of the wall, the score is 1 and, when it affects more than 2/3 of the wall, the score is 2.
Penetration into the myometrium (PENETRATION) follows the same principle as ESGE in relation to penetration of the myoma into the myometrium: scores 0, 1, and 2.
Uterine wall (WALL)-Myoma of the anterior and posterior wall receives a score of 0, while the one located on the lateral wall scores 1.
Before the hysteroscopic myomectomy, other evaluations are important for the surgical procedure: the clinical evaluation of the patient, mainly blood count and coagulogram, since most of them have AUB; and the desire for a future pregnancy, due to the possibility of extensive surgeries and, consequently, uterine adhesions.
According to the Consensus Statement from the Global Congress on Hysteroscopy Scientific Committee, transvaginal ultrasound should be the first line to evaluate the number, size, and location of submucous myomas, with subsequent in-office hysteroscopy to allow, when feasible, a see-and-treat approach [23].
Hospital myomectomy will be indicated if outpatient myomectomy was not possible, determined from the patient's nontolerance to pain, lack of resources and qualification of the hysteroscopy specialist, and, mainly, the classification of the myoma.
The effect of nearby fibroids, represented by a submucous fibroid with another intramural fibroid next to it, is considered. In this case, the Lasmar classification starts to consider the set as a single node, thus changing the final classification. This is due to the complexity, risk, and surgical result, since, when the myomectomy of the submucous myoma is concluded, another intramural fibroid will be found, which will probably be close to the serosa or have a subserosal component ( Figure 5).
The effect of nearby fibroids, represented by a submucous fibroid with another intramural fibroid next to it, is considered. In this case, the Lasmar classification starts to consider the set as a single node, thus changing the final classification. This is due to the complexity, risk, and surgical result, since, when the myomectomy of the submucous myoma is concluded, another intramural fibroid will be found, which will probably be close to the serosa or have a subserosal component ( Figure 5). Thus, with the new classification, it will be possible to modify the surgical approach for myomectomy. In these cases, hysteroscopic myomectomy associated with laparoscopic myomectomy is indicated.
Hysteroscopic Myomectomy Techniques
Myomectomy, whether laparotomic or laparoscopic, is a well-established procedure, widely performed with the goal of uterine preservation. In both approaches, the myomectomy technique is the same: incision of the serosa up to the pseudocapsule, identification of the myoma, traction and movement of the nodule, assistance in dissecting the plane of the pseudocapsule within the myometrium, and enucleation of the myoma from the uterine wall. This fibroid enucleation technique is known and performed by all gynecologists. When the pseudocapsule is reached, the chance of preserving the uterus will be greater, with less bleeding and less myometrial damage, which differs from adenomyosis resection, which does not have a pseudocapsule [21] (Figure 6). Thus, with the new classification, it will be possible to modify the surgical approach for myomectomy. In these cases, hysteroscopic myomectomy associated with laparoscopic myomectomy is indicated.
Hysteroscopic Myomectomy Techniques
Myomectomy, whether laparotomic or laparoscopic, is a well-established procedure, widely performed with the goal of uterine preservation. In both approaches, the myomectomy technique is the same: incision of the serosa up to the pseudocapsule, identification of the myoma, traction and movement of the nodule, assistance in dissecting the plane of the pseudocapsule within the myometrium, and enucleation of the myoma from the uterine wall. This fibroid enucleation technique is known and performed by all gynecologists. When the pseudocapsule is reached, the chance of preserving the uterus will be greater, with less bleeding and less myometrial damage, which differs from adenomyosis resection, which does not have a pseudocapsule [21] (Figure 6).
The presentation of the techniques will make the presentation of this text more didactic, as we basically have two techniques, which can be associated or isolated, each one having its own indication of excellence. These are the enucleation technique and the slicing or myolysis technique, all of which can be performed in outpatient and inpatient hysteroscopy. The enucleation technique was described by Mazzon in 1995 [24] and Lasmar in 2001 [25]. Both techniques have the same basis for enucleation of the nodule, but Mazzon fragments the nodule until it reaches its intramural portion and then uses a "cold loop" to mobilize the fibroid, while Lasmar enucleates the entire fibroid and then slices it. The technique is to incise the endometrium around the submucosal myoma to reach the pseudocapsule (Lasmar) or to reach this plane by slicing the myoma close to the myometrium (Mazzon). Arriving at the pseudocapsule, some fibrous beams must be sectioned. The mobilization of the myoma, from the outside to the center, from front to back, progressively frees it from the myometrium, without significant bleeding and without thermal damage, with a lower risk of intravasation, as it does not cut the myometrial vessels. The mobilization of the myoma with its enucleation can be performed by all instruments, without energy, the use of scissors or tweezers being more appropriate in the outpatient clinic and Collins loop or "cold loop" in hospital hysteroscopy. The slicing technique is based on the progressive cutting of the submucosal portion of the myoma, maintaining the fragmentation of the intramural portion, leading, in most cases, to greater removal of the endometrium and myometrium, with greater thermal damage and risk of intravasation [26]. Fragmentation of the myoma can be performed with a semi-circle loop, with mono or bipolar energy, LASER fiber or morcellator. Thus, the technique of excellence in hysteroscopic myomectomy is the enucleation of the intramural portion of the submucosal myoma, mobilizing the nodule and separating it from the wall of the uterus, while fragmentation would deal with the removal of the myoma from the uterine cavity. The presentation of the techniques will make the presentation of this text more didactic, as we basically have two techniques, which can be associated or isolated, each one having its own indication of excellence. These are the enucleation technique and the slicing or myolysis technique, all of which can be performed in outpatient and inpatient hysteroscopy. The enucleation technique was described by Mazzon in 1995 [24] and Lasmar in 2001 [25]. Both techniques have the same basis for enucleation of the nodule, but Mazzon fragments the nodule until it reaches its intramural portion and then uses a "cold loop" to mobilize the fibroid, while Lasmar enucleates the entire fibroid and then slices it. The technique is to incise the endometrium around the submucosal myoma to reach the pseudocapsule (Lasmar) or to reach this plane by slicing the myoma close to the myometrium (Mazzon). Arriving at the pseudocapsule, some fibrous beams must be sectioned. The mobilization of the myoma, from the outside to the center, from front to back,
Outpatient Hysteroscopic Myomectomy
Outpatient hysteroscopic myomectomy is a safe procedure, immediately treating the lesion at the same time as diagnosis, reducing the patient's concern and anxiety, as well as complaints. It has a lower cost compared to surgery in a hospital environment and, for the hysteroscopic surgeon, the pleasure of performing the best technique and art of hysteroscopy. However, there are limits to be respected. The limits for outpatient hysteroscopic myomectomy depend on some factors, which, when combined, increase the difficulty of performing the procedure. These are related to the patient, the fibroid, the applied technology, and the hysteroscope.
As for the patient, the main limiting factor is her sensitivity to discomfort, which may allow for a diagnostic examination but precludes outpatient surgery.
The size of the fibroid, its location, fundic or cornual, and the greater penetration into the myometrium are determining factors to hinder or prevent an outpatient myomectomy (Lasmar classification). The combination of factors increases the difficulties in performing an outpatient myomectomy [21].
The instruments used and the type of energy can increase the possibility of performing outpatient surgery. Myoma mobilization is more useful in fibroids with greater penetration into the myometrium, while morcellation techniques are favorable in larger fibroids. Fundic and cornual fibroids can be challenging regardless of the technique.
The experience of the hysteroscopist is crucial for performing an outpatient hysteroscopic myomectomy.
For those new to outpatient surgery, it is advisable to start hysteroscopic myomectomy in the smallest fibroids, 1 to 2 cm, entirely in the uterine cavity, not worrying about immediate extraction or late expulsion of the nodule.
In our service, the most performed technique is using the 5 Fr tweezers or scissors. Initially, the endometrium is incised around the nodule until accessing the plane of the pseudocapsule; then, with the forceps or the body of the hysteroscope, entering between the nodule and the myometrium, the release is initially performed, laterally first and then centrally, until its complete release (Figure 7). At the end, the nodule will be loose in the cavity and can be fragmented or completely removed with grasping forceps. In cases of difficulty in removing the nodule from the cavity, the patient should be instructed to return in 7 to 10 days, during which time either the nodule will be spontaneously expelled by the patient-she should be oriented about At the end, the nodule will be loose in the cavity and can be fragmented or completely removed with grasping forceps. In cases of difficulty in removing the nodule from the cavity, the patient should be instructed to return in 7 to 10 days, during which time either the nodule will be spontaneously expelled by the patient-she should be oriented about this possibility-or it will have drastically decreased in size, allowing its removal.
When using instruments with energy, we can use the bipolar Collins loop of a miniresectosope system [26] (Figure 8) or the LASER fiber to incise the endometrium around the myoma. However, all mobilization is performed mechanically with forceps, a loop, or the resectoscope itself. The size of the fibroid can make it difficult to approach the base of the nodule fibroids can be resected more easily using an energized loop, which allows for a re in the nodule and greater ease of approaching the base for mobilization.
When performing an outpatient myomectomy, frequently, the nodule is lar the internal os, making it impossible to remove it from the uterine cavity at the procedure. As mentioned before, it is safe to leave the nodule in the cavity. In th bility of a resectoscope, LASER, or morcellator, the slicing of the lesion is perform its complete removal.
Hospital Hysteroscopic Myomectomy
Hospital myomectomy is a procedure in which the patient is also assisted by esthesiologist in a hospital environment. It is indicated when the myoma class signals a complex hysteroscopic myomectomy, in patients with low tolerance to patient procedure, and when the hysteroscopist does not have instruments or ex in outpatient myomectomy. Compared with outpatient myomectomy, hospital m The size of the fibroid can make it difficult to approach the base of the nodule. Larger fibroids can be resected more easily using an energized loop, which allows for a reduction in the nodule and greater ease of approaching the base for mobilization.
When performing an outpatient myomectomy, frequently, the nodule is larger than the internal os, making it impossible to remove it from the uterine cavity at the time of procedure. As mentioned before, it is safe to leave the nodule in the cavity. In the availability of a resectoscope, LASER, or morcellator, the slicing of the lesion is performed with its complete removal.
Hospital Hysteroscopic Myomectomy
Hospital myomectomy is a procedure in which the patient is also assisted by the anesthesiologist in a hospital environment. It is indicated when the myoma classification signals a complex hysteroscopic myomectomy, in patients with low tolerance to the outpatient procedure, and when the hysteroscopist does not have instruments or experience in outpatient myomectomy. Compared with outpatient myomectomy, hospital myomectomy generally has a longer operative time, with the possibility of bleeding and intravasation, risks inherent to complex myomectomy with a more difficult approach, and, therefore, should be performed under anesthesia and in a surgical center [26].
The advantages of hospital myomectomy, in addition to the patient not feeling any discomfort or pain, are: safety in patient monitoring, and bleeding and fluid balance control. This control is essential, as these are the cases of greater complexity and risk of complications. Hospital hysteroscopic myomectomy is a highly complex procedure, being associated with the risks of bleeding, uterine perforation, incomplete surgery, pelvic organ injuries, and intravasation [20].
Anesthesia can be sedation in myomectomies with shorter operative time and spinal block in those with a longer time, so that there is greater control of the patient's level of consciousness and less use of medications. In this way, each surgical team will decide the type of anesthesia according to the technique and technology used, operative time, surgeon's experience, and complexity of the case. As previously reported, myomectomy can be divided into fibroid enucleation and fragmentation or myolysis of the fibroid, without the use of energy or with different energy modalities.
It is important to emphasize that the technique and systems influence the complete removal or not of the myoma, but two factors are decisive: the surgeon's experience and the classification of the myoma.
Even in the hospital environment, the use of scissors or tweezers can also be effective, especially in smaller and more intracavitary fibroids. The technique is the same as described for outpatient myomectomy: access to the pseudocapsule and mobilization of the base with the enucleation. This is a simple technique and does not need dilation of the cervix, just the operative canal and good training for ambulatory operative hysteroscopy [26].
For the introduction of the resectoscope, dilation of the cervix is frequently necessary, except for the miniresector, with its 16 Fr diameter, which can be attached to the same hysteroscope for diagnosis.
The technique with the resectoscope, regardless of the type of energy, is the same, with planned loop movements always in the fundus-cervical direction, with the angulation of the resectoscope axis to define the degree of resection depth. These two movements have to be thought out and prepared before activating the energy so that only the myoma is resected, avoiding resection of the myometrium and the risk of perforation, and so that the penetration of the cut is as desired, without risk ( Figure 9). It is important to emphasize that the technique and systems influence the complete removal or not of the myoma, but two factors are decisive: the surgeon's experience and the classification of the myoma.
Even in the hospital environment, the use of scissors or tweezers can also be effective, especially in smaller and more intracavitary fibroids. The technique is the same as described for outpatient myomectomy: access to the pseudocapsule and mobilization of the base with the enucleation. This is a simple technique and does not need dilation of the cervix, just the operative canal and good training for ambulatory operative hysteroscopy [26].
For the introduction of the resectoscope, dilation of the cervix is frequently necessary, except for the miniresector, with its 16 Fr diameter, which can be attached to the same hysteroscope for diagnosis.
The technique with the resectoscope, regardless of the type of energy, is the same, with planned loop movements always in the fundus-cervical direction, with the angulation of the resectoscope axis to define the degree of resection depth. These two movements have to be thought out and prepared before activating the energy so that only the myoma is resected, avoiding resection of the myometrium and the risk of perforation, and so that the penetration of the cut is as desired, without risk ( Figure 9). The movement of the resection loop with energy can only be moved in the funduscervix direction, but, without energy, it can be driven in any direction, even the cervixfundus, as it will have mechanical action. This nonenergy loop movement, called a cold loop, is often used to mobilize and enucleate the submucosal fibroid.
Slicing Technique
The principle of the slicing or slicing technique is the partial and progressive removal of the myoma, in fragments, starting at its surface and gradually working towards its base. Slices of myoma are removed with the semicircle loop in the mono or bipolar resectoscope, moving it energized, from the fundus to the cervix. The distension medium is different according to the type of energy; with monopolar energy, non-electrolytic media are used, which are 1.5% glycine, mannitol, and mannitol/sorbitol, while, with bipolar energy, the The movement of the resection loop with energy can only be moved in the funduscervix direction, but, without energy, it can be driven in any direction, even the cervixfundus, as it will have mechanical action. This nonenergy loop movement, called a cold loop, is often used to mobilize and enucleate the submucosal fibroid.
Slicing Technique
The principle of the slicing or slicing technique is the partial and progressive removal of the myoma, in fragments, starting at its surface and gradually working towards its base. Slices of myoma are removed with the semicircle loop in the mono or bipolar resectoscope, moving it energized, from the fundus to the cervix. The distension medium is different according to the type of energy; with monopolar energy, non-electrolytic media are used, which are 1.5% glycine, mannitol, and mannitol/sorbitol, while, with bipolar energy, the electrolytic media, physiological solute 0.9%, and ringer lactate are used [27] (Figure 10).
loop, is often used to mobilize and enucleate the submucosal fibroid.
Slicing Technique
The principle of the slicing or slicing technique is the partial and progressive removal of the myoma, in fragments, starting at its surface and gradually working towards its base. Slices of myoma are removed with the semicircle loop in the mono or bipolar resectoscope, moving it energized, from the fundus to the cervix. The distension medium is different according to the type of energy; with monopolar energy, non-electrolytic media are used, which are 1.5% glycine, mannitol, and mannitol/sorbitol, while, with bipolar energy, the electrolytic media, physiological solute 0.9%, and ringer lactate are used [27] (Figure 10). Due to the crowding of myoma fragments in the uterine cavity, it is necessary to interrupt the procedure with emptying of the cavity, so that the vision of the cavity and the myoma is recovered.
It has the advantage of being able to surgically treat larger nodules, removing the myoma fragments from the cavity, performing volumetric reduction, and performing hemostasis at the same time. As a disadvantage, there is greater bleeding in the procedure (the myoma vessels are superficial), greater possibility of intravasation, especially in myomas with a greater intramural component, greater risk of perforation, and frequent interruption of surgery to remove fragments, in addition to greater endometrial and myometrial damage adjacent to the myoma.
There is a possibility of incomplete myomectomy because, when there are signs of massive fluid absorption and risk of intravasation syndrome, or long operative time and risk of perforation, the procedure is interrupted for a new one around three months later.
The regulation of the electrosurgical generator is cut-coagulation and blend determined by the surgeon's need for each case and according to each generator, varying cut from 60 to 120 W and coagulation from 40 to 60 W.
Remember that the speed of movement of the loop can also determine the action of more cutting or more coagulation; the faster moving loop cuts more and coagulates less, while the slower one coagulates more than it cuts.
Morcellator Technique (Hysteroscopic Mechanical Tissue Removal)
Hysteroscopic Tissue Removal Systems (TRS) perform fragmentation and suction of endometrial pathology, such as polyps and fibroids. There are three main brands currently available on the market (e.g., Myosure, Truclear, and Symphion) and they are mainly used for types 0 and 1 intrauterine leiomyomas. A rapidly rotating blade resects small portions of the fibroid, and these are suctioned into a tissue trap for pathologic evaluation. This technique alleviates the need for removal of fibroid "chips" from the cavity and it has been shown to be faster for trainees.
Hysteroscopic tissue removal systems introduced an efficient, easy-to-use tool for hysteroscopic myomectomy. However, there are limitations, such as the high cost of the disposable element, as well as the difficulty resecting fundal fibroids and deep type 2 fibroids. In addition, dense and calcified fibroids can be very challenging to resect with these devices. One study showed that switching to a resectoscope in these cases allowed for finalization of the procedure [28]. However, another meta-analysis showed statistically significant improvement in complete resection of pathology when tissue extraction devices were used [29]. The surgeon should tactfully choose the best tool based on the pathology, its size, location, the patient's goals, and the surgeon's expertise.
The technique of surgery at the hospital is the same as that of the outpatient clinic, also with the physiological solute as distension media, with hospital surgery being more indicated for submucosal myoma. Due to the difficulty in fragmenting fibroids with smallercaliber blades, surgery in the operating room, with instruments of greater caliber and power, makes myomectomy feasible with less operative time and improved efficiency [30,31].
Morcellators have expanded their use, with good acceptance, especially by those who are starting hysteroscopic surgery, due to practicality of use, short learning time, and non-use of energy (only mechanics), with good performance in the treatment of intracavitary lesions. Its limits are more intramural lesions, and lesions in cornual and fundic regions (Figure 11).
LASER Technique
The application of LASER will lead to myolysis with total destruction of the myoma or a late expulsion of it after volume reduction and ischemia. Therefore, it can be considered for nodules with a greater intramural component, in which surgery with a resectoscope could pose risks. The most common type of LASER used in hysteroscopy is the diode laser device, with a 5 Fr fiber, capable of mixing two different wavelengths, 980 nm and 1470 nm. A 980 nm wavelength is more absorbed by hemoglobin, leading to a higher coagulation effect. At 1470 nm, we will have a higher vaporization effect due to affinity to water. This mixing capacity allows combined effects that can be adjusted for each tissue and or surgery. In this deeper approach to the myometrium, the control of the free myometrial mantle should be monitored by Doppler ultrasound in order to avoid thermal damage to neighboring organs.
With the LASER fiber, the enucleation technique can be performed, incising the endometrium until reaching the pseudocapsule and then mobilizing the nodule mechanically (with another instrument), or waiting for its spontaneous expulsion or a second hysteroscopy with the resectoscope in 1 to 2 months, with intracavitary myoma (the OPPIuM Technique) [32] (Figure 12).
Radiofrequency Ablation Technique
This is also a myolysis technique, which is applied through ultrasound guidance. The system uses a handpiece for radiofrequency ablation, connected to an intrauterine ultrasound probe, forming a single integrated device so that the rods penetrate the myoma.
LASER Technique
The application of LASER will lead to myolysis with total destruction of the myoma or a late expulsion of it after volume reduction and ischemia. Therefore, it can be considered for nodules with a greater intramural component, in which surgery with a resectoscope could pose risks. The most common type of LASER used in hysteroscopy is the diode laser device, with a 5 Fr fiber, capable of mixing two different wavelengths, 980 nm and 1470 nm. A 980 nm wavelength is more absorbed by hemoglobin, leading to a higher coagulation effect. At 1470 nm, we will have a higher vaporization effect due to affinity to water. This mixing capacity allows combined effects that can be adjusted for each tissue and or surgery. In this deeper approach to the myometrium, the control of the free myometrial mantle should be monitored by Doppler ultrasound in order to avoid thermal damage to neighboring organs.
With the LASER fiber, the enucleation technique can be performed, incising the endometrium until reaching the pseudocapsule and then mobilizing the nodule mechanically (with another instrument), or waiting for its spontaneous expulsion or a second hysteroscopy with the resectoscope in 1 to 2 months, with intracavitary myoma (the OPPIuM Technique) [32] (Figure 12).
LASER Technique
The application of LASER will lead to myolysis with total destruction of the myoma or a late expulsion of it after volume reduction and ischemia. Therefore, it can be considered for nodules with a greater intramural component, in which surgery with a resectoscope could pose risks. The most common type of LASER used in hysteroscopy is the diode laser device, with a 5 Fr fiber, capable of mixing two different wavelengths, 980 nm and 1470 nm. A 980 nm wavelength is more absorbed by hemoglobin, leading to a higher coagulation effect. At 1470 nm, we will have a higher vaporization effect due to affinity to water. This mixing capacity allows combined effects that can be adjusted for each tissue and or surgery. In this deeper approach to the myometrium, the control of the free myometrial mantle should be monitored by Doppler ultrasound in order to avoid thermal damage to neighboring organs.
With the LASER fiber, the enucleation technique can be performed, incising the endometrium until reaching the pseudocapsule and then mobilizing the nodule mechanically (with another instrument), or waiting for its spontaneous expulsion or a second hysteroscopy with the resectoscope in 1 to 2 months, with intracavitary myoma (the OPPIuM Technique) [32] (Figure 12).
Radiofrequency Ablation Technique
This is also a myolysis technique, which is applied through ultrasound guidance. The system uses a handpiece for radiofrequency ablation, connected to an intrauterine ultrasound probe, forming a single integrated device so that the rods penetrate the myoma.
Radiofrequency Ablation Technique
This is also a myolysis technique, which is applied through ultrasound guidance. The system uses a handpiece for radiofrequency ablation, connected to an intrauterine ultrasound probe, forming a single integrated device so that the rods penetrate the myoma. This real-time ultrasound integration allows the physician to visualize and target as many fibroids as possible so they can be addressed [33].
Mazzon's Cold Loop Technique
Mazzon's technique was described in 1995 and is based on the resection of the submucosal component of the myoma using a resectoscope with a semicircle loop, with mono or bipolar energy, until reaching the intramural portion of the myoma. Upon reaching the pseudocapsule, the loop is changed to a more rigid one, which is not energized (cold loop), so that the myoma is mechanically mobilized until its enucleation. Then, a loop with energy returns to fragment and remove the myoma, which was left free in the uterine cavity. It has the advantage of approaching the myometrium without current, with lower risk of perforation and lower risk if perforation happens (thermal injury to other organs), with less thermal damage to the myometrium, less bleeding, and less intravasation [34] (Figure 13). This real-time ultrasound integration allows the physician to visualize and target as many fibroids as possible so they can be addressed [33].
Mazzon's Cold Loop Technique
Mazzon's technique was described in 1995 and is based on the resection of the submucosal component of the myoma using a resectoscope with a semicircle loop, with mono or bipolar energy, until reaching the intramural portion of the myoma. Upon reaching the pseudocapsule, the loop is changed to a more rigid one, which is not energized (cold loop), so that the myoma is mechanically mobilized until its enucleation. Then, a loop with energy returns to fragment and remove the myoma, which was left free in the uterine cavity. It has the advantage of approaching the myometrium without current, with lower risk of perforation and lower risk if perforation happens (thermal injury to other organs), with less thermal damage to the myometrium, less bleeding, and less intravasation [34] ( Figure 13).
Mobilization and Enucleation Technique Using the Pseudocapsule-Lasmar
The technique published by Lasmar in 2002 has the name of "direct mobilization of the myoma". It consists of incising the endometrium around the submucosal myoma using the resectoscope with the Collins loop until reaching the pseudocapsule, releasing the existing fibrous beams. Once the pseudocapsule is identified, with the same instrument, a movement similar to that performed in laparotomic and laparoscopic myomectomy is performed, separating the myoma from the myometrium in its entirety, causing it to slide into the myometrium.
As there is no traction, as in abdominal surgery, the base of the fibroid is released, starting at the lateral edges, entering with the Collins loop in the cervix-fundus direction without energy at the same time as slight mobilizations in the fibroid are made with the resectoscope assembly. The Collins loop is kept moving from the lateral to the central part of the myoma, parallel to the nodule and moving it with the hysteroscope, leading the myoma to progressively migrate to the uterine cavity until its complete release from the uterine wall. This is facilitated by the decompression of the myometrium, which, compressed by the growth of the nodule, progressively returns to its normal position by releasing the pseudocapsule, causing the intramural lesion to become intracavitary. This technique, like all those that perform myoma enucleation, has the same advantages: lower
Mobilization and Enucleation Technique Using the Pseudocapsule-Lasmar
The technique published by Lasmar in 2002 has the name of "direct mobilization of the myoma". It consists of incising the endometrium around the submucosal myoma using the resectoscope with the Collins loop until reaching the pseudocapsule, releasing the existing fibrous beams. Once the pseudocapsule is identified, with the same instrument, a movement similar to that performed in laparotomic and laparoscopic myomectomy is performed, separating the myoma from the myometrium in its entirety, causing it to slide into the myometrium.
As there is no traction, as in abdominal surgery, the base of the fibroid is released, starting at the lateral edges, entering with the Collins loop in the cervix-fundus direction without energy at the same time as slight mobilizations in the fibroid are made with the resectoscope assembly. The Collins loop is kept moving from the lateral to the central part of the myoma, parallel to the nodule and moving it with the hysteroscope, leading the myoma to progressively migrate to the uterine cavity until its complete release from the uterine wall. This is facilitated by the decompression of the myometrium, which, compressed by the growth of the nodule, progressively returns to its normal position by releasing the pseudocapsule, causing the intramural lesion to become intracavitary. This technique, like all those that perform myoma enucleation, has the same advantages: lower risk of perforation and risks associated with perforation (thermal injury to other organs), less thermal damage to the myometrium, and less bleeding and intravasation [35].
With the myoma totally in the cavity or almost totally, the nodule is sliced, using the Collins loop, in the longitudinal direction to remove it in large fragments, improving efficiency ( Figure 14). risk of perforation and risks associated with perforation (thermal injury to other organs), less thermal damage to the myometrium, and less bleeding and intravasation [35].
With the myoma totally in the cavity or almost totally, the nodule is sliced, using the Collins loop, in the longitudinal direction to remove it in large fragments, improving efficiency ( Figure 14). Sometimes, in the presence of large fibroids, it is difficult to mobilize the fibroid, and its release from the myometrium is not complete. In these cases, this great intracavitary Sometimes, in the presence of large fibroids, it is difficult to mobilize the fibroid, and its release from the myometrium is not complete. In these cases, this great intracavitary portion of the myoma ends up touching the opposite wall, leaving no more space for progression, making it impossible to move. In these cases, fragmentation is necessary, even though the nodule is not completely free, but, even so, the safety level of the procedure is increased, since the largest portion of the nodule is already in the uterine cavity, with its migration from the myometrium deep to the surface.
With this technique, the limit of hysteroscopic myomectomy can be extended: the minimum myometrial mantle thickness measured before surgery can be reduced from 10 to 3 mm [36].
Knowledge of the different techniques is important because not all hospitals have every option available; knowing them, knowing how to use them, and knowing their best indications and limits are fundamental for anyone qualified in hysteroscopic surgery.
Regardless of the technique, some fibroids will not be removed in a single operative session; some procedures should be interrupted for safety, reinforcing the importance of the preoperative assessment of the patient's clinical condition and the classification of the fibroid, data that are decisive for anticipating and preventing risk [20].
In incomplete myomectomy, a GnRH analogue can be prescribed for 2 to 3 months to cause the migration of the residual intramural component to the uterine cavity and, before the new surgical intervention, a second outpatient hysteroscopy and tests are performed to classify the myoma. In many cases, in the outpatient hysteroscopy itself, a myomectomy can be completed or the uterine cavity can be seen to be normal, as the myoma has been expelled [26].
Mainly in patients with infertility, outpatient second-look hysteroscopy is indicated 45 to 60 days after surgery to review the uterine cavity and lyse the adhesions, which may appear with the procedure and will be easily lysed with scissors or with the simple passage of the hysteroscope.
As operative bleeding is one of the most frequent risks in hysteroscopic myomectomy, a patient with severe anemia should not undergo surgery until the anemia has been corrected. Some treatments may be used preoperatively, mainly to block menstruation and allow hematological recovery of the patient, and others in the perioperative period to reduce intraoperative and postoperative bleeding.
Complications of Hysteroscopic Myomectomy
Among hysteroscopic surgeries, myomectomy is the one with the highest incidence of complications.
a-Laceration of the cervix can occur at the time of dilation due to the positioning of the Pozzi forceps and with the Hegar dilators when dilation is difficult, especially in patients who used GnRH before the procedure and in older patients. Revision of the laceration site, with tamponade and/or suturing of the area, has excellent results.
b-Uterine perforation can occur at the time of cervical dilatation or during surgery. When perforation occurs without the use of energy, clinical observation alone, with the patient hospitalized for a few hours, is sufficient, as there will rarely be a need for surgical intervention. If uterine distention becomes impossible, the procedure must be suspended and the patient returns to the operating room in 3 months. However, if energy was being used at the time of perforation, regardless of which type, investigation of the pelvic and abdominal cavity is imperative, even though it is very likely to be negative. Laparoscopy or laparotomy may rule out bowel and/or bladder injuries. Bladder injury may be suspected in the presence of hematuria, as the patient undergoing complex myomectomy has bladder catheterization for fluid balance. Hematuria will only happen when the anterior wall of the uterus is perforated, but mild hematuria can also occur when the bladder catheter is moved, which should be evaluated with cystoscopy before considering laparoscopy.
Intestinal injury is more difficult to suspect without laparoscopy/laparotomy, especially thermal injuries, which may take three days or more to fistulize, with potentially serious consequences such as peritonitis and sepsis.
Vascular lesions can be suspected with hemodynamic instability. Uterine perforation should be suspected in hysteroscopic myomectomy when there is very accelerated negative fluid balance (rapid escape of the distending medium) and vision of the uterine cavity cannot be established.
Attention is needed because "negative laparoscopy" may be justified, but the undiagnosed and untreated complication is not.
To reduce the possibility of uterine perforation at the time of cervical dilatation, some precautions should be taken:
1-Perform the bimanual exam to assess size, version, and uterine flexion.
2-Perform a previous diagnostic hysteroscopy to identify the path and start dilatation with Hegar's dilator #4.
3-Remove the speculum after clamping the cervix with the Pozzi forceps to facilitate the rectification of the path.
4-Use dilators with a 0.5 cm diameter progression.
5-Limit with your index finger how much of the dilator will progress into the uterine cavity; the dilation is for the internal os only, so there is no need to advance the Hegar dilator to the fundus of the uterus.
c-Uterine bleeding can happen due to the superficial vessels of the myomas or from the myometrial bed. The treatment, as previously described, consists of vessel coagulation, anti-hemorrhagic drugs, oxytocin, and placement of an intracavitary Foley catheter, with a well-distended balloon, for 4 to 12 h, always with patient monitoring.
d-Fluid Overload
Strict fluid balance is important, with great care once the negative balance reaches 1000 mL and avoidance of reaching 2000 mL. Rapid and massive absorption of fluids can lead to pulmonary edema, heart failure, encephalopathy, brain damage, seizures, coma, and death. When the distention medium is 1.5% glycine, massive absorption initially causes nausea, vomiting, and dizziness. Excess fluid in the intravascular space can lead to hemodilution, overload and heart failure, hyponatremia, and increased ammonia, with encephalopathy, brain damage, and death. The severity of complications is directly associated with the volume absorbed in a short period of time. Prolonged surgical time can also increase absorption of the distention medium [40,41].
Some researchers use vasopressin and oxytocin to decrease the chance of intraoperative intravasation and bleeding, although further studies proving their effectiveness are still awaited [42,43].
With the mannitol-sorbitol solution, massive absorption of fluid also causes hemodilution, which can lead to heart failure. As the condition is due only to hyperhydration, with no increase in plasma ammonia, encephalopathy is less frequent and less severe. It should be avoided in diabetic patients due to the possibility of hyperglycemia.
The use of saline and Ringer lactate combined with bipolar current eliminates the possibility of electrolyte complications but not the risk of fluid overload and, consequently, heart failure.
e-Infection is not frequent in hysteroscopic surgeries; in myomectomy, it is possible due to the presence of residues, which could become infected, and the occlusion of the internal os, leading to the formation of hematometra and pyometra.
f-Air embolism is rare but can be serious and fatal. Ambient air may be responsible, penetrating the venous circulation during dilation of the cervical canal or through a breach in the myometrium, with greater risk with the patient in the Trendelenburg position, where the heart is below the level of the uterus. The risk of air embolism is similar in hysteroscopic myomectomy and other types of hysteroscopic surgery. The gas produced in bipolar vaporization with physiological saline is similar to that in monopolar vaporization with 1.5% glycine and does not appear to be responsible for causing embolism [44].
g-Late complications
Some late complications can occur, such as adhesions and placenta accreta, especially in areas of large resections. Some authors describe an incidence of adhesions after hysteroscopic myomectomy ranging from 1 to 13% [45]. Some authors suggest post-surgery intrauterine hyaluronic acid gel, others the placement of a nonhormonal intrauterine device. What all services recommend is a second look at 45 to 60 days postoperatively to review the uterine cavity and lyse adhesions, especially in the patient who desires to conceive [46].
Clinical Outcomes of Hysteroscopic Myomectomy
Abnormal Uterine Bleeding (AUB): The success rate of hysteroscopic myomectomy has been reported to be as high as 70-99%. Factors determining the success rate include the size, number, and location of the fibroids, in addition to the surgeon's expertise and whether the resection was complete or incomplete [47].
Fertility outcomes: Although submucosal fibroids are frequently implicated in patients with subfertility and hysteroscopic myomectomy is commonly recommended, the literature is currently inconsistent in this regard. The Practice Committee of the American Society for Reproductive Medicine (ASRM) created guidelines that elaborate on the correlation between fertility and leiomyomas [16]. The heterogeneity of the study populations, type, location, number and size of fibroids, and the inclusion and exclusion criteria lead to difficulty drawing accurate conclusions that can guide practice recommendations [48][49][50].
Final Considerations
Hysteroscopic myomectomy is the most difficult and complex of the hysteroscopic surgeries, with potentially serious risks and complications. Nevertheless, it can be performed safely and efficiently for the treatment of intrauterine disease and is the best therapeutic option for submucous myomas.
Safety in myomectomy rests on two distinct moments: the preoperative evaluation and the operative act. In the preoperative period, it consists of the hemodynamic evaluation of the patient, knowledge of the desire for a future pregnancy, and the classification of the uterine fibroid.
| 2022-11-16T16:49:41.876Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "c75e55e53506b80a9d5553b1d6fcdafec646fcca",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/58/11/1627/pdf?version=1668155412",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7779777a8de01c95e6fa5e457bd46f05275aa8ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229281540 | pes2o/s2orc | v3-fos-license | A prognostic nomogram incorporating red cell distribution width for patients with intracerebral hemorrhage
Abstract Intracerebral hemorrhage (ICH) is the second most common subtype of stroke, with high mortality and morbidity, and it lacks effective prognostic markers. The aim of this research is to construct a new and valuable prognostic nomogram incorporating red blood cell distribution width (RDW) for ICH patients. We retrospectively analyzed 953 adult patients with ICH. The impacts of RDW on short-term mortality and functional prognosis were evaluated using the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the area under the curve (AUC), respectively, which were compared with the Glasgow coma scale (GCS) and the ICH score. Independent prognostic factors were identified by univariate and multivariate logistic regression analysis. A nomogram based on RDW for nerve functional prognosis was further constructed and validated. Its clinical value was subsequently explored utilizing decision curve analysis. Cumulative clinical results were retrieved for 235 inpatients from Jan 2012 to June 2017. In the 30-day mortality sets, GCS and the ICH score had better prognostic performance than RDW (AUC: 0.929 and 0.917 vs 0.764; AIC: 124.101 and 134.188 vs 221.372; BIC: 131.021 and 141.107 vs 228.291). In the 30-day functional prognosis sets, the results of the evaluation systems were inconsistent. GCS was the best parameter for predicting outcome according to AIC (262.350 vs 276.392 and 264.756) and BIC (269.269 vs 283.311 and 271.675); however, RDW outperformed GCS and the ICH score in terms of AUC (0.784 vs 0.759 and 0.722). Age, GCS, RDW, platelet distribution width, and surgery were independent prognostic factors by multivariate logistic regression analysis, and their coefficients were used to formulate a nomogram. This nomogram provides accurate prediction with a concordance index of 0.880 (95% CI, 0.837–0.922), higher than Harrell's concordance index for the GCS system of 0.759 (95% CI, 0.698–0.819) and for RDW of 0.784 (95% CI, 0.721–0.847). The calibration plots showed optimal consistency between bootstrap-predicted and actually observed values of 30-day unfavorable prognosis. Decision curve analysis showed an increased net benefit for utilizing the nomogram. High RDW values are associated with an unfavorable outcome after ICH. The established nomogram incorporating RDW should be considered for 30-day functional prognosis.
Introduction
Intracerebral hemorrhage (ICH), the second most common pathological type of stroke, remains a cause of morbidity and mortality and is associated with significant long-term disability. [1,2,3,4] Additionally, it comprises 10% to 15% of all strokes, with a global incidence rate of 24.6/100,000 and a growing incidence related to the use of anticoagulation, antiplatelet drugs, and an aging population. [3,4] Despite ongoing efforts to improve therapeutic interventions and risk stratification, accurately predicting the therapeutic effect of treatments and the prognosis of ICH remains difficult. The Glasgow coma scale (GCS) is a simple neurological scale that is currently used to predict the clinical outcome of ICH. [5] However, several studies have demonstrated defects, including unsatisfactory prediction accuracy and the omission of important prognostic factors, when using GCS. [6,7] There is, therefore, an urgent need for an accurate prognostic model, which can provide guidelines for treatment and rehabilitation.
Red blood cell distribution width (RDW) is a simple and cheap hematologic parameter with multiple clinical applications. [8] RDW describes the heterogeneity of circulating erythrocyte volume (anisocytosis) and is primarily used for the differential diagnosis of anemias. Increased RDW indicates a higher proportion of either large or small erythrocytes, which can be attributed to numerous metabolic disorders such as inflammation. [9] In the last decade, the number of studies investigating the correlation between RDW and human diseases has increased exponentially. [10,11] RDW has also been proposed as a robust predictive marker of negative clinical outcome. [12] High RDW indicates an increased incidence and all-cause mortality of cardiovascular disorders. [13,14] Interestingly, in acute cerebral infarction and subarachnoid hemorrhage, RDW has been associated not only with mortality but also with functional outcomes. [15,16] Moreover, the inflammatory reaction plays a crucial role during the initiation and progression of ICH, and different RDW levels may reflect the severity of ICH. Altintas and colleagues confirmed that initial RDW can provide an effective risk stratification of hematoma growth and its outcome. [17] To better identify significant predictors of poor outcome, we conducted a retrospective study to assess the prognostic value of RDW. The nomogram, a graphical algorithm for prognostic modeling, allows for simultaneous consideration of multiple predictors, including the established staging system, and possesses a higher power efficiency. Subsequently, we established and validated a novel nomogram incorporating significant factors and compared it with GCS using decision curve analysis (DCA).
Study population
The retrospective research consisted of consecutive patients admitted to the Tianjin Baodi Affiliated Hospital of Tianjin Medical University (Tianjin, China) from January 2012 to June 2017. The study was carried out in accordance with the Helsinki Declaration, based on a study protocol approved by the Ethical Committee of Tianjin Baodi Affiliated Hospital of Tianjin Medical University.
Inclusion and exclusion
Patients with clinical and laboratory data that met the following eligibility criteria were included: i) 18 years of age or older, ii) a definite diagnosis of ICH verified by brain imaging, iii) admission to the stroke unit within 24 hours for ICH, iv) ICH as the primary cause of admission (the primary reason for seeking hospital treatment was the ICH itself rather than another disease), and v) complete quarterly follow-up data.
The exclusion criteria were as follows: i) ICH not the primary cause of admission, ii) an underlying disease affecting hematopoiesis, such as hematological system disorders, chronic inflammatory disease, liver cirrhosis, chronic renal disease, autoimmune disorders, tumors, and other malignant diseases, iii) use of anticoagulants and antibiotics, iv) lack of critical clinical or follow-up data, and v) pre-stroke dependency (modified Rankin scale (mRS) score ≥ 3).
Data extraction
Details were collected for all the selected patients. Demographics were obtained by a questionnaire survey, which included age, gender, and previous history of disease (e.g., diabetes, obesity, hypertension, and stroke). Clinical data on GCS, ICH volume, ICH score, and blood pressure on admission were obtained and confirmed by 2 independent clinical doctors. In addition, a complete blood cell count was acquired during admission, which included hemoglobin, erythrocyte mean corpuscular volume, RDW, neutrophil, lymphocyte, neutrophil-to-lymphocyte rate (NLR), and platelet distribution width (PDW). Serum biochemical parameters, including creatinine, C-reactive protein, and low-density lipoprotein cholesterol, were also collected. Surgery included the minimally traumatic evacuation of hematomas, traditional craniotomy, and decompression craniectomy. All the major indicators were defined by reviewing the previous related studies mentioned in the Introduction. All participants were followed up for 30 days with physical and neuroimaging examinations and questionnaires regarding neurological function recovery. The mRS was used as a neuro-functional evaluation scale for measuring the degree of disability or dependency after ICH. [18] The 30-day mortality rate was also calculated. Details of patient selection and study development are illustrated in Figure 1. The enrolled patients were analyzed and divided into 2 groups according to their 30-day mortality and 30-day functional prognosis, respectively. Grouping strategies were as follows: 30-day mortality sets (survivor vs non-survivor cohort) and 30-day functional prognosis sets (favorable cohort [mRS<3] vs unfavorable cohort [mRS≥3]).
Statistical analysis
We summarized continuous variables with medians and quartile ranges and used the Kolmogorov-Smirnov test to check for a normal distribution. Data that met a normal distribution were described as the mean ± 1 standard deviation, whereas non-normally distributed data were described by the median and quartile ranges. The Student t test was used when normality (and homogeneity of variance) assumptions were satisfied; otherwise, the Mann-Whitney U test was used. Categorical variables were expressed as frequencies/percentages, and the χ2 test or Fisher exact test was used for comparing different groups. All analyses were conducted using SPSS version 24.0 software (IBM SPSS Statistics, Chicago, IL).
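As an illustrative sketch (not part of the original analysis), the workflow above can be expressed in Python with SciPy; the variable names and synthetic data below are hypothetical stand-ins for the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rdw_survivor = rng.normal(13.5, 1.3, 183)     # hypothetical survivor RDW values
rdw_nonsurvivor = rng.normal(14.7, 1.2, 52)   # hypothetical non-survivor RDW values

# Kolmogorov-Smirnov test of the standardized sample against a normal distribution
z = (rdw_survivor - rdw_survivor.mean()) / rdw_survivor.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")

# Choose the two-group comparison test according to normality
if ks_p > 0.05:
    stat, p = stats.ttest_ind(rdw_survivor, rdw_nonsurvivor)
else:
    stat, p = stats.mannwhitneyu(rdw_survivor, rdw_nonsurvivor)

# Chi-square test for a categorical variable (hypothetical 2x2 counts)
table = np.array([[40, 143], [30, 22]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)
print(p, chi_p)
```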
Prognostic performance of RDW
The receiver operating characteristic (ROC) curve was calculated to evaluate the prognostic value of RDW and compare it with GCS and the ICH score. Three methods were used to assess the comparative superiority and inferiority of the various models from different aspects. The first method, the Akaike information criterion (AIC), is an estimator of the relative quality of statistical models and provides a means for model selection. [19] The second method is the Bayesian information criterion (BIC), a useful algorithm for the evaluation of models. [20] The third method used the ROC and the area under the curve (AUC) to compare the comprehensive performance of different models. A low AIC and BIC indicate a better model fit, and a high AUC indicates an effective discrimination ability for prognostic prediction. We calculated the AIC, BIC, and AUC values of RDW, GCS, and the ICH score using the formulated statistical models and compared their prediction performance for 30-day mortality and functional prognosis in ICH patients.
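A minimal sketch of this model comparison, assuming the study data were available in a data frame with hypothetical columns rdw, gcs, and died30, might look as follows; statsmodels exposes AIC/BIC for a fitted logistic model and scikit-learn computes the AUC.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
df = pd.DataFrame({"rdw": rng.normal(13.8, 1.4, 235),
                   "gcs": rng.integers(3, 16, 235).astype(float)})
p_true = 1 / (1 + np.exp(-(df["rdw"] - 14)))       # synthetic outcome model
df["died30"] = rng.binomial(1, p_true.to_numpy())

def score_model(predictor):
    """Fit a single-predictor logistic model and return (AIC, BIC, AUC)."""
    X = sm.add_constant(df[[predictor]])
    fit = sm.Logit(df["died30"], X).fit(disp=0)
    return fit.aic, fit.bic, roc_auc_score(df["died30"], fit.predict(X))

for name in ("rdw", "gcs"):
    print(name, score_model(name))  # lower AIC/BIC, higher AUC = better model
```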
Construction of the nomogram
As a graphical and quantitative rating prediction tool, the nomogram allows for simultaneous consideration of multiple variables, including the established staging system, and possesses a higher power efficiency. First, univariate and multivariate logistic regression analyses were used to identify risk factors related to the outcomes, including 30-day mortality and functional prognosis. Variables were included in the second-step multivariable logistic regression model with backward selection (likelihood-ratio test) if they were found to be significantly associated with our outcomes in the first-step univariate logistic regression analysis. The above analyses were performed using SPSS version 24 (SPSS, Chicago, IL). A P value < .05 indicated a statistically significant difference. Second, a novel prognostic nomogram based on RDW was established for predicting the 30-day functional prognosis of ICH patients using the R software version 3.3.4 (Institute for Statistics and Mathematics, Vienna, Austria; www.r-project.org).
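The selection-then-scoring procedure can be sketched as follows; the backward-selection loop and the point-rescaling rule shown here are a common construction for nomogram points, not necessarily the exact algorithm used by the authors, and the column names and synthetic data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_select(df, outcome, candidates, alpha=0.05):
    """Drop the least significant predictor until all remaining p-values <= alpha."""
    kept = list(candidates)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        if pvals.max() <= alpha:
            return fit, kept
        kept.remove(pvals.idxmax())
    return None, []

def nomogram_points(fit, df, kept):
    """Rescale each predictor's maximal contribution so the largest spans 0-100 points."""
    spans = {v: abs(fit.params[v]) * (df[v].max() - df[v].min()) for v in kept}
    top = max(spans.values())
    return {v: round(100 * s / top, 1) for v, s in spans.items()}

# Synthetic demonstration with hypothetical columns
rng = np.random.default_rng(2)
df = pd.DataFrame({"age": rng.normal(65, 12, 235),
                   "gcs": rng.integers(3, 16, 235).astype(float),
                   "rdw": rng.normal(13.8, 1.4, 235)})
logit = -10 + 0.03 * df["age"] - 0.2 * df["gcs"] + 0.6 * df["rdw"]
df["mrs_ge3"] = rng.binomial(1, (1 / (1 + np.exp(-logit))).to_numpy())
fit, kept = backward_select(df, "mrs_ge3", ["age", "gcs", "rdw"])
print(kept, nomogram_points(fit, df, kept))
```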
Validation of nomogram
For internal validation, 1000 bootstrap re-samples were adopted to decrease the over-fit bias. The discriminative ability of the nomogram was summarized by the ROC and Harrell's concordance index (C-index). The larger the C-index and AUC, the more accurate the prediction ability of the nomogram. The calibration curve was used to analyze the agreement between the nomogram and the ideal observation; calibration plots lying along the 45-degree line indicate an excellent model.
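For a binary outcome, Harrell's C-index coincides with the ROC AUC, so an optimism-corrected bootstrap validation can be sketched as below; `df` is an assumed data frame holding the outcome and predictor columns.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def optimism_corrected_cindex(df, outcome, predictors, n_boot=1000, seed=0):
    """Optimism correction: average (C on bootstrap sample - C tested on original)."""
    rng = np.random.default_rng(seed)
    X_full = sm.add_constant(df[predictors])
    fit_full = sm.Logit(df[outcome], X_full).fit(disp=0)
    c_apparent = roc_auc_score(df[outcome], fit_full.predict(X_full))
    optimism = []
    for _ in range(n_boot):
        boot = df.iloc[rng.integers(0, len(df), len(df))].reset_index(drop=True)
        X_boot = sm.add_constant(boot[predictors])
        fit = sm.Logit(boot[outcome], X_boot).fit(disp=0)
        c_boot = roc_auc_score(boot[outcome], fit.predict(X_boot))
        c_test = roc_auc_score(df[outcome], fit.predict(X_full))
        optimism.append(c_boot - c_test)
    return c_apparent - np.mean(optimism)   # optimism-corrected C-index
```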
DCA
DCA is a useful statistical tool that is increasingly being used in cancer research to determine the clinical value of prediction models. To measure the benefit of the prediction nomogram, DCA was conducted to compare the clinical usefulness of the nomogram with that of GCS and RDW. This was done by calculating the net benefits over a range of threshold probabilities. DCA was performed with R software 3.3.4. P < .05 was regarded as statistically significant.
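The net benefit underlying a decision curve has a simple closed form, net benefit = TP/n − (FP/n)·pt/(1 − pt) at threshold probability pt; a minimal sketch:

```python
import numpy as np

def net_benefit(y_true, p_pred, thresholds):
    """Net benefit of a prediction model across threshold probabilities."""
    y_true = np.asarray(y_true)
    p_pred = np.asarray(p_pred)
    n = len(y_true)
    out = []
    for pt in thresholds:
        treat = p_pred >= pt                       # patients "treated" at this threshold
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        out.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(out)

thresholds = np.linspace(0.01, 0.60, 60)
# net_benefit(y, nomogram_probs, thresholds) would then be compared with
# net_benefit(y, gcs_probs, thresholds) and the treat-all / treat-none lines.
```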
Baseline characteristics of the study
In total, 235 subjects were included in this study (median age 64.5 years, range 20-90 years; 156/235 males). Fifty-two (22%) patients succumbed to ICH, and 143 (60.8%) patients had an unfavorable functional outcome within 30 days. The clinical, anamnestic, demographic, and laboratory data of the patient cohort were stratified according to the different clinical outcomes (Table 1). The median RDW on admission was 13.8 (12.4-15.2). Two types of clinical outcomes were analyzed: patients in the good prognosis cohorts (survivor and favorable outcome) had lower RDW levels (13.5 ± 1.3 vs 14.7 ± 1.2, P < .001; 13.0 ± 1.1 vs 14.2 ± 1.3, P < .001) than the poor prognosis cohorts (non-survivor and unfavorable outcome); similar differences were observed for age, hematoma size, WBC, neutrophil, NLR, low-density lipoprotein cholesterol, and creatinine levels. For the prognostic scores, GCS and ICH scores divided cases into groups with highly statistically significant differences in mortality and functional outcome. In addition, the 30-day non-survival rates and the occurrence of unfavorable neurological outcomes were significantly higher in patients who underwent surgery during admission.
Comparing the prognostic impact of RDW, GCS, and ICH score
We calculated the AIC, BIC, and AUC values to compare the prognostic value of these risk factors, as shown in Table 2 and Figure 2.
Constructing a nomogram for 30-day functional prognosis
In the multivariate logistic regression analysis, the ICH score, GCS, and NLR were found to be significantly associated with 30-day mortality (Table 3). Older age, lower GCS score, higher RDW or PDW, and a history of surgery were related to an unfavorable prognosis (Table 4). A more accurate prognostic nomogram was proposed by integrating the 5 aforementioned key factors (Fig. 3). RDW, GCS, and PDW were the 3 most important parameters within the nomogram. The probability of a 30-day unfavorable outcome can be estimated by locating and adding the scores on the total score scale. For example, the predicted probability of an unfavorable outcome for an ICH patient aged 75 years, with GCS=12, RDW=14.0, PDW=13, and no surgery is 72%. This is calculated as follows. First, the corresponding scores of these factors are located on the nomogram: 30 for "75-year-old", 14 for "GCS=12", 40 for "RDW=14.0", 30 for "PDW=13", and 0 for "non-surgery". The total score is therefore 114. Second, a total score of 114 is equivalent to a probability of approximately 72% for an unfavorable outcome.
Validation for nomogram
Validation of the nomogram was performed using 1000 bootstrap resamples. The C-index was 0.880 (95% CI, 0.837-0.922), higher than the C-index of GCS (0.759; 95% CI, 0.698-0.819) and RDW (0.784; 95% CI, 0.721-0.847). Furthermore, overall predictive performance was verified by means of ROC curves (Fig. 4). Altogether, this suggests that the model was reasonably accurate. The predicted and actual survival from the nomogram are represented by the x-axis and the y-axis, respectively. The calibration plot revealed an adequate fit of the nomogram in predicting the actual risk of an unfavorable outcome (Fig. 5).
DCA of nomogram and GCS
Clinical usefulness was evaluated as the last component of nomogram performance. DCA showed, across the entire range of threshold probabilities, that using nomogram-assisted decisions to assess the 30-day unfavorable outcome provides a significant net benefit in clinical decision-making, compared to the net benefit of GCS- and RDW-assisted decisions (Fig. 6).
Between the threshold probabilities of 0% and 60%, the net benefit of the nomogram is clearly better than that of the GCS score and RDW alone.
Discussion
As the least treatable subtype of stroke, ICH has been studied intensively in order to find more powerful prognostic staging systems. RDW has gained considerable attention because of its prognostic ability in several fatal diseases. [21,22] However, few studies exist concerning a risk-stratified model of ICH based on RDW. Therefore, we conducted this study using various algorithms (AIC, BIC, and AUC) in order to compare RDW with GCS and the ICH score, and we subsequently identified independent factors of prognosis. A nomogram based on RDW for functional prognosis was established and validated, and its net benefit was explored by DCA, compared with the GCS score and RDW alone.
In this consecutive series of individuals with ICH, RDW was a significant prognostic factor in the univariate logistic regression analysis for short-term mortality and was confirmed as an independent risk factor for functional prognosis. In the 30-day mortality sets, GCS and the ICH score exhibited a higher predictive accuracy. In the functional prognosis sets, age, GCS, PDW, and surgery were also demonstrated to be independent significant risk factors in the univariate and multivariate logistic regression analyses. Based on those predictive parameters, we constructed a nomogram. RDW, an integral part of automated hematology analysis available without any additional cost, has established a significant link with the adverse prognosis of life-threatening disorders including cardiovascular diseases. [23][24][25] In 2007, Felker et al first demonstrated that RDW was a significant prognostic biomarker associated with mortality in heart failure by gathering and analyzing data from the CHARM Program and the Duke Databank. [26] Several other studies on RDW have also concentrated on its prognostic prediction in cardio-cerebrovascular diseases. [27][28][29] A study followed 1796 patients with acute coronary syndromes (ACS) in a coronary care unit; patients with a high RDW had a higher risk of 6-month death from ACS. [30] Moreover, it was demonstrated that RDW correlates not only with short- and long-term mortality but also with functional prognosis in subarachnoid hemorrhage and ischemic stroke. [15,16,31] Interestingly, a study of ICH found that an increased RDW (i.e., >13.85) was a significant predictor of hematoma growth, relative to 3-month mRS, during an average follow-up of 2 years. [17] However, that study had several limitations: i) the true relation of RDW to ICH prognosis was not elucidated, and ii) the conclusions were limited by the small sample size (60 individuals).
In our study, we found that certain outcomes were consistent with former studies. Our work strongly supports that a high RDW (AUC: 0.764 and 0.784) significantly correlates with short-term outcome (30-day mortality and unfavorable prognosis) in ICH. This also serves as an important reminder to clinicians, who should pay closer attention to ICH patients with higher RDW levels when planning treatment. In addition, a nomogram incorporating RDW with acceptable discrimination (C-index 0.880) and calibration was established for predicting an unfavorable outcome, and it appears to possess more predictive power than currently utilized prognostic tools.
Despite the association between RDW and clinical outcome in ICH, the exact mechanisms are only partially understood. It remains unclear whether anisocytosis is a participant, a bystander, or both in various types of vascular disease. Anisocytosis can result in an RDW change through a variety of pathogenic mechanisms, such as inflammation, oxidative stress, and nutritional deficiency. [8] A high RDW may be a marker of inflammation. Elevated RDW values are correlated with sepsis, autoimmune disorders, and cardiovascular disease. On one hand, inflammation is frequently encountered during the development of ICH.
On the other hand, inflammatory mediators may impede red cell maturation, via reduced erythropoietin production and iron bioavailability, as well as induce myelosuppression of erythroid precursors. A recent study demonstrated that a strong relationship exists between RDW and conventional inflammatory biomarkers. Allen et al (2010) elucidated that a raised RDW was closely linked to interleukin 6 (IL-6), which strongly supports that RDW is an important marker of inflammation. [32] In addition, oxidative stress may play a role in both the process of ICH and increased RDW. Erythrocyte homeostasis and survival are affected by oxidative stress. [33] More specifically, low antioxidant defenses not only have been inversely associated with RDW but also are an independent risk factor for all-cause mortality, notably in ICH. It is well known that hematoma enlargement, hypoxia, and oxidative stress are key factors affecting the recovery of nerve function. Moreover, nutritional deficiency (e.g., iron, vitamin B12, or folate), a common marker of impaired red cell generation, [34] may be the mechanism underlying the association between RDW and functional decline after ICH. Patel et al (year) showed that a raised RDW is positively associated with reduced erythrocyte deformability. [35] Likewise, a raised RDW can inhibit endothelium-dependent nitric oxide-mediated vasodilation. [36] The above 2 factors reduce the oxygen supply to damaged brain tissue and diminish the capacity for nervous system repair and recovery. Hence, anisocytosis may be an important cause of early functional decline after acute ICH. In our study, age, GCS, RDW, PDW, and surgery were determined to be independent functional prognostic factors, and the ICH score had a significant association with 30-day mortality but not with the functional outcome. Currently, various prognostic tools have been proposed for prognosis prediction after acute ICH. [37][38][39] Age and GCS are the most consistent outcome predictors in existing forecasting models and may improve prediction efficiency through a grading score, in combination with other independent outcome predictors. Parry-Jones AR et al elucidated that a model integrating age and GCS score was capable of identifying negative outcomes. [40] The AUC was up to 0.897, and GCS was proven to have a high net benefit for threshold probabilities of 10% to 95% by DCA. Rost NS et al, by analyzing 629 consecutive patients with ICH, also reported that age and GCS were associated with functional prognosis. [39] Our study found that GCS was a robust predictive factor relative to both 30-day mortality (AUC: 0.929) and functional prognosis (AUC: 0.759), consistent with previous studies. Age alone was a comparatively weak predictor of mortality, but a significant prognostic factor participating in the functional nomogram construction. PDW is not only a marker of platelet activation but also an important predictor of impaired reperfusion and inflammatory response, which may directly contribute to adverse functional outcome in patients with ICH. PDW is regarded as a useful prognostic factor in numerous disorders, [36] especially in stroke. Our study found similar results. Surgery is regarded as a double-edged sword and its application is controversial in ICH. We found that patients who had surgery tended to have an unfavorable outcome. Surgery has certain risks and complications, and the damage to physical function and the immune system may lead to an increase in the rate of disability and mortality.
Moreover, patients with an indication for surgery may have a more serious condition. Our study suggests that surgeons should be more cautious in their understanding of surgical indications.
The nomogram in this study is innovative and has certain advantages. First, we generated and internally validated a novel nomogram that integrated a routine clinical score, laboratory variables, and treatment. The nomogram can be employed to predict early functional decline with high accuracy (C-index 0.880). Second, three different statistical methods (AIC, BIC, and AUC) were used to evaluate the performance of the new model. Third, the advantage of the nomogram over previous studies resides in its clinical value. Finally, our nomogram incorporates commonly accessible parameters that do not require any additional expense.
Our study was not without limitations. First, the clinical value of this study may be attenuated by its retrospective nature. Given the intrinsic limitations, the effects of potential confounding on RDW could not be assessed; therefore, the association between RDW and ICH must be confirmed in further studies. Second, RDW is an acute-phase reactant, which might change significantly before ICH rather than after; however, it is difficult to obtain complete pre-event RDW data because of the unpredictability of ICH. Third, all enrolled individuals came from a single medical center. The 30-day mortality rate of the present study was 22.4%, which is similar to other Asian populations (15%-25%) [37,41] and lower than western populations (31.9%, 45%). [37,[42][43][44] The reason might be attributed to racial and socioeconomic differences, suggesting multi-nation and multicenter research to eliminate the potential bias. [45][46][47][48] Fourth, data on therapeutic interventions were unavailable. As we know, medical treatment plays a crucial role in ICH patients, notably patients without surgery. In this study, 34 participants in total were excluded as drop-outs during follow-up; they refused therapy or withdrew during the follow-up period because of economic reasons or other complications. As these patients accounted for a very small part of the candidates and most of their demographic characteristics matched, the influence of the exclusion on the result was minute and can nearly be ignored. Other related variables, such as body mass index, diabetes or other dietary intakes, and hypertension, should also be collected and adjusted to verify the result in a larger sample size. Fifth, although we successfully constructed a new nomogram to predict the probability of an unfavorable outcome after ICH, we only performed internal validation; external validation is lacking, and another validation cohort should be collected in the future.
Conclusions
High RDW values are associated with poor clinical outcome in patients with ICH. The established nomogram incorporating RDW should be considered for 30-day functional prognosis.
| 2020-12-17T05:07:51.491Z | 2020-12-11T00:00:00.000 | {
"year": 2020,
"sha1": "5419ad13b0f1d691d7cbd0838e0b10952cdc003f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000023557",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5419ad13b0f1d691d7cbd0838e0b10952cdc003f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195719974 | pes2o/s2orc | v3-fos-license | Assessing the Multiple Impacts of Extreme Hurricanes in Southern New England, USA
Abstract: The southern New England coast of the United States is particularly vulnerable to land-falling hurricanes because of its east-west orientation. The impact of two major hurricanes on the city of Providence (Rhode Island, USA) during the middle decades of the 20th century spurred the construction of the Fox Point Hurricane Barrier (FPHB) to protect the city from storm surge flooding. Although the Rhode Island / Narragansett Bay area has not experienced a major hurricane for several decades, increased coastal development along with potentially increased hurricane activity associated with climate change motivates an assessment of the impacts of a major hurricane on the region. The ocean / estuary response to an extreme hurricane is simulated using a high-resolution implementation of the ADvanced CIRCulation (ADCIRC) model coupled to the Precipitation-Runoff Modeling System (PRMS). The storm surge response in ADCIRC is first verified with a simulation of a historical hurricane that made landfall in southern New England. The storm surge and the hydrological models are then forced with winds and rainfall from a hypothetical hurricane dubbed "Rhody", which has many of the characteristics of historical storms that have impacted the region. Rhody makes landfall just west of Narragansett Bay, and after passing north of the Bay, executes a loop to the east and the south before making a second landfall. Results are presented for three versions of Rhody, varying in the maximum wind speed at landfall. The storm surge resulting from the strongest Rhody version (weak Saffir–Simpson category five) during the first landfall exceeds 7 m in height in Providence at the north end of the Bay. This exceeds the height of the FPHB, resulting in flooding in Providence. A simulation including river inflow computed from the runoff model indicates that if the Barrier remains closed and its pumps fail (for example, because of a power outage or equipment failure), severe flooding occurs north of the FPHB due to impoundment of the river inflow. These results show that northern Narragansett Bay could be particularly vulnerable to both storm surge and rainfall-driven flooding, especially if the FPHB suffers a power outage. They also demonstrate that, for wind-driven storm surge alone under present sea level conditions, the FPHB will protect Providence for hurricanes less intense than category five.
Introduction
Although the southern New England coast of the United States has experienced a number of extreme hurricanes since the arrival of European settlers, there have been none in recent years. The extensive damage to Providence from the two major hurricanes that struck the region during the middle decades of the 20th century spurred the construction of the Fox Point Hurricane Barrier (FPHB), which was completed in 1966. The FPHB is made up of several elements (Figure 2): three movable Tainter gates spanning the upper Providence River (the northern portion of Narragansett Bay), dikes to the east and the west of the river, vehicular gates allowing access through the eastern dike, and a pumping station designed to discharge runoff from upstream of the Barrier when the Tainter gates are closed (see Figures 1 and 3 for the location of the Barrier). The design height of the FPHB is 25 feet (7.62 m) above the (now superseded) NGVD29 vertical datum [1]. Using the National Oceanic and Atmospheric Administration's (NOAA) VDatum conversion tool (https://vdatum.noaa.gov), the elevation at the top of the barrier is 7.37 m above NAVD88. Since its construction, the FPHB has not faced a severe test, although it has been closed for several weaker tropical cyclones and winter storms. None of these events were severe enough to produce impacts that came close to compromising the Barrier. The purpose of this paper is to present results of storm surge and rainfall runoff modeling forced by an extreme yet plausible tropical cyclone strike in the southern New England region in order to raise awareness in the Rhode Island region and to evaluate the robustness of the FPHB. This hurricane, dubbed Rhody, is a hypothetical strong category three storm with characteristics similar to those of a number of historical hurricanes that have impacted the southern New England region. In addition to the strong category three Rhody, which is slightly stronger than any of the major hurricanes to strike the New England region during the last 100 years, we also present storm surge simulations for category four and five versions of Rhody. The purpose of this is to demonstrate, under present sea level, the limits of the FPHB in protecting the city of Providence.
There have been a number of related studies using coupled storm surge and wave models to examine the interaction of storm-driven surge with both natural and manmade barriers. The robustness of the Dutch Maeslant barrier to possible damage sustained during its operation was investigated by statistically estimating the recurrence time of instances in which water levels requiring the barrier's closure occurred twice within a certain time period (e.g., one month) [2]. For the three-barrier system protecting Venice, Italy, another study focused on determining the effect of various operation strategies (e.g., closure of only two barriers) on water levels in Venice Lagoon [3]. Storm surge modeling was also utilized in assessing the potential performance of proposed storm surge barriers in a Danish fjord [4] and at the mouth of the Mississippi River in the southern US [5]. Finally, several studies used a modeling approach to retrospectively evaluate the effects of barrier islands on storm surge elevation and water quality in the shallow back bays behind the barrier islands [6,7].
After discussing the methods used for generating the storm surge, the hurricane wind fields, and the river inflow, we discuss the synthetic storm, Hurricane Rhody. We then present the results of simulations of the storm surge resulting from the passage of an historical hurricane, Carol (1954), in order to demonstrate the skill of the modeling system. Subsequently, the storm surge and the river runoff impacts from Hurricane Rhody are presented.
Storm Surge Model
Storm surge impacts were computed using the ADvanced CIRCulation (ADCIRC) model (version 52) coupled with the Simulating Waves Nearshore (SWAN) model. ADCIRC is a finite element model that, in the two-dimensional mode employed here, solves for water level using the generalized wave continuity equation (GWCE) and for depth-averaged current using the shallow water momentum equations [8]. SWAN is a third-generation, phase-averaged wave model for simulating wind waves in coastal and open ocean regions [9]. ADCIRC and SWAN are coupled by passing the wave radiation stress computed from the SWAN wavefield to ADCIRC and passing the water levels, the currents, and the frictional parameters from ADCIRC to SWAN [10]. Both models are run on the same unstructured mesh using triangular elements.
The model mesh, consisting of 1,577,981 elements and 803,549 nodes, covers the northwestern Atlantic, the Caribbean Sea, and the Gulf of Mexico with an open boundary at longitude 60°W (Figure 1a). The mesh element size is coarse over the open ocean (50-100 km) and becomes finer over the continental shelf and near the coast. The element size in the Narragansett Bay region is less than 1 km and significantly less near the coast there, where elements are of the order of 30 m in size (Figure 1b). The mesh in the Rhode Island region extends upland to approximately the 10 m elevation contour to allow for overland flooding in the model. The model mesh includes the FPHB as a fixed weir (Figure 3) of height 7 m above mean sea level (MSL) using the results of the 2011 Rhode Island statewide LIDAR survey of the regional topography (http://www.rigis.org/pages/2011-statewide-lidar) rather than the design height referred to above.
The topography/bathymetry in the Rhode Island region was obtained from the Rhode Island Geographic Information System (RIGIS). The RIGIS statewide digital elevation model was derived from recent LIDAR surveys of land and historical bathymetric surveys, all converted to NAVD88 reference using NOAA's VDatum tool. Conversion to MSL reference was done using a spatially uniform value, taken to be the NAVD88-MSL difference of 0.093 m at the National Ocean Service (NOS) tide gauge at Newport, RI (NOS station 8452660). The bathymetry/topography for the southern New England region outside the RIGIS area was obtained from NOAA's 30 m resolution coastal DEM. Bathymetry in the remainder of the mesh was interpolated from historical soundings obtained from the National Ocean Service database (C. Fulcher, personal communication). Note that in the Narragansett Bay region, MSL is very close to NAVD88 (NAVD88-MSL = 0.07 m at Providence and 0.09 m at Newport), thus the difference in elevation between these datums is insignificant for this work.
ADCIRC was run in fully non-linear, two-dimensional mode with element wetting and drying enabled [11]. The GWCE solution was implemented using a spatially variable weighting parameter (τ₀), where τ₀ is specified as a function of the mesh element size and the local water depth [12]. Bottom friction was parameterized using a quadratic formulation where the drag coefficient (C_f) is a function of water depth via the Manning form, e.g., [13,14]:

$$C_f = \frac{g n^2}{(H + \eta)^{1/3}} \qquad (1)$$

where g is the gravitational acceleration, n is Manning's roughness, H is the undisturbed water depth, and η is the water surface elevation. A constant Manning roughness of 0.03 was specified, resulting in a depth-dependent quadratic drag coefficient, as given by Equation (1). To avoid the extremely low values in deep water that would be produced by Equation (1) using constant n, a minimum drag coefficient of 3 × 10⁻³ was specified. This minimum value is reached at a water depth of approximately 25 m. The horizontal eddy viscosity was spatially uniform and set to 2 m²/s. The ADCIRC model was forced with winds at 10 m height and surface atmospheric pressure. The wind stress was computed from the wind speed at 10 m height using the quadratic drag law of Garratt [15]. The drag coefficient in the Garratt model increases linearly with 10 m wind speed. Because recent research suggests that the drag coefficient approaches a constant value or even decreases at higher wind speeds [16], we applied an upper limit of 2.8 × 10⁻³ to the drag coefficient, corresponding to wind speeds above 31 m/s.
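For illustration, both drag laws described above can be written compactly; the Garratt (1977) coefficient Cd = (0.75 + 0.067·U10) × 10⁻³ is the standard linear-in-wind-speed form, and the 2.8 × 10⁻³ cap and 3 × 10⁻³ bottom-drag floor follow the text (the cap is reached near 31 m/s and the floor near 25 m depth, consistent with the values quoted above).

```python
import numpy as np

def wind_drag(u10):
    """Garratt-type wind drag coefficient with the high-wind cap used above."""
    cd = (0.75 + 0.067 * np.asarray(u10)) * 1e-3
    return np.minimum(cd, 2.8e-3)              # cap reached near u10 ~ 31 m/s

def bottom_drag(H, eta, n=0.03, g=9.81, cf_min=3e-3):
    """Manning-form quadratic bottom drag, Equation (1), with a deep-water floor."""
    cf = g * n**2 / (np.asarray(H) + np.asarray(eta)) ** (1.0 / 3.0)
    return np.maximum(cf, cf_min)              # floor reached near 25 m depth

print(wind_drag(31.0), bottom_drag(25.0, 0.0))
```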
At the eastern open boundary, located on the 60°W meridian, the model was forced with 8 tidal constituents (M2, S2, N2, K2, K1, O1, P1, Q1) interpolated from the TPXO7.2 global inverse model solution [17,18] (http://volkov.oce.orst.edu/tides/global.html). For all simulations, the tides were spun up by performing a run of 2-3 weeks in duration following a 2-day ramp-up period. An ADCIRC hotstart file containing the full model state, written at the end of this spinup period, was used as the initial condition for the hurricane-forced simulations presented in this paper.
The FPHB is represented in ADCIRC as a natural internal barrier boundary consisting of a thin strip with paired nodes on the front and the back sides of the strip. If the model water surface elevation is higher than the barrier height and is not equal on both sides, water flow across the barrier occurs between paired nodes. The flow is computed using the formulae for flow across a broad crested weir [19], where the computation assumes either subcritical or supercritical flow depending on the relative heights of the water on either side of the barrier.
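A schematic of this node-pair exchange, using textbook broad-crested weir relations rather than ADCIRC's exact coefficients, is sketched below; the supercritical/subcritical switch follows the usual two-thirds-head submergence criterion, and the discharge coefficient is illustrative.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def weir_flux(eta_front, eta_back, z_crest, cd=1.0):
    """Volume flux per unit width across the barrier (positive front -> back)."""
    hi, lo = max(eta_front, eta_back), min(eta_front, eta_back)
    sign = 1.0 if eta_front >= eta_back else -1.0
    h_up = hi - z_crest                       # upstream head above the crest
    if h_up <= 0.0:
        return 0.0                            # water below the crest: no flow
    h_dn = lo - z_crest
    if h_dn < 2.0 * h_up / 3.0:               # low tailwater: supercritical (free) flow
        q = cd * (2.0 / 3.0) ** 1.5 * np.sqrt(G) * h_up ** 1.5
    else:                                     # high tailwater: subcritical (submerged) flow
        q = cd * h_dn * np.sqrt(2.0 * G * (h_up - h_dn))
    return sign * q

# Example: surge at 7.5 m in front of a 7 m crest, 1 m of water behind it
print(weir_flux(eta_front=7.5, eta_back=1.0, z_crest=7.0))
```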
Inflows from the two rivers, Woonasquatucket and Moshassuck, flowing into upper Narragansett Bay north of the FPHB (see Figure 3) were implemented as normal flux boundary conditions at nodes along the upland boundary of the model mesh. The time varying normal flux was applied as an essential boundary condition. For simplicity, the combined volume flux from the rivers was applied as a normal flux boundary condition distributed over three nodes located where the Woonasquatucket River entered the model mesh (see Figure 3).
Wave Model
The SWAN model is a third generation, phase-averaged wave model that is based on shallow water wave physics. It solves the wave action equation for the wave frequency-direction spectrum using an implicit numerical scheme [9]. SWAN operates on the same unstructured mesh as is utilized by ADCIRC, which facilitates the coupling between the two models and eliminates the inter-mesh interpolation errors that would arise from the use of separate meshes [10,20].
For our wave simulations, the frequency space was discretized into 40 bins ranging from 0.03 to 1.42 Hz with a logarithmic increment factor of 1.1, and direction space was discretized into 36 bins [21]. A first order upwind scheme in geographic space, the default SWAN setting, was used as the propagation scheme. For the wind input term and the whitecapping term, the Rogers et al. [22] improvement to the Komen [23] formulation was used. In the coupled ADCIRC-SWAN system utilized here, wind stress information was passed to SWAN from ADCIRC. The parameterization used for bottom friction was based on the eddy-viscosity model of Madsen et al. [24]. With the spatial Manning's roughness coefficient and the water depth values passed from the ADCIRC model, this option enabled the computation of spatially and temporally varying bottom friction in SWAN [25]. For depth-induced breaking in the surf zone, the Battjes and Janssen [26] parameterization with default settings was chosen.
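The frequency discretization can be reproduced directly; note that 40 increments of factor 1.1 from 0.03 Hz reach roughly 1.36 Hz, close to the stated 1.42 Hz upper limit (the exact bin-edge versus bin-center convention in SWAN may account for the small difference).

```python
import numpy as np

f0, factor, nbins = 0.03, 1.1, 40
freq_edges = f0 * factor ** np.arange(nbins + 1)   # logarithmically spaced frequencies
print(round(freq_edges[0], 3), round(freq_edges[-1], 3))  # 0.03 ... ~1.358 Hz

# 36 directional bins of 10 degrees each
dir_bins = np.arange(0.0, 360.0, 10.0)
```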
The coupling between SWAN and ADCIRC enabled a varying water level as well as current in the wave model, which could modify the behavior of waves in shallow water by altering the bottom friction, the depth-induced breaking, and the wave-current interaction. The wave impact on the storm surge was produced by the wave-induced force, the spatial gradient of the radiation stress, which was calculated from the SWAN-simulated wave spectrum. The ADCIRC/SWAN coupling was achieved by exchanging the above quantities every 600 s. It should be noted that the wave effects computed by the coupled SWAN/ADCIRC models were limited to wave setup arising from the radiation stress. Wave runup and wave overtopping of the FPHB, both of which could contribute to local flooding, were not simulated in the phase-averaged SWAN model.
River Runoff Model
The hydrological model, the Precipitation-Runoff Modeling System (PRMS), is a deterministic, distributed-parameter, physical process-based modeling system developed by the United States Geological Survey (USGS) to evaluate the response of various combinations of climate and land use on stream flow and general watershed hydrology [27]. PRMS's modular design allows users to selectively couple the modules in the module library or even to establish a self-design model. It has been widely applied in the research of rainfall-runoff modeling and has been demonstrated to be a reliable hydrological model. The model simulates the hydrologic processes of a watershed using a series of reservoirs that represent volumes of finite or infinite capacity. The PRMS model was applied to the simulation of rainfall runoff in the Taunton River Basin in Massachusetts and Rhode Island by Teng et al. [28] and showed good agreement with observations of stream flow during a storm event in 2010. Teng et al. [29] also investigated interactions between rainfall runoff and storm surge in Rhode Island's Woonasquatucket River.
Hurricane Wind and Rainfall Models
When hurricanes make landfall, the increased surface roughness of the land relative to that of the ocean produces changes in the spatial structure of the near-surface winds that are not captured by simple parametric models of hurricane winds. In order to account for these effects, the wind forcing fields for the storm surge simulations presented here were derived from a model of the hurricane boundary layer (HBL). The governing equations for the mean wind components were similar to those described in Gao and Ginis [30] but modified for a Cartesian coordinate system. The HBL model incorporated high vertical (30 m) and horizontal (1 km) resolutions combined with high-resolution information about topography and land use. At the upper boundary, the mean wind was assumed to be under the gradient wind balance. The spatial distribution of the gradient wind, Vg, was prescribed, and the pressure gradient force derived from the gradient balance equation was assumed vertically uniform. The turbulent viscosity was parameterized using a first-order scheme, e.g., [31], which was a function of the gradient Richardson number, the local strain rate, and a mixing length computed using the Blackadar form [32]. The Richardson number was computed using an imposed vertical profile of temperature that was constant in time [30]. At the lower boundary of the atmosphere, Monin-Obukhov similarity theory was used, where the roughness length over the ocean was parameterized as a function of wind speed [33] and over land as a function of land cover (https://landcover.usgs.gov/global_climatology.php).
The spatial distribution of the gradient wind, Vg, was derived from a parametric (vortex) model of hurricane winds driven by hurricane track and intensity parameters from the Tropical Cyclone Vitals Database (TCVitals) [34]. The parametric wind distribution was based on interpolation of the radial wind profiles derived from the TCVitals, as in the NOAA operational Hurricane Weather Research and Forecasting (HWRF) model [35].
Estimates of hurricane rainfall were derived from a rainfall climatology and persistence (R-CLIPER) model by Tuleya et al. [36]. This statistical, parametric model based on satellite-derived tropical cyclone rainfall observations assumed a symmetric rainfall distribution along the storm track. The rainfall distribution was parameterized as a function of storm intensity and size [36].
Hurricane Rhody
Hurricane Rhody is a hypothetical yet plausible hurricane scenario created to simulate the effects of a high-impact storm on the Rhode Island coast in order to provide state and local agencies with better understanding of the hazards associated with extreme hurricanes. The characteristics of the hurricane were not arbitrarily chosen but were based on those of several historical storms that have impacted the region. This ensures that, although it is artificial, Rhody is a potentially realizable storm for the region.
The storm forms near the Bahamas and propagates northward close to the US east coast. Similar to Hurricane Carol (1954), it tracks close to the coast, but it travels much more rapidly than Carol, moving at a rapid forward speed much like the 1938 New England Hurricane (Figure 4). The storm makes its initial landfall as a strong category 3 hurricane, with the eye crossing eastern Long Island and then eastern Connecticut ( Figure 4) to the west of Rhode Island, a track that results in the most severe impacts in the Rhode Island region. After the first landfall, the hurricane slows and executes a loop, similar to the behavior of Hurricane Esther (1961), with the eye passing over Boston and then east and south of Cape Cod ( Figure 4). It finally makes a second landfall, again to the west of Rhode Island, this time as a weaker, slower moving category 2 storm producing very heavy rainfall. Geosciences 2019, 9, x FOR PEER REVIEW 8 of 22 Maps of wind speed magnitude at intervals of 6 h during a time period bracketing the first landfall are shown in Figure 5. The storm approaches New England with a forward speed of approximately 20 m/s (39 knots) with maximum wind speed of approximately 57 m/s at landfall (Figure 5a,b). At landfall (Figure 5b), the effect of land is apparent in the large reduction in wind speed over the northern half of the storm. Six hours after landfall (Figure 5c), the storm slows considerably, and over the next six hours (Figure 5d), it starts to move slowly to the south, the beginning of the loop that will eventually bring a second landfall.
Simulation of a Historical Hurricane
We first present simulations of a historical hurricane, Hurricane Carol, which impacted the Rhode Island region. Predicted water surface elevations are compared with the limited observations that were available from this storm in order to demonstrate the general performance of the model in simulating hurricane-forced storm surge.
Hurricane Carol struck the southern New England coast in 1954. On 31 August, the hurricane reached category three status just prior to making landfall at Point O'Woods, Long Island, NY. The storm quickly crossed the island and made a second landfall at Old Saybrook, CT. Sustained winds between 80-100 mph were experienced over eastern Connecticut, all of Rhode Island, and nearly all In addition to the base Rhody case, we also created stronger versions of Rhody by scaling up the wind speeds such that the storm is of category 4 or 5 strength at landfall. The storm track and the forward speed are unchanged in these scenarios.
Simulation of a Historical Hurricane
We first present simulations of a historical hurricane, Hurricane Carol, which impacted the Rhode Island region. Predicted water surface elevations are compared with the limited observations that were available from this storm in order to demonstrate the general performance of the model in simulating hurricane-forced storm surge.
Hurricane Carol struck the southern New England coast in 1954. On 31 August, the hurricane reached category three status just prior to making landfall at Point O'Woods, Long Island, NY. The storm quickly crossed the island and made a second landfall at Old Saybrook, CT. Sustained winds between 80-100 mph were experienced over eastern Connecticut, all of Rhode Island, and nearly all of eastern Massachusetts. A gust of 135 mph was recorded at Block Island, RI-the highest ever recorded at that location [37]. In addition to strong winds, Hurricane Carol also produced a very high storm surge, the highest of which was experienced in Narragansett Bay, submerging a quarter of the downtown Providence, RI area. Because Carol struck the region prior to the construction of the FPHB, we present a simulation on a model mesh without the barrier present. Limited observations were available for this time period-measurements of surface elevation at the NOS Newport tide gauge (obtained from NOAA/NOS) as well as measurements at South Street Station in Providence, close to the location of the present-day Providence tide gauge. The latter observations covering the storm surge period only were obtained from a NOAA report [37]. The water surface elevations at Providence, referenced to Mean Lower Low Water (MLLW) in the report [37], were converted to be relative to MSL (1960-1978 epoch) using datum information from the NOAA/NOS Providence station (8454000).
The Hurricane Carol storm surge simulated by ADCIRC/SWAN exhibits a high degree of spatial variability in Narragansett Bay. Maximum water surface elevation (relative to MSL) during the storm is approximately 3 m along the coast south of Narragansett Bay, decreasing to about 2.5 m in the lower portion of the Bay near Newport and then increasing northward to around 5 m in northern Narragansett Bay ( Figure 6). This pattern is broadly consistent with a map of high water marks from Hurricane Carol presented in a US Weather Service report [38]. There is extensive flooding of land areas, especially in the northern Bay, but also in other low-lying areas around the region. Because the FPHB is not present, there is extensive flooding in Providence, as recorded during the actual storm. Comparison of the model and the observed time series of water surface elevation at Providence and Newport (denoted by the green and magenta circles in Figure 6) show that the model captures the maximum surge water level at both sites quite well (Figure 7). However, the model storm surge duration is much shorter than the observed surge duration. It is possible that the shorter surge duration in the model could result in underestimation of the surge height in small embayments and valleys with narrow connections to the main Bay.
Hurricane Rhody Simulations
Because our aim is to raise local awareness of the potential catastrophic impacts of a major hurricane strike in southern New England, we selected the time period in which to embed Rhody such that its impacts were maximized. A spring tide period occurring in September 2016 was selected as the simulation time period. The start time of the Rhody simulation was selected such that the storm surge at Providence (at the northern end of Narragansett Bay) following the first landfall would occur at the time of astronomical high tide. With these assumptions, the Rhody storm surge simulation was performed (after a 15 day spin-up with no wind forcing) over a three day period commencing at 01:15 UTC on 18 September 2016.
Effect of Hurricane Barrier
The effects of a Rhody category three simulation on a mesh without the FPHB and with no river inflow are presented first in order to show the regional storm surge response to a hurricane of this magnitude. Figure 8, depicting the maximum water surface elevation (which occurs during the first landfall), shows the spatial variability associated with storm surge response in Narragansett Bay. Storm surge elevations range from minimum elevations above MSL of approximately 2-3 m in the lower Bay around Newport to roughly 6 m in the far northern reaches of the Bay (northeast of Providence). The surge in Providence is approximately 5.5 m, and extensive overland flooding occurs there and elsewhere around the Bay. Time series of surface elevation (Figure 9) indicate that the storm surge during the first landfall (on 18 September) is significantly higher than during the second landfall (on 20 September). After the first landfall, on 19 September, when the storm center is located northeast of Narragansett Bay, water surface elevations in the Bay drop significantly below the normal low tide level in response to strong winds from the north (Figure 9). The shorter duration storm surge in the model could result from deficiencies in the specified wind forcing, where it is likely that the modeled wind does not adequately capture the far-field (far from the center) wind arising from larger scale meteorological processes. It is also possible that the poorly modeled forerunner surge [39] is due to unrealistically large bottom friction on the adjacent continental shelf that reduces the along-shelf wind driven velocity and the associated geostrophic setup. This possibility was tested by rerunning the Carol simulation but with the minimum drag coefficient set to 1 × 10 −3 (compared to the value of 3 × 10 −3 used for the base simulation). The results (not shown) indicate that this reduction leads to an insignificant increase in the surge height prior to the arrival of the main surge and no increase in the duration of the storm surge.
Hurricane Rhody Simulations
Because our aim is to raise local awareness of the potential catastrophic impacts of a major hurricane strike in southern New England, we selected the time period in which to embed Rhody such that its impacts were maximized. A spring tide period occurring in September 2016 was selected as the simulation time period. The start time of the Rhody simulation was selected such that the storm surge at Providence (at the northern end of Narragansett Bay) following the first landfall would occur at the time of astronomical high tide. With these assumptions, the Rhody storm surge simulation was performed (after a 15 day spin-up with no wind forcing) over a three day period commencing at 01:15 UTC on 18 September 2016.
Effect of Hurricane Barrier
The effects of a Rhody category three simulation on a mesh without the FPHB and with no river inflow are presented first in order to show the regional storm surge response to a hurricane of this magnitude. Figure 8, depicting the maximum water surface elevation (which occurs during the first landfall), shows the spatial variability associated with storm surge response in Narragansett Bay. Storm surge elevations range from minimum elevations above MSL of approximately 2-3 m in the lower Bay around Newport to roughly 6 m in the far northern reaches of the Bay (northeast of Providence). The surge in Providence is approximately 5.5 m, and extensive overland flooding occurs there and elsewhere around the Bay. Time series of surface elevation (Figure 9) indicate that the storm surge during the first landfall (on 18 September) is significantly higher than during the second landfall (on 20 September). After the first landfall, on 19 September, when the storm center is located northeast of Narragansett Bay, water surface elevations in the Bay drop significantly below the normal low tide level in response to strong winds from the north (Figure 9). The ADCIRC/SWAN simulation with Rhody category three forcing on a mesh that includes the FPHB shows that flooding in downtown Providence (north of the Barrier) is prevented by the Barrier (Figure 10). From Figure 9, it is apparent that the storm surge in Providence during the first landfall of Rhody reaches 5.5 m above MSL, significantly below the 7 m height of the FPHB. This indicates that the FPHB will be effective in protecting downtown Providence under present sea level conditions and a strong category three hurricane. Not surprisingly, due to the fact that the area of the flood plain north of the Barrier is small, maximum storm surge elevations south of the Barrier are essentially unaffected by its presence (Figure 10). The ADCIRC/SWAN simulation with Rhody category three forcing on a mesh that includes the FPHB shows that flooding in downtown Providence (north of the Barrier) is prevented by the Barrier (Figure 10). From Figure 9, it is apparent that the storm surge in Providence during the first landfall of Rhody reaches 5.5 m above MSL, significantly below the 7 m height of the FPHB. This indicates that the FPHB will be effective in protecting downtown Providence under present sea level conditions and a strong category three hurricane. Not surprisingly, due to the fact that the area of the flood plain north of the Barrier is small, maximum storm surge elevations south of the Barrier are essentially unaffected by its presence (Figure 10).
How Robust is the Fox Point Hurricane Barrier?
The results presented above show that the FPHB is effective in protecting the area north of the Barrier from storm surge during a strong category three hurricane. A key question for the city of Providence is what magnitude of hurricane could potentially overflow the FPHB (neglecting, as mentioned above, wave effects)? To answer this question, a series of storm surge model simulations were performed with forcing from scaled-up versions of the Rhody wind field. The track, the forward speed, and the size of the storm remained constant, with only the wind speeds increased. The strength of the hurricane was quantified by the maximum wind speed at the time of its mainland landfall. In this way, simulations of category four and five versions of Rhody were performed in addition to the base category three version. The results show that the maximum surge increases as an approximately linear function of maximum wind speed ( Figure 11). The maximum storm surge at Providence reaches the 7 m height of the FPHB under forcing from a weak category five hurricane. The results presented above show that the FPHB is effective in protecting the area north of the Barrier from storm surge during a strong category three hurricane. A key question for the city of Providence is what magnitude of hurricane could potentially overflow the FPHB (neglecting, as mentioned above, wave effects)? To answer this question, a series of storm surge model simulations were performed with forcing from scaled-up versions of the Rhody wind field. The track, the forward speed, and the size of the storm remained constant, with only the wind speeds increased. The strength of the hurricane was quantified by the maximum wind speed at the time of its mainland landfall. In this way, simulations of category four and five versions of Rhody were performed in addition to the base category three version. The results show that the maximum surge increases as an approximately linear function of maximum wind speed (Figure 11). The maximum storm surge at Providence reaches the 7 m height of the FPHB under forcing from a weak category five hurricane.
Effect of River Inflow
The large hurricane-forced storm surge during an extreme hurricane such as Rhody will likely be accompanied by massive regional power outages. Unless emergency power to the FPHB is available, the pumps that are designed to discharge river runoff from north of the Barrier will not be operable. This suggests that the area north of the FPHB may be susceptible to riverine flooding depending on the amount of rainfall during the hurricane. This possibility motivated the simulation of Hurricane Rhody impacts with the FPHB closed and with river inflows from the two rivers entering Narragansett Bay north of the Barrier.
Effect of River Inflow
The large hurricane-forced storm surge during an extreme hurricane such as Rhody will likely be accompanied by massive regional power outages. Unless emergency power to the FPHB is available, the pumps that are designed to discharge river runoff from north of the Barrier will not be operable. This suggests that the area north of the FPHB may be susceptible to riverine flooding depending on the amount of rainfall during the hurricane. This possibility motivated the simulation of Hurricane Rhody impacts with the FPHB closed and with river inflows from the two rivers entering Narragansett Bay north of the Barrier.
Parameterized rainfall from Rhody using the R-CLIPER model produced an accumulated 4.2 inches of rain on 18 September, 5.4 inches on 19 September, and 9.8 inches on 20 September. The combined discharge of the Woonasquatucket and the Moshassuck rivers, simulated by the PRMS model, is shown in Figure 12 (top). This discharge was imposed as a normal flow boundary condition at three ADCIRC mesh boundary nodes in the Woonasquatucket valley. Water surface elevation at a location just north of the FPHB (red dot in Figure 13) increases slowly until the first landfall of Rhody, when the elevation fluctuates slightly due to the effect of the hurricane wind stress on the water within the impoundment north of the FPHB (Figure 12, bottom). After this time, the elevation north of the FPHB slowly increases as the area fills up with river discharge, and eventually (late on 20 September) water begins spilling over the Barrier, after which time the elevation remains constant at just over 7 m.
inches of rain on 18 September, 5.4 inches on 19 September, and 9.8 inches on 20 September. The combined discharge of the Woonasquatucket and the Moshassuck rivers, simulated by the PRMS model, is shown in Figure 12 (top). This discharge was imposed as a normal flow boundary condition at three ADCIRC mesh boundary nodes in the Woonasquatucket valley. Water surface elevation at a location just north of the FPHB (red dot in Figure 13) increases slowly until the first landfall of Rhody, when the elevation fluctuates slightly due to the effect of the hurricane wind stress on the water within the impoundment north of the FPHB (Figure 12, bottom). After this time, the elevation north of the FPHB slowly increases as the area fills up with river discharge, and eventually (late on 20 September) water begins spilling over the Barrier, after which time the elevation remains constant at just over 7 m.
The spatial extent of the flooding resulting from Rhody rainfall at the end of the ADCIRC/SWAN simulation is shown in Figure 13. The elevation is approximately uniform everywhere north of the FPHB, and extensive areas in Providence are flooded. Although at this time storm surge flooding south of the barrier is not present, the effects of water flowing south across the barrier are seen in Figure 13 as the intermediate heights between the barrier and the shoreline south of the barrier.
Utilizing Hurricane Rhody Modeling for Improving Hurricane Preparedness
The catastrophic effects of rainfall from Hurricane Harvey in 2017 serve as a stark reminder that hurricanes may do damage through means that are not anticipated by the public or emergency managers and that may be very different from previously experienced storms. Through the use of high-resolution modeling, we can anticipate the possibility and the consequences of low probability but potentially catastrophic events. Our simulations of a hypothetical Hurricane Rhody illustrate the importance of considering the combined coastal and inland flooding. Southern New England is especially vulnerable to inland flooding, since the rivers are relatively short, and it is more likely that high river discharge resulting from hurricane rain will coincide with the storm surge.
The Hurricane Rhody scenario was used by Rhode Island Emergency Management Agency (RIEMA) and the FEMA Emergency Management Institute (EMI) to conduct an Integrated Emergency Management Course (IEMC) as part of a statewide preparedness exercise on June 19-22, 2017. The four-day exercise focused on the response to Hurricane Rhody while identifying key actions taken before, during, and after a hurricane. Outcomes from the course provided federal, state, and local decision makers with an opportunity to enhance overall preparedness while actively testing modeling outputs during various parts of the course. Figure 14 illustrates a Figure 13. Water surface elevation in the Providence area at the end of the Rhody simulation including river discharge and the presence of the FPHB. The coastline is represented by the black line, and colored areas shoreward of the coastline represent areas experiencing overland flooding. The green dot is the location north of the FPHB at which a time series of surface elevation is shown in Figure 12 (bottom).
The spatial extent of the flooding resulting from Rhody rainfall at the end of the ADCIRC/SWAN simulation is shown in Figure 13. The elevation is approximately uniform everywhere north of the FPHB, and extensive areas in Providence are flooded. Although at this time storm surge flooding south of the barrier is not present, the effects of water flowing south across the barrier are seen in Figure 13 as the intermediate heights between the barrier and the shoreline south of the barrier.
Utilizing Hurricane Rhody Modeling for Improving Hurricane Preparedness
The catastrophic effects of rainfall from Hurricane Harvey in 2017 serve as a stark reminder that hurricanes may do damage through means that are not anticipated by the public or emergency managers and that may be very different from previously experienced storms. Through the use of high-resolution modeling, we can anticipate the possibility and the consequences of low probability but potentially catastrophic events. Our simulations of a hypothetical Hurricane Rhody illustrate the importance of considering the combined coastal and inland flooding. Southern New England is especially vulnerable to inland flooding, since the rivers are relatively short, and it is more likely that high river discharge resulting from hurricane rain will coincide with the storm surge.
The Hurricane Rhody scenario was used by Rhode Island Emergency Management Agency (RIEMA) and the FEMA Emergency Management Institute (EMI) to conduct an Integrated Emergency Management Course (IEMC) as part of a statewide preparedness exercise on June [19][20][21][22]2017. The four-day exercise focused on the response to Hurricane Rhody while identifying key actions taken before, during, and after a hurricane. Outcomes from the course provided federal, state, and local decision makers with an opportunity to enhance overall preparedness while actively testing modeling outputs during various parts of the course. Figure 14 illustrates a three-dimensional (3-D) visualization of inundation effects in downtown Providence after the second Hurricane Rhody landfall that was used during the training course. Visualization tools such as these provide specific actionable outputs that are relevant to emergency and facility managers and can help decision makers to better prepare coastal communities for future risks during extreme weather events. three-dimensional (3-D) visualization of inundation effects in downtown Providence after the second Hurricane Rhody landfall that was used during the training course. Visualization tools such as these provide specific actionable outputs that are relevant to emergency and facility managers and can help decision makers to better prepare coastal communities for future risks during extreme weather events.
Discussion
The simulated effects of the hypothetical Hurricane Rhody presented here represent the worst-case scenario due to the imposed timing of the landfall. The wave/storm surge simulation was set within an oceanic spring tide period, and the first landfall was timed such that the maximum surge occurred at high tide. At the NOAA/NOS Providence tide gauge during the September 2016 spring tide period, the predicted tidal range is approximately 2 m (https://tidesandcurrents.noaa.gov/waterlevels.html?id=8454000). This indicates that a change in the timing of landfall by 6.2 h (one half the period of the dominant M2 tidal constituent) would result in total water level at the time of maximum storm surge approximately 2 m below the levels presented in this study. This indicates that storm surge impacts in the Providence region are strongly dependent on the timing of the wind-driven storm surge.
Comparison of simulations with and without the FPHB indicates that the Barrier is effective in mitigating the effects of all but the most extreme storm surge scenarios. Even though the Barrier is overflowed by the category five Rhody surge occurring at high tide, the area of flooding in Figure 14. A three-dimensional (3-D) visualization of inundation effects in Providence, RI during the hypothetical Hurricane Rhody after its second landfall. The rivers enter the region from the right, and Narragansett Bay proper lies to the left.
Discussion
The simulated effects of the hypothetical Hurricane Rhody presented here represent the worst-case scenario due to the imposed timing of the landfall. The wave/storm surge simulation was set within an oceanic spring tide period, and the first landfall was timed such that the maximum surge occurred at high tide. At the NOAA/NOS Providence tide gauge during the September 2016 spring tide period, the predicted tidal range is approximately 2 m (https://tidesandcurrents.noaa.gov/waterlevels.html? id=8454000). This indicates that a change in the timing of landfall by 6.2 h (one half the period of the dominant M 2 tidal constituent) would result in total water level at the time of maximum storm surge approximately 2 m below the levels presented in this study. This indicates that storm surge impacts in the Providence region are strongly dependent on the timing of the wind-driven storm surge.
Comparison of simulations with and without the FPHB indicates that the Barrier is effective in mitigating the effects of all but the most extreme storm surge scenarios. Even though the Barrier is overflowed by the category five Rhody surge occurring at high tide, the area of flooding in Providence north of the Barrier is much reduced compared to the simulation without the FPHB (not shown). Furthermore, the category three results shown in Figure 10 indicate that the effects of the FPHB in the region outside the protected area in Providence are negligible. Differences in maximum surge between Rhody simulations with and without the FPHB are less than 0.15 m (not shown), an insignificant difference in comparison to the 5-7 m storm surge. This was not unexpected, since the area north of the FPHB is small in comparison to the area of Narragansett Bay.
The storm surge simulations with varying strength Hurricane Rhody forcing show that, under present sea level, the FPHB should be robust in protecting downtown Providence for storms of category four strength and below. Because hurricanes approaching New England from the south cross the mid-Atlantic Bight continental shelf, which has very cold bottom water during late spring through autumn [40], they tend to weaken prior to landfall, as this so-called Cold Pool is mixed vertically, thus reducing the sea surface temperature [41]. Thus, the likelihood of occurrence of a category five hurricane would appear to be low under present climatic conditions. However, under a warming climate, if the mid-Atlantic Bight Cold Pool were to warm, its capability to weaken hurricanes due to sea surface cooling would be reduced, thus increasing the likelihood of an extreme hurricane strike on southern New England.
The simulated rainfall/runoff impacts in Providence from Hurricane Rhody are predicated on the closure of the FPHB throughout the three day event due to the large-scale power outages expected in a severe hurricane that could preclude the opening of the Barrier after the first landfall. If power is unavailable after the first landfall, the assumption is that the Barrier's pumps would be inoperable as well. Clearly, if backup power is available, either the Barrier's gates could be raised after the first landfall, or the Barrier's pumps could be operated to move water across the closed Barrier. In either of these situations, the riverine flooding that we simulated would be avoided entirely or strongly mitigated.
Conclusions
We used numerical simulations of storm surge and river runoff to demonstrate the potential impacts of a severe hurricane strike in the Narragansett Bay region. The hypothetical hurricane, Rhody, is a physically realizable storm with characteristics based on those of hurricanes making landfall in the area during the past 80 years. During its first landfall, Rhody is a strong category three storm, making it stronger than any historical storm impacting the region during the modern era. The regional impacts of Rhody are severe, with large areas around the periphery of Narragansett Bay flooded. Water levels due to wind-driven storm surge associated with the first landfall of Rhody reach 5.5 m (relative to mean sea level) at the head of Narragansett Bay in the area around Providence. The FPHB (7 m height) is not overflowed in this scenario and protects downtown Providence from flooding, such as what occurred during the 1938 hurricane and Hurricane Carol (1954). After the first landfall, the storm moves northeast and then executes a slow loop to the south that is followed by a second landfall in southern New England. The storm surge resulting from the (weaker) hurricane winds at this time is lower than that occurring during the first landfall. The major impact around the time of the second landfall arises due to the assumed closure of the FPHB and the inoperability of its pumps. The intense rainfall predicted to occur around the time of the second landfall produces extremely high discharge in the rivers entering Narragansett Bay above the FPHB, and this is shown to result in severe flooding in Providence. These results emphasize the need to ensure that, after a hurricane strike, either the Barrier can be opened or the Barrier's pumps can be operated to discharge the river inflow across the Barrier.
Simulations with forcing from varying strength versions of hurricane Rhody show that, under present sea level, the FPHB will not protect the downtown area of Providence from storm surge flooding resulting from a land-falling category five storm (maximum winds greater than 70 m/s). As sea level rises over the coming decades, the robustness of the FPHB will clearly be reduced, making the city of Providence more vulnerable to storm surge. | 2019-06-21T02:16:53.924Z | 2019-06-19T00:00:00.000 | {
"year": 2019,
"sha1": "34633541f921d1e8c6285018cfa1403c0a449609",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3263/9/6/265/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4d3c411ec3ab3e62e98b9cc7f57c651fed304e89",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
222077558 | pes2o/s2orc | v3-fos-license | THE EFFECT OF PECTIN AND PECTIN NANOPARTICLES ON SOME QUALITY PROPERTIES OF MEAT SAUSAGE
he aim of this work was to investigate the possibility of using nanoparticles pectin as active colloidal systems in meat sausage. The effect of different concentrations of pectin and pectin nanoparticles as active colloid on the quality properties of the physicochemical, texture profile and sensory evaluation of meat sausage were investigated. The results showed that the particle sizes were decreased with decreasing the concentrations of pectin. The obtained results indicated that the best water holding capacity (lowest value) , plasticity, cooking loss, cooking yield, emulsion stability, texture profile analysis and sensory evaluation were recorded in the nanoparticles pectin sausage samples (P6,P5 and P4) when compared to the ordinary pectin sausage samples (P3,P2 and P1) and control samples (C).
INTRODUCTION
The meat sausages are one form of processed beef which is currently quite popular in the community. Sausages are also familiar as one of the ready-to-eat breakfast menu items among schoolchildren. The sausage meat is a food product derived from a mixture of delicate meat (containing meat not less than 60%) with flour or starch with or without the addition of seasonings and food additives as otherwise permitted and put into sausage casings mentions that the main component of the sausage consists of meat, fat, and water. In addition, the sausages also add additional ingredients such as salts, phosphates, preservatives, coloring, ascorbic acid, protein isolates, and carbohydrates, this lead to need to some material such as Hydrocolloids which have a wide array of functional properties in foods. These include thickening, gelling, emulsifying, stabilization in this products. (Dipjyoti and Suvendu 2010 and Badan Standardisasi Nasional.,1995).
Hydrocolloids have a wide array of functional properties in foods. These include thickening, gelling, emulsifying, stabilization (Dipjyoti and Suvendu 2010). An association colloid is a colloid whose particles are made up of even smaller molecules.
Used for many years to deliver polar, nonpolar, and amphiphilic functional ingredients (Bilska et al., 2009). Though all hydrocolloids thicken aqueous dispersions, only a comparatively few gums form gels. Also the gels thus formed vary widely in gel character and texture. Hence, knowledge of the conditions required for gelling of particular hydrocolloid dispersion, the characteristics of the gel produced and the T texture it confers are very important aspects to design a specific food formulation.
The important gums that find application in food as gelling agents include alginate, pectin, carrageenan, gellan, gelatin, agar, modified starch, methyl cellulose and hydroxypropylmethyl cellulose (Williams 2006).
Pectin is a natural, non toxic and amorphous carbohydrate present in cell wall of all plants tissue and is the secondary product of fruit juice, sunflower oil industries, therefore it is inexpensive, abundantly available, ecofriendly biodegradable product and most important it act as stabilizing agent. Pectin has ability to bind with some organic and inorganic substances via molecular interactions (Liu et al., 2003 'functional foods' where everyday foods carry medicines and supplements, and increased production and cost-effectiveness. In a world where thousands of people starve each day, increased production alone is enough to warrant worldwide support.
For the past few years, the food industry has been investing millions of dollars in
Preparation of Pectin nanoparticles .
Pectin nanoparticles were prepared by an ionic gelation of cation (CaCl2) and pectin as described by Praneet et al., (2008). Briefly, pectin was dissolved in distilled water at 80ºC up to completely dissolving and allowed to cool at room temperature to a concentration ranging from 1.0%, 3.0% and 5.0% solutions. The divalent cation (CaCl2) was dissolved in distilled water to a concentration of 1.0%, 3.0% and 5.0% solutions . Nanoparticles were prepared by drop-wise addition of pectin solution to a divalent cation (Ca Cl2) solution while the solution was stirred under magnetic stirring
Meat sausage preparation:
The meat was washed and cut then minced and mixed with other ingredients (minced meat 60%, fat 17.0%,water 15.0%, soy protein 5%, sodium tryployphosphate 0.3%, salt 1.5%, garlic powder 0.2% and spices 1.0%), then divided into seven treatments and the suitable units of water were added to the mixture and the same an unit was replaced with calcium pectinate 5% by different percentages, so that the percentage of pectin in the final mixture is showed in table (1). Each group of samples were mixed well then stuffing into sausage casing by the filling machine then packaged it in foam dishes then wrapping by polyethylene bags and stored at -18˚C until analysis.
Scanning Transmission Electron Microscopy(TEM):
The surface morphology of pectin nanoparticles was investigated using Transmission Electron Microscope (TEM). Polymer sample was suspended in acetone for 20 min, then a drop of the suspension was placed on a grid and the solvent evaporated prior to imaging .
Rheological properties .
Rheological parameters (shear rate and viscosity) of pectin and pectin nanoparticales were measured at different temperatures using Brookfield Engineering labs DV-III Ultra Rheometer. The samples were placed in a small sample adaptor and a constant temperature water bath was used to maintain the desired temperature.
The viscometer was operated between 10 and 50 rpm and shear stress, shear rate and viscosity data were directly obtained from the instrument, the SC4-21 spindle was selected for the measurement tests.
Physiochemical evaluation:
The water holding capacity (WHC) and plasticity were measured by filter press methods of Soloviev (1966). Cooking loss of samples was calculated as percentage of weight change from raw to cooked state Cooking yield was determined according to Osheba(2013) . pH value was determined to Aitken, et al.,( 1962).
Emulsion stability (ES) was determined using model systems, as described by Ockerman (1985) and Zorba et al., (1993). 10 g of emulsion was weighed into a centrifuge tube capped and immediately heated at 80 o C in a water bath for 30 min.
The tubes were centrifuged at 900 rpm for 15 min and the amounts of water and oil separated were measured, and ES was calculated using the following equations: SW= g of water separated x10 Texture Profile Analysis: Texture Profile Analysis was determined according to Bourne (2003).
Immediately after meat sausage manufacturing the samples were prepared by cooking in boiling water for 10 min and subjected to member̕ s trained sensory panel to evaluate color, odor, texture and overall acceptability of these formulas.
Statistical analysis:
The obtained data were exposed to analysis of variance followed by multiple comparisons between means (P≤0.05) applying LSD. The analysis was carried out using the PRO ANOVA procedure of Statistical Analysis System (SAS, 1996).
Scanning Transmission Electron Microscopy (TEM)
TEM is a powerful tool to understand the morphology as well as particle size of nanomaterials . Three different concentrations (1.0,3.0 and 5.0%) with the same ratio (1:1) of pectin /CaCl2 were used. Transmittance Electron Microscope instrument was used for the determination of the particle size and the morphological structure of the prepared polymer matrix. In general, the synthesized Ca-pectinate were spherical in morphology without forming any agglomerates. The average particle size at 1.0% of the Ca-pectinate solution concentration was 2.3 to 6.2 nm. At a Ca-pectinate concentration of 3.0%, The average particle size was between 2.86 to 7.4nm ,the average particle diameter increased to 9.5 nm the Ca-pectinate concentration was further increased to 5.0%, it is clear that at 5.0% Ca-pectinate concentration, the average particle size was 9.5 nm and the morphology was still comparable with those Usually, a high pH (~ 6.80) is closely related to high shear force or gel strength in meat products as found in Tables (2 and 3). (4) This may be due to that treatments which the nanoparticles pectin samples were
Results in table
The best in water holding capacity (i.e., lowest value) and plasticity as in tables (2 and 3). This result in less lost water amounts during cooking process when compared with the treatments the ordinary pectin samples and C.
CONCLUSION
It could be concluded from the results of this investigation that ordinary and nanoparticles pectin were found to have thixotropic behavior since apparent viscosity of samples contained pectin and pectin nanoparticles decreased with decrease the pectin concentration. The effect of different concentrations of pectin and pectin nanoparticles as linker the components of meat sausage, which improves the quality properties of the physicochemical and sensory products of investigated products. The results indicated also that the best water holding capacity (i.e., lowest value) and plasticity,cooking loss,cooking yield, emulsion stability, texture profile analysis and sensory evaluation was recorded in The treatments of the nanoparticles pectin samples compared to the treatments added of the ordinray pectin samples and C samples, respectively. This study suggests these nanoparticles pectin have a potential use as successful colloids.. | 2020-09-30T16:27:44.061Z | 2019-11-05T00:00:00.000 | {
"year": 2019,
"sha1": "3a4d89813b6bae12dac0af0e0bd2ebfed66656c4",
"oa_license": null,
"oa_url": "https://doi.org/10.21608/ejar.2019.111097",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3722b0ac077df2988748b75429242df802a004c9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
34457454 | pes2o/s2orc | v3-fos-license | The Relation between the Level of Serum Tumor Necrosis Factor – Alpha and Hemodialysis Adequacy in Diabetic and Non Diabetic Patients on Maintenance Hemodialysis
Background: Hemodialysis is still the most common renal replacement therapy (RRT) modality in end stage renal disease patients (ESRD), the first problem to be faced when choosing hemodialysis for patients with ESRD is the vascular access, dia¬lysis delivery should be adequate not only to improve quality of life but also to prolong sur¬vival, quality of life adjusted for life expectancy defined kt/v of 1.3 as the optimal cost-effective dialysis, An ideal access delivers a flow rate to the dialyzer adequate for the dialysis prescription, has a long use-life, and has a low rate of complications (eg, infection, stenosis, thrombosis, aneurysm, and limb ischemia). Of available accesses, the surgically created fistula comes closest to fulfilling these criteria, working fistula must have all the following characteristics; blood flow adequate to support dialysis which usually equates to blood flow greater than 600 ml/min, a diameter greater than 0.6 cm, with a location accessible for cannulation and a depth of approximately 0.6 cm (ideally between 0.5 and 1cm) from the skin surface. In hemodialysis patients with an arteriovenous fistula (AVF), access failure is primarily due to fistula stenosis, which predisposes to thrombosis and subsequent access loss. The risk for access failure differs individually, Fistula stenosis is histologically characterized by endothelial cell injury and intimal hyperplasia induced by factors like TNF-α, which could induce proliferation of vascular smooth muscles leading to subsequent intimal hyperplasia. Resulting in fistula stenosis and subsequent access failure. TNFalpha influences the risk for hemodialysis access failure in diabetic ESRD patients there is advanced calcified atherosclerosis which leads to frequently inadequate arterial inflow and eventually also to venous run-off problems. So ESRD patients with diabetes have worse access survival rates and hemodialysis adequacy.
Introduction
Hemodialysis is still the most common renal replacement therapy (RRT) modality in end stage renal disease patients (ESRD). The first problem to be faced when choosing hemodialysis for patients with ESRD is the vascular access, In diabetic ESRD patients there is advanced calcified atherosclerosis which leads to frequently inadequate arterial inflow and eventually also to venous run-off problems. So ESRD patients with diabetes have worse access survival rates and hemodialysis adequacy [1]. Dia¬lysis delivery should be adequate not only to improve quality of life but also to prolong sur¬vival [2]. The aim of dialysis is thus, to decrease morbidity, increase quality of life and prolong life span [2]. To achieve this dialysis must be performed effectively [3]. Inadequate dose of dia¬lysis increases duration of hospitalization and the overall cost of care [4]. One method of assessing dialysis adequacy is cal¬culation of kt/v. This index reflects the effi¬ciency of dialysis and correlates with mortality and morbidity rate of patients. Quality of life adjusted for life expectancy defined kt/v of 1.3 as the optimal cost-effective dialysis dose [4].
Vascular access is vital to delivering adequate hemodialysis therapy. The type of vascular access used in HD patients is recognized to have a significant influence on survival. The use of a tunneled cuffed catheter (TCC) is associated with a substantially greater risk of sepsis, hospitalization and mortality compared to the use of AVF [5][6][7][8]. An ideal access delivers a flow rate to the dialyzer adequate for the dialysis prescription, has a long uselife, and has a low rate of complications (eg, infection, stenosis, thrombosis, aneurysm, and limb ischemia). Of available accesses, the surgically created fistula comes closest to fulfilling these criteria [9,10]. The National kidney Foundation (NKF) issued the Kidney Disease Outcomes Quality Initiative (KDOQI) guidelines for Vascular Access in an effort to improve patient survival and quality of life, reduce morbidity, and increase efficiency of care [9].
Two primary goals were originally put forth in vascular access guidelines: a. Increase the placement of native fistulae.
In general, a working fistula must have all the following characteristics; blood flow adequate to support dialysis which usually equates to blood flow greater than 600 ml/min, a diameter greater than 0.6 cm, with a location accessible for cannulation and a depth of approximately 0.6 cm (ideally between 0.5 and 1cm) from the skin surface [9]. Access stenosis or thrombosis is a costly threat to patency in association with significant morbidity to the patient. Native fistula patency is significantly better than synthetic grafts and should be considered as the first method in maintaining long-term vascular access patency [11,12].
Studies investigating the pathophysiology of vascular access stenosis which predisposes to thrombosis suggest that the endothelial repair response to injury in the face of excess growth promoters, inflammation and oxidative stress leads to luminal hyperplastic intimal growth. In the presence of prothrombotic environment in the renal patient, vascular thrombosis can occur. The typical lesion of access thrombosis is new intimal vascular smooth muscle cell proliferation in the anastomotic draining vein, this can occur in response to endothelial injury due to repeated vein cannulation. Approximately 50-70% of lesions are within 3-5 cm of the vein anastomosis [13].
In hemodialysis patients with an arteriovenous (AV) fistula, access failure is primarily due to fistula stenosis, which predisposes to thrombosis and subsequent access loss. TNF-alpha influences the risk for hemodialysis access failure. The risk for access failure differs individually, Fistula stenosis is histologically characterized by endothelial cell injury and intimal hyperplasia induced by factors like TNF-α, which could induce proliferation of vascular smooth muscles leading to subsequent intimal hyperplasia. Resulting in fistula stenosis and subsequent access failure [14]. Vascular access dysfunction is a well-known cause for a reduction in delivered dialysis, although the prevalence of this problem as a cause for a fall in Kt/v is not known, Inadequate vascular access flow rate due to stenosis leads to mixing of blood from the venous side of the dialysis circuit into the arterial inflow line. This reduces the concentration gradient and reduces net removal for dialyzable solutes [15].
Aim of the study
The aim of this work is to study the relation between serum tumor necrosis factor-alpha and hemodialysis adequacy in diabetic and non-diabetic ESRD patients on maintenance hemodialysis by early detection of AVF dysfunction.
Methods & Subjects
The study will be conducted in accordance with the ethical guidelines of the 1975 Declaration of Helsinki and informed consent will be obtained from each patient. The study was carried out in Dialysis units in Armed
Results
Our subjects were divided into 3 main groups with 4 subgroups: Group I: 30 diabetic ESRD patients on HD divided to 2 subgroups: 1) Group Ia: 15 diabetic ESRD patients on HD with functioning AVF between 3 months and 6 months.
2) Group Ib: 15 diabetic ESRD patients on HD with functioning AVF more than one year.
Group II: 30 non-diabetic ESRD patients on HD divided to 2 subgroups: I. Group IIa: 15 non-diabetic ESRD patients on HD with functioning AVF between 3 months and 6 months.
II. Group IIb: 15 non-diabetics ESRD patients on HD with functioning AVF more than one year (Table 1).
Discussion
The present study was conducted on sixty individuals: thirty of them were diabetic End stage renal disease (ESRD) patients (group I) and thirty of them were non-diabetic End stage renal disease patients (group II) of matched age and sex. The main etiology of ESRD observed in our study was diabetic kidney disease (43%) or hypertensive nephropathy (35%) which is supported by what observed by Robert N & Allan J Collins [16] who found that (43.8%) of their patients had ESRD secondary to diabetic nephropathy and (26.8%) due to hypertensive nephropathy. An increase in the level of serum (TNF-α) in the diabetic group was observed in the present study in comparison to non-diabetic group and to healthy controls. In agreement with our results Lechleitner M et al. [17] found that TNF-alpha plasma levels are increased in type 1 diabetes mellitus and reveal a significant association with metabolic long-term control parameters, HbA1c.
Also Swaroop JJ et al. [18] suggest the possible role of TNF-α in the pathogenesis of type-2 diabetes mellitus and the importance of reducing obesity to prevent elevated levels of the cytokine and related complications.
Also Hu FB et al. [19] support the role of inflammation in the pathogenesis of type 2 diabetes. Elevated CRP levels are a strong independent predictor of type 2 diabetes and may mediate associations of TNF-alpha and IL-6 with type 2 diabetes. It was observed in this study that there was statistical significant higher incidence of history of arteriovenous fistula failure in diabetic patients in comparing with non-diabetic patients. Other studies supported our finding like Renan Nunes da Cruz et al. [20] and they found that diabetic patients had shorter mean duration of AVF patency and lower rate of access survival (Figure 1).
Huijbregts HJT et al. [21] found that hemodialysis patients with diabetes can be expected to have reduced primary functional native AVF patency rates with high failure rate. According to AVF vein diameter in this study it was observed that the vein diameter (arterialized) was statistical significant decreased in diabetic ESRD group in compared to the non-diabetic ESRD group, In agreement with our results Conte MS et al. [22] found that diabetes was a significant, negative predictor of venous remodeling over the 24-week study (P =. 02). The model-predicted change in lumen diameter from 2 to 24 weeks was -0.7 mm in diabetic patients (n = 11) and +2.4 mm in non-diabetic patients (n = 15), a difference of 3.1 mm. A significant decrease in the Kt/v in diabetic ESRD group in compared to non-diabetic ESRD group depending on high incidence of arteriovenous fistula stenosis in diabetic group, in agreement with our findings Robbin ML et al. [23] revealed that patients with diabetes were significantly less likely to have a well-functioning AVF than patients without diabetes which is important for adequate hemodialysis.
It was observed in our study that hemodialysis adequacy (kt/v) of the non-diabetic group with AVF duration 3-6 months (Group IIa) was statistically significantly higher in comparing with the other 3 groups. This is supported by Anees M et al. [24] who found that non-diabetic patients had a better quality of life (QOL) as compared to diabetic patients plus that duration of dialysis had a reverse correlation with the overall QOL. It was observed in the present study that the level of TNF alpha is significantly positively correlated with duration of dialysis in total patients, consistent with our findings Kir HM et al. [25] support that TNF-alpha was increased for all patients with chronic renal failure (CRF), both hemodialysis and peritoneal dialysis. It was observed in the present study that the level of TNF alpha is significantly positively correlated with duration of arteriovenous fistula in both diabetic & non-diabetic group, consistent with our finding Chang CJ et al. [26] demonstrated that the thrombosed arteriovenous fistula was characterized by marked inflammation.
It was observed in our study that there is TNF alpha is positively correlated with fasting blood glucose. Consistent with our finding Niti Agarwal et al. [27], suggests TNF-alpha rising with elevated fasting blood glucose. It was observed in our study that there is TNF-alpha is consistent negatively correlated with albumin. Consistent with our finding Undurti N Das et al. [28] found that Tumor necrosis factor alpha induces hypoalbuminemia and polyunsaturated fatty acid deficiency. It was observed in our study that TNF alpha is positively correlated with calcium consistent with our finding. Harry L Uy et al. [29] demonstrate that TNFalpha enhances PTHrP-mediated hypercalcemia. | 2019-01-09T14:05:54.753Z | 2016-04-04T00:00:00.000 | {
"year": 2016,
"sha1": "e78aa76e8cd9c5bda16f60931ade3e30c918e160",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.15406/unoaj.2016.03.00074",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ee6f5e76c90bb5c39f65a0f41d59a94a7966c150",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54201178 | pes2o/s2orc | v3-fos-license | DEVISING A METHOD FOR THE AUTOMATED CALCULATION OF TRAIN FORMATION PLAN BY EMPLOYING GENETIC ALGORITHMS
The process of accumulation of railcars and formation of trains plays a key role. Duration of the accumulation of one train depends on the capacity of railcar traffic of the given purpose. Basic indicators in the process of accumulation of trains are the total railcar-hour accumulation for the given direction per 24 hours, average downtime of one railcar during accumulation, and mean time of train accumulation. To accelerate the process of accumulation of trains, priority dispatch is used for trains that include the final groups of railcars. They also provide for arrival of large groups of railcars by the end of the process of train accumulation. They also form trains with increased weight.
Introduction
The process of accumulation of railcars and formation of trains plays a key role. Duration of the accumulation of one train depends on the capacity of railcar traffic of the given purpose. Basic indicators in the process of accumulation of trains are the total railcar-hour accumulation for the given direction per 24 hours, average downtime of one railcar during accumulation, and mean time of train accumulation.

To accelerate the process of accumulation of trains, priority dispatch is used for trains that include the final groups of railcars. They also provide for arrival of large groups of railcars by the end of the process of train accumulation. They also form trains with increased weight.

The largest opportunities for reducing daily railcar-hour downtime lie in rationalizing the organization of railcar traffic. That is why the development of an efficient plan for the formation of trains (PFT) is the most effective measure to reduce daily accumulation railcar-hours. The calculation of PFT is, however, a complex combinatorial problem. Existing classical methods for calculating PFT can be applied effectively only to short polygons. In addition, they do not allow taking into account the limitations that are inherent to the real objects of railway infrastructure. For these reasons, at present, building a network PFT at the Ukrainian Railways is actually carried out by expert method. Creating a high-quality procedure for calculating PFT will make it possible to build an automated system for the calculation of PFT. An automated system will allow recalculation of PFT not only once a year but whenever the need arises. Such a need is predetermined by the high variability of railcar traffic under conditions of market economy.
Literature review and problem statement
PFT implies organization of railcar traffic at sorting, sectional, and freight stations and shall ensure:
- improving railcar efficiency and speeding up freight delivery by reducing the time spent at technical stations during accumulation and processing;
- decreasing the cost of transportation by reducing the time and cost of processing railcars and concentrating shunting operations predominantly at well-equipped marshalling yards;
- rational allocation of marshalling operations among stations in accordance with their technical equipment.
Paper [1] outlined the main challenges that stand in the way of building an effective system of organization of railcar traffic under modern conditions. Article [2] presented principles for setting the problem of calculating PFT as a mathematical programming problem. Study [3] proposed a system of techniques for the organization of railcar traffic, which, by means of organizational methods, is able to adapt to the rapid variability of railcar traffic under modern conditions. Paper [4] presented a mathematical model for the calculation of PFT, constructed using concepts from set theory. Article [5] proposed a technique for operational correction of PFT, which takes into account the settings, parameters, and condition of the rolling stock, special conditions for transportation, and requirements of the operators and owners of rolling stock.

On the North American continent, a key task in the field of organization of railcar traffic is the task of arranging railcars in sections and assigning them to trains. At the same time, a task of planning the operations of locomotives and locomotive crews is solved. Article [6] proposes a specially designed meta-heuristic method, but the small dimensionality of the problem given as an example of its implementation testifies to its limitations. Paper [7] proposed to state the problem of routing railcar traffic as a problem of determining a graph structure: railway stations represent the vertices of the graph, and blocks of railcars the arcs. To solve the problem, an algorithm based on the branch-and-bound algorithm is proposed. The algorithm generates a route for each block by solving a shortest-path problem. As a drawback of this technique, we note a failure to comply with the regulations on the number of railcars in trains. In addition, the use of this method is difficult for real polygons: when railcars and blocks are operated on simultaneously over the whole range, the railcar traffic routing problem passes into the class of problems of large and very large dimensionality.

The article by Swedish researchers [8], published in the magazine of the European Research Consortium for Informatics and Mathematics, proposes a method for solving the problem of disbandment and formation of trains at a marshalling yard. The essence of this method is the multi-stage formation of "temporary" trains, whose number is governed by the model depending on the availability of free tracks. The problem of forming trains is stated as a multi-product flow problem, which is solved with a computer program-solver that employs an algorithm combining several methods of mathematical programming with constraints. Article [9] proposed mathematical models based on neuro-fuzzy networks to determine the feasibility of forming group trains and their routes. The article also proposed an original method for operational correction of PFT using evolutionary selection. Paper [10] proposed an original approach to the rational allocation of shunting operations among the technical stations of a railway network. The proposed method takes into account the requests of consignors and guarantees timely delivery of cargo within the time agreed with the customer.

Although the basic principle of reducing accumulation railcar-hours remains unchanged, the examined methods have almost lost relevance in terms of practical application. They were developed for manual calculation, and their use is only possible for polygons that include no more than 10-12 technical stations and 1-2 branches.
The aim and tasks of the study
The aim of this study is to develop a method for calculating a network plan for the formation of single-group freight trains. The method should provide for accuracy of calculations and the possibility of taking into account constraints on the throughput and processing capacities of stations and the throughput capacity of sections. This method is a key component in constructing a modern multi-level automated system for managing railcar traffic.

To achieve the set aim, the following tasks are to be solved:
- based on the analysis of existing methods, to select a modern mathematical apparatus that utilizes the capabilities of modern computing technology;
- to represent the solution of the problem in a form that is applicable for the chosen method of optimization;
- to create a mathematical model, consisting of an objective function and a system of constraints, to solve the optimization problem of calculating PFT;
- to verify the adequacy of the model and the efficiency of the devised optimization method by simulation.
Materials and methods for examining the process of constructing PFT
The problem of calculating PFT is a complex task of the combinatorial type. Solving problems of this type is associated with operating in large solution spaces. The variables in such problems are discrete, which is why the objective functions are neither smooth nor differentiable. One of the few modern mathematical apparatuses for which such properties of combinatorial problems pose no difficulty is genetic algorithms (GA). They do not belong to the class of gradient methods; rather, they are representatives of the class of stochastic optimization methods. A fundamental difference between genetic algorithms and, for example, the method of random search or the method of stochastic approximation is that a genetic algorithm works with a whole population of candidate solutions, whereas the other algorithms work with a single solution only, moving towards the optimum by improving that one solution.

To solve a problem using a GA, its solution must be represented in the form of a chromosome. A chromosome denotes a sequence of variables; each variable is represented by a gene, and each gene is assigned a place in the chromosome called a locus. A genetic algorithm simulates the biological mechanisms of breeding, mutation, crossover, and selection within a population of individuals. An objective function is used to score the degree of an individual's adaptability and is called the adaptability function or fitness function.
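The mechanics just described can be condensed into a short Python sketch of a generic GA loop. The fitness function, chromosome length, and gene alphabet below are placeholders, not the PFT model itself, which is developed in the rest of this section.

```python
import random

def genetic_algorithm(fitness, n_genes, gene_values, pop_size=50,
                      generations=200, p_mut=0.02):
    """Minimal GA: integer chromosomes, truncation selection,
    one-point crossover, and per-gene mutation. Minimizes `fitness`."""
    gene_values = list(gene_values)
    pop = [[random.choice(gene_values) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[:pop_size // 2]  # keep the best half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_genes)              # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.choice(gene_values) if random.random() < p_mut
                     else g for g in child]                 # per-gene mutation
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Toy usage: find the chromosome with the smallest sum of genes.
best = genetic_algorithm(fitness=sum, n_genes=10, gene_values=range(1, 6))
```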
One can represent a solution as follows: each section between technical stations is represented by a region of the chromosome. The number of genes included in such a region equals the number of railcar traffic flows that pass through this section of the polygon. If movement in the section occurs in both the odd and even directions, the section is matched with two regions of the chromosome, one for each direction. Each gene within its region can take integer values in the range from 1 to N_i, where N_i is the number of flows that pass through the section. If genes within a single region take the same value, this is interpreted as the unification of the flows represented by those genes within the section. We thus simulate the partition of the set of flows passing through the section into non-empty subsets, as shown in Fig. 1. Within the first region AB, the genes that correspond to flows AE and AF can take the same value, for example (5); this means that flows AE and AF will pass through section AB together, as part of one destination. In region BC, the genes corresponding to flows AE and AF also take the same value (3), meaning that along section BC flows AE and AF again travel together. Thus, when evaluating the objective (fitness) function, it is necessary to perform a logic-control procedure. This procedure analyzes the flow subsets of adjacent regions and merges those subsets that contain the same composition of flows into one destination, that is, it collects destinations bit by bit. Next, each destination receives a score, which in the PFT problem corresponds to railcar-hours of downtime. These railcar-hours match the cost of accumulating the destination and are calculated as the product of the norm of the number of railcars in a freight train in the section and the parameter of accumulation at the dispatch station of the destination. At the last destination station, the railcar-hours for additional processing of transit railcars are scored for those flows whose station of final destination is not their last point of transportation. Fig. 1 shows the scheme of encoding a PFT solution for solving with a GA.
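A minimal sketch of this encoding and of the logic-control procedure, assuming a chromosome stored as a dictionary that maps each directed section to the gene values of its flows; the data layout and function names are illustrative, not taken from the paper.

```python
from collections import defaultdict

def section_groups(chromosome):
    """chromosome: {section: {flow: gene_value}} for one direction.
    Within each section, flows whose genes carry the same value are
    grouped into one subset (they pass the section together)."""
    groups = {}
    for section, genes in chromosome.items():
        by_value = defaultdict(set)
        for flow, value in genes.items():
            by_value[value].add(flow)
        groups[section] = {frozenset(s) for s in by_value.values()}
    return groups

def collect_destinations(route_sections, groups):
    """Logic control: for every subset of flows, find the sections of the
    route where that exact subset travels together. A full implementation
    would additionally check that these sections are consecutive."""
    all_subsets = set().union(*(groups[s] for s in route_sections))
    return {subset: [s for s in route_sections if subset in groups[s]]
            for subset in all_subsets}

# Toy polygon A-B-C with flows AE and AF (cf. Fig. 1): equal gene values
# in regions AB and BC mean both flows ride one destination together.
chrom = {"AB": {"AE": 5, "AF": 5}, "BC": {"AE": 3, "AF": 3}}
destinations = collect_destinations(["AB", "BC"], section_groups(chrom))
# -> {frozenset({'AE', 'AF'}): ['AB', 'BC']}
```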
Such a representation of grouping the flows in each section corresponds to the set-theoretic notion of partitioning a set into subsets; the number of possible partitions for each station is therefore equal to the corresponding Bell number.
Construction of PFT is a complex combinatorial problem, which implies considerable computational complexity. However, by using modern computing technology and a contemporary mathematical apparatus, this complexity can be overcome. An important parameter directly related to the time railcars spend idle during the formation of trains is the parameter of accumulation. The possibilities to influence the magnitude of the accumulation parameter are small, within 5-10 % only. However, taking that influence into account when calculating the train formation plan makes it possible to improve the parameters of the calculated plan. For this purpose, the parameters of train accumulation by destination at the marshalling stations must enter the model's objective function as stochastic variables. Thus, the problem of finding the optimal PFT in this statement belongs to the problems of stochastic combinatorial optimization. The objective function is written in terms of the following quantities:
- w is the number of all possible destinations;
- m_i is the norm of the number of railcars in a train to the i-th destination;
- c_i is the current parameter of accumulation to the i-th destination;
- c̄_i is the mathematical expectation of the parameter of accumulation to the i-th destination;
- s_i is the root-mean-square deviation of the parameter of accumulation to the i-th destination;
- t_i^proc is the processing time of a train to the i-th destination at the station of disbandment;
- t_i^dis is the time of disbandment of a train to the i-th destination at the station of disbandment;
- n_u is the number of railcars of the u-th flow;
- k is the number of flows of railcar traffic;
- q_i is the number of technical stations to the i-th destination;
- z_i is the number of sections between technical stations to the i-th destination;
- t_hi^tr is the processing time of a transit train without processing at the h-th station to the i-th destination;
- V_ir^s is the sectional speed of freight trains in the r-th section to the i-th destination;
- x_ij is a variable that takes value 1 if the i-th destination includes railcar traffic of the j-th flow, and 0 otherwise;
- e_wh is the cost of one railcar-hour;
- Sgn is the sign function;
- erfc is the Laplace complementary error function, used to calculate the probability of the accumulation parameter exceeding its mean value.
The first term in square brackets represents the cost of railcar-hours for accumulation to the i-th destination. The second term represents the excess cost of railcar-hours for accumulation to the i-th destination associated with the risk of exceeding the current value of the accumulation parameter; these costs model risk. When the current value of the accumulation parameter is less than the mathematical expectation of this magnitude, there is a risk of exceeding it. The Laplace complementary error function is a non-elementary function that represents the Laplace probability integral.
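A Python sketch of one possible reading of these two terms, assuming the accumulation parameter is normally distributed around its mean; the branch on the current parameter plays the role of the Sgn factor, and the tail probability is computed with erfc. Treat this as an interpretation under stated assumptions, not the authors' exact expression.

```python
import math

def accumulation_cost(m, c, c_mean, c_sd, e_wh=1.0):
    """Cost of accumulating one destination: railcar-hours m*c plus a
    risk surcharge applied only when the current accumulation parameter
    c is below its mean; the tail probability P(c' > c_mean) for
    c' ~ N(c, c_sd) is computed via the complementary error function."""
    base = m * c                                    # railcar-hours of accumulation
    risk = 0.0
    if c < c_mean:                                  # risk of exceeding the mean
        p_exceed = 0.5 * math.erfc((c_mean - c) / (c_sd * math.sqrt(2)))
        risk = p_exceed * m * (c_mean - c)          # expected excess railcar-hours
    return e_wh * (base + risk)

def plan_cost(destinations, e_wh=1.0):
    """Sum the accumulation cost over all destinations of a candidate plan.
    Each destination is a dict with keys m, c, c_mean, c_sd (illustrative)."""
    return sum(accumulation_cost(d["m"], d["c"], d["c_mean"], d["c_sd"], e_wh)
               for d in destinations)

# Illustrative numbers: norm of 57 railcars, current parameter 10.2 h,
# mean 10.8 h, standard deviation 0.5 h.
cost = accumulation_cost(m=57, c=10.2, c_mean=10.8, c_sd=0.5)
```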
One of the main goals when solving the problem of calculating PFT is the optimal allocation of work among technical stations. The key parameter of a technical station is its processing capacity, which is defined by its rail track development, the throughput capacity of bottlenecks, and the processing capacity of shunting devices. It is taken into account in the form of a limitation written in terms of:
- g_is, which takes value 1 if the s-th station is the disbandment station for the i-th destination, and 0 otherwise;
- N_s^proc, the processing capacity of the s-th station;
- S, the number of technical stations on the polygon.

Simultaneously with processing freight trains, a technical station processes and passes transit freight trains and passenger trains. Therefore, in addition to processing capacity, the capabilities of each technical station are limited by its throughput. Station throughput capacity is defined as the number of freight trains (with and without processing) and the assigned number of passenger trains that can pass the station per day in all directions. It is determined under the operating conditions of the station to ensure full utilization of available means, based on the equipment available [11].

Station throughput capacity is determined by the lowest value of throughput of its receiving-dispatching tracks and necks. The limitation on the throughput capacity of technical stations is written in terms of:
- w_is, which takes value 1 if the s-th station is part of the route to the i-th destination (a station of formation, disbandment, or transit), and 0 otherwise;
- N_s^cap, the throughput capacity of the s-th station;
- N_s^pas, the number of passenger trains that pass the station in 24 hours.

When creating a mathematical model for drawing up a plan of train formation, one should also take into account that not only technical stations but also the railway sections connecting them have limitations on throughput. The throughput capacity of a railroad line is the largest number of trains or pairs of trains of defined weight that can pass in 24 hours. It is determined by the available technical equipment, the type and capacity of rolling stock, and the type of train traffic schedule. The available technical means include the number of tracks, the kind of blocking on section tracks (automatic or semi-automatic), the power of traction substations, etc. The limitation on the throughput of sections is written in terms of:
- h_id, which takes value 1 if the d-th section belongs to the route of the i-th destination, and 0 otherwise;
- N_d^cap, the throughput of the d-th section;
- N_d^pas, the number of passenger trains that pass the d-th section over 24 hours;
- D, the number of sections on the polygon.
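Because standard crossover and mutation do not preserve feasibility, such limitations are commonly folded into the GA fitness as penalty terms. A sketch under assumed data structures; the dictionary layouts and field names are illustrative, not from the paper.

```python
def capacity_penalty(plan, stations, sections, weight=1e6):
    """plan: list of destination dicts, e.g.
    {"trains": 3, "disband_at": "C", "route": ["A", "B", "C"],
     "sections": ["AB", "BC"]}.
    stations/sections: dicts keyed by name with capacity figures.
    Returns a penalty proportional to every violated unit of capacity,
    to be added to the railcar-hour objective of an infeasible plan."""
    p = 0.0
    for name, st in stations.items():
        processed = sum(d["trains"] for d in plan if d["disband_at"] == name)
        p += max(0.0, processed - st["proc_capacity"])      # station processing
        passing = sum(d["trains"] for d in plan if name in d["route"])
        p += max(0.0, passing + st["passenger"]
                 - st["capacity"])                          # station throughput
    for name, sec in sections.items():
        trains = sum(d["trains"] for d in plan if name in d["sections"])
        p += max(0.0, trains + sec["passenger"]
                 - sec["capacity"])                         # section throughput
    return weight * p
```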
Results of examining the process of calculating PFT
For the purpose of comparing the existing methods with the proposed one and verifying its adequacy, it is appropriate to solve, using the proposed method, a problem that has already been solved by an existing method. Article [12] gives an example of solving a problem by the combined analytical comparison method. Fig. 2 shows the initial data for the problem: Fig. 2, a shows a schematic of the polygon, the railcar-hours of accumulation, and the magnitudes of time saved when a transit railcar passes without processing; Fig. 2, b shows a diagram of railcar traffic. Fig. 3 shows the solution presented in [12], obtained by the combined analytical comparison method.

The solution contains 8 destinations; the calculation of costs according to the optimal plan is given in Table 1. Total railcar-hours amounted to 6528, of which 1328 railcar-hours are additional costs for processing transit railcars and 5200 railcar-hours are costs for the accumulation of destinations. The same initial data were used in [13] to demonstrate the adequacy of a newly proposed method for calculating a plan of train formation, called the method of sequential growth of railcar traffic flows [13]. The optimal plan obtained in [13] by the method of sequential growth of railcar traffic flows coincides with the result obtained by the combined analytical comparison method in [12].

Based on the proposed method, we created software in Matlab. For simulation, we used the initial data given in [12, 13]. Fig. 4 shows the optimal PFT and the convergence dynamics of the objective function. The computation time was about 2 min. As can be seen from Fig. 4, b, the simulation demonstrated rapid convergence of the algorithm.
Table 1. Calculation of costs according to the optimal plan obtained by the combined analytical comparison method.

Table 2 gives the calculation of railcar-hour costs according to the calculated optimal plan.

Table 2. Calculation of costs according to the optimal plan obtained by the program created on the basis of the proposed model.

According to the data in Table 2, the costs comprise 1728 railcar-hours additionally spent on processing transit railcars and 4600 railcar-hours spent on the accumulation of destinations; the total cost of the plan is 6328 railcar-hours. Additional processing costs are 400 railcar-hours larger than in the variant of the plan obtained in [12]. The optimal plan, however, contains 7 destinations rather than 8, as in the plan obtained by the classical methods. This reduces the cost for the accumulation of destinations.
Fig. 1. Scheme of encoding a solution for the problem of finding the optimal PFT in the form of a chromosome.
Fig. 2. Initial data for calculating a train formation plan: a - plan of polygon, b - diagram of railcar traffic. | 2018-12-02T17:08:34.564Z | 2017-02-27T00:00:00.000 | {
"year": 2017,
"sha1": "ed4b3368a2bcdc49903e35af14a9b22a8e14eedc",
"oa_license": "CCBY",
"oa_url": "http://journals.uran.ua/eejet/article/download/93276/89921",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ed4b3368a2bcdc49903e35af14a9b22a8e14eedc",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
269040627 | pes2o/s2orc | v3-fos-license | People see more of their biases in algorithms
Significance Algorithms incorporate biases in the human decisions that comprise their training data, which can amplify and codify discrimination. We examine whether algorithmic biases can be used to reveal and help correct undetected biases of the human decision-makers on which algorithms are trained. We show that people see more of their biases in the decisions of algorithms than in their own decisions. Because algorithms reveal more of their biases, people are also more likely to correct their biases when decisions are attributed to an algorithm than to themselves. Recognizing bias is a crucial first step for people and organizations motivated to reduce their biases. Our findings illustrate how to use algorithms as mirrors to reveal and debias human decision-making.
Algorithmic bias occurs when algorithms incorporate biases in the human decisions on which they are trained. We find that people see more of their biases (e.g., age, gender, race) in the decisions of algorithms than in their own decisions. Research participants saw more bias in the decisions of algorithms trained on their decisions than in their own decisions, even when those decisions were the same and participants were incentivized to reveal their true beliefs. By contrast, participants saw as much bias in the decisions of algorithms trained on their decisions as in the decisions of other participants and algorithms trained on the decisions of other participants. Cognitive psychological processes and motivated reasoning help explain why people see more of their biases in algorithms. Research participants most susceptible to the bias blind spot were most likely to see more bias in algorithms than in self. Participants were also more likely to perceive algorithms than themselves to have been influenced by irrelevant biasing attributes (e.g., race) but not by relevant attributes (e.g., user reviews). Because participants saw more of their biases in algorithms than in themselves, they were more likely to make debiasing corrections to decisions attributed to an algorithm than to themselves. Our findings show that bias is more readily perceived in algorithms than in self and suggest how to use algorithms to reveal and correct biased human decisions.
algorithm | algorithmic bias | bias blind spot | debiasing
Algorithms learn and incorporate biases in the human decisions on which they are trained (1-5). Algorithmic bias amplifies and codifies discrimination due to the scale at which algorithms are used, in applications from deciding who is hired (1, 6) to who receives healthcare or bail (2, 7). Algorithmic biases also make transparent human biases that had been opaque when human decisions were unspecified or unaggregated (8, 9). When Amazon trained an algorithm on its past human hiring decisions, for example, the hiring algorithm revealed a gender bias that had previously escaped notice (10). We examine whether algorithmic biases can be used to help human decision makers recognize and correct for their biases.

People have access to the output of their intuitive decisions but lack access to the associative processes by which those decisions were made (11, 12). Because people assess bias in their decision-making by introspectively examining their decision-making processes, bias in the self often goes unrecognized. By contrast, people more readily detect biases in the decisions of others because others are judged by their decisions rather than their decision-making processes. The phenomenon that people more readily perceive bias in the decisions of others than in their own decisions is the bias blind spot (13-16).

We theorize that similar psychological processes lead people to perceive more of their biases in the decisions of algorithms than in their decisions. We propose that people are more able and motivated (17) to see their biases in algorithms because, like other people, the decision-making process of algorithms is opaque (18-20) and people perceive decisions made by algorithms like decisions made by other people (21, 22). People should thus use the same criteria to evaluate bias in algorithms as they use to evaluate bias in other people and be less threatened by and dismissive of bias in the decisions of algorithms than of self, even when algorithms are trained on their decisions (23, 24). In nine preregistered experiments (N = 6,175), we find evidence that people perceive more of their biases (e.g., irrelevant effects of age, attractiveness, gender, and race on interpersonal judgments) in the decisions of algorithms than in their decisions. The revelatory effect of algorithms holds when research participants are given incentives that encourage them to reveal their true beliefs and discourage strategic self-presentation. We find that people are as likely to see biases in algorithms trained on their decisions as biases in the decisions of other people and algorithms trained on the decisions of other people. Furthermore, we show this revelatory effect of algorithms is driven by cognitive and motivated processes: it is larger for people more prone to the bias blind spot and larger when people are motivated to appear unprejudiced. Finally, we find that because people see more of their biases in algorithms, people are more likely to correct their biased decisions when those decisions are attributed to an algorithm than to themselves. Our findings show how to use algorithms to reveal and correct bias in human decision-making and provide evidence that the psychological processes used to perceive algorithms are scaffolded on the processes used to perceive other people.
Paradigm
We used a similar paradigm to test our hypotheses in all experiments (see SI Appendix for additional details). All participants rated a set of targets (i.e., Airbnb listings or rideshare drivers) that varied randomly on relevant attributes (e.g., ratings, number of reviews) and varied systematically on a potentially biasing irrelevant attribute (i.e., age, attractiveness, gender, race). In the first part of the experiment, participants sequentially rated each target on analog sliders (e.g., likelihood of renting, perceived driving ability) in one of two phases (A and B), with targets presented randomly without replacement.

In all experiments, we included two experimental conditions in which we showed participants a summary of their target ratings from phase B. In the "self" condition, we truthfully attributed those target ratings to the participant (e.g., "your ratings"). In the "self-trained algorithm" condition, we deceptively attributed those target ratings to an algorithm trained on other target ratings made by the participant (e.g., "predicted by an algorithm trained on your data from phase A"). In experiments 1 and 2, we added two "real self-trained algorithm" conditions in which we truthfully showed participants predicted phase B target ratings from a real algorithm trained on their phase A target ratings. In experiments 3 and 4, we added two conditions in which we presented participants with a summary of their target ratings from phase B, but we attributed their ratings to other participants in the experiment ("others" condition) or to an algorithm trained on the phase A target ratings of other participants in the experiment ("other-trained algorithm" condition).

All participants were then told about a research finding that explained how the irrelevant attribute might bias target ratings (e.g., age, attractiveness, gender, race), and participants reported the extent to which they perceived that "you [the algorithm/other participants]" showed the biasing tendency on a seven-point Likert scale with endpoints 1 (not at all) and 7 (very much). This absolute judgment of perceived bias was adapted from bias blind spot research (13, 15) but avoids a potential confound in comparative judgments of perceived bias between self and other (25). Perceived bias was positively correlated with the actual bias exhibited by individual participants in all nine experiments [r range = 0.17 to 0.38; r average = 0.28 (95% CI = 0.24, 0.31)]; these correlations are high relative to correlations reported in bias blind spot papers comparing perceived and actual bias (e.g., r range = -0.25 to 0.14) (15). See SI Appendix, Fig. S17.

In all experiments, we predicted that participants would perceive the biasing influence of the irrelevant attribute to be greater in the "self-trained algorithm" than in the "self" condition. Replicating previous research on the bias blind spot, we also expected perceived bias to be greater in the "others" condition than in the "self" condition.
People See More of Their Biases in Algorithms
Experiments 1 and 2. In experiments 1 and 2, we tested whether people see more of their racial and age biases when those biases are reflected in the decisions of algorithms than in their own decisions. In a one-factor between-subjects design, we randomly assigned participants to one of four conditions: self, self-trained algorithm, first real self-trained algorithm, and second real self-trained algorithm. In experiment 1 (N = 801, Prolific Academic), participants evaluated Airbnb listings varying (randomly) on star ratings and (systematically) on whether the hosts had distinctively African American or White names (26). Participants in the self, self-trained algorithm, and first real self-trained algorithm conditions evaluated the renting likelihood of 10 Airbnb listings in phase A and 6 Airbnb listings in phase B. To eliminate the possible influence of suspicion, participants in the second real self-trained algorithm condition only evaluated the 10 Airbnb listings in phase A. Participants in the self and self-trained algorithm conditions next saw a summary judgment of their ratings in phase B that was attributed to self or algorithm, respectively. In the real self-trained algorithm conditions, the summary ratings for phase B that were shown to participants were predicted by a participant-level regression model trained on their phase A ratings using two regression coefficients: one coefficient was star rating (five-point scale); the other was the race associated with hosts (African American or White). After viewing phase B summary ratings, all participants rated the perceived influence of racial bias on those ratings. To validate the real self-trained algorithm, we estimated a mixed effect regression comparing its predicted phase B ratings with all ratings made by participants who completed phase B, which revealed a strong average correlation (β = 0.75, t = 44.90, P < 0.001). In experiment 2 (N = 800, Prolific Academic), participants evaluated Uber drivers varying (randomly) on star ratings and (systematically) on whether the driver was young or old. The design was identical to experiment 1. This time, participants evaluated the driving skills of different drivers in phase A and phase B. In the real self-trained algorithm conditions, the summary ratings for phase B were predicted by a participant-level regression model trained on participants' phase A ratings using star rating and driver age (young or old) as coefficients. After viewing phase B summary ratings, all participants rated the perceived influence of age bias on those ratings. To validate the real self-trained algorithm, we estimated a mixed effect regression comparing its predicted phase B ratings with all ratings made by participants who completed phase B, which revealed a strong average correlation (β = 0.92, t = 80.56, P < 0.001). In both experiments, we only used deception in the self-trained algorithm conditions.
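The participant-level model described here is ordinary least squares with an intercept and two predictors. A minimal reproduction of the idea with numpy; the data layout and column order are assumed for illustration, not taken from the authors' code.

```python
import numpy as np

def fit_and_predict(phase_a, phase_b):
    """phase_a: rows [star_rating, race_dummy, rating];
    phase_b: rows [star_rating, race_dummy].
    Fits rating ~ intercept + star_rating + race_dummy on the 10
    phase A observations, then predicts the 6 phase B ratings."""
    Xa = np.column_stack([np.ones(len(phase_a)), phase_a[:, 0], phase_a[:, 1]])
    ya = phase_a[:, 2]
    beta, *_ = np.linalg.lstsq(Xa, ya, rcond=None)        # OLS coefficients
    Xb = np.column_stack([np.ones(len(phase_b)), phase_b[:, 0], phase_b[:, 1]])
    return Xb @ beta                                      # predicted ratings

# Illustrative data: 10 phase A listings (star rating 3.9-5, race dummy
# 1 = African American host name, rating 0-100) and 6 phase B listings.
rng = np.random.default_rng(0)
a = np.column_stack([rng.uniform(3.9, 5, 10), rng.integers(0, 2, 10),
                     rng.uniform(0, 100, 10)])
b = np.column_stack([rng.uniform(3.9, 5, 6), rng.integers(0, 2, 6)])
pred = fit_and_predict(a, b)
```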
In experiment 1, we regressed the perceived influence of racial bias on three dummies for the algorithm conditions, with the self condition as the reference category, while controlling for actual racial bias. As preregistered, participants perceived more racial bias (β = 0.89, t = 5.60, P < 0.001) when their ratings were attributed to an algorithm (self-trained algorithm: M = 3.20, SE = 0.12) than to themselves (self: M = 2.29, SE = 0.11; Fig. 1A). Participants also perceived more racial bias in the ratings of real algorithms trained on their ratings than in their own ratings for both the first (M = 3.42, SE = 0.13, β = 1.10, t = 6.73, P < 0.001) and second real self-trained algorithm conditions (M = 3.47, SE = 0.14, β = 1.18, t = 7.14, P < 0.001). By contrast, there was no difference in perceived racial bias between ratings made by participants that were attributed to an algorithm (self-trained algorithm) and ratings predicted by real self-trained algorithms (respectively, P = 0.21, P = 0.08), or between the real self-trained algorithm conditions (P = 0.62). Additional results and robustness checks are reported in SI Appendix, Tables S10-S13.

In experiment 2, we regressed the perceived influence of age bias on three dummies for the algorithm conditions, with the self condition as the reference category, while controlling for actual age bias. As preregistered, participants perceived more age bias (β = 1.05, t = 6.61, P < 0.001) when their ratings were attributed to an algorithm (self-trained algorithm: M = 3.88, SE = 0.11) than to themselves (self: M = 2.83, SE = 0.11; Fig. 1B). Participants also perceived more age bias in the ratings of real algorithms trained on their ratings than in their own ratings for both the first (M = 3.67, SE = 0.12, β = 0.82, t = 5.15, P < 0.001) and second real self-trained algorithm conditions (M = 3.82, SE = 0.12, β = 0.92, t = 5.79, P < 0.001). By contrast, there was no difference in perceived age bias between ratings made by participants that were attributed to an algorithm (self-trained algorithm) and ratings predicted by real self-trained algorithms (respectively, P = 0.16, P = 0.43), or between the real self-trained algorithm conditions (P = 0.53). Additional results and robustness checks are reported in SI Appendix, Tables S10-S13. Since perceived bias was similar between the self-trained algorithm and the real self-trained algorithm conditions in experiments 1 and 2, we used self-trained algorithm conditions in subsequent experiments so that the summary ratings of participants and algorithms were the same.
People See as Much of Their Biases in Algorithms as in Other People
Experiments 3 and 4. In experiments 3 and 4, we used a 2 (self, others) × 2 (participant, algorithm) between-subjects design. This design replicated our focal finding and tested whether we would replicate the bias blind spot, people seeing more bias in the decisions of others than in their own decisions. We recruited unique nationally representative online samples of US residents for each experiment from Prolific Academic. In experiment 3 (N = 797), each participant rated 18 Uber drivers (males and females) on driving skill in two phases of nine ratings (A and B). In experiment 4 (N = 775), each participant evaluated 18 Airbnb listings whose hosts had distinctively African American (9) or White (9) names, similar to experiment 1 (26). After rating all targets, participants saw a summary of target ratings in phase B. Then, participants rated the influence of gender or racial bias on those ratings (experiments 3 and 4, respectively).

In experiment 4, regressing the perceived influence of racial bias on the same predictors revealed the preregistered significant interaction (β = 0.82, t = 3.55, P < 0.001). Participants perceived more racial bias (β = 1.08, t = 6.62, P < 0.001; Fig. 2B) when listing ratings were attributed to an algorithm trained on their ratings (self-trained algorithm: M = 3.63, SE = 0.13) than to themselves (self: M = 2.47, SE = 0.12). By contrast, there was no difference in perceived bias (β = 0.26, t = 1.61, P = 0.109) whether listing ratings were attributed to an algorithm trained on other participants (other-trained algorithm: M = 3.89, SE = 0.12) or to other participants (others: M = 3.62, SE = 0.11). Consistent with classic bias blind spot findings, participants perceived more racial bias when listing ratings were attributed to other participants than to themselves (β = 1.12, t = 6.85, P < 0.001). Additional results and robustness checks are reported in SI Appendix, Tables S10-S13.
Why People See More of Their Biases in Algorithms
We tested whether cognitive and motivated drivers explain why people more readily perceive their biases in the decisions of algorithms than in their own decisions in experiments 5 and 6.
Experiment 5. In experiment 5, we tested whether the revelatory effect of algorithms is moderated by individual differences in the bias blind spot, which is due to differences in the cognitive processes used to assess bias for self and others. People see less bias in their decisions than in the decisions of others because they tend to introspectively look for biases in the process they used to make decisions (e.g., "I didn't think about gender when inviting speakers"). By contrast, because people lack introspective access to the decision processes of other people, they look for biases in the decisions made by other people (e.g., "All of their speakers are men"; ref. 15). People perceive decisions made by algorithms to be even more opaque (a "black box") than decisions made by other people (18, 19). Thus, differences in the tendency to exhibit the bias blind spot should moderate the perception of more bias in the decisions of algorithms than of self. Participants (N = 396, Prolific Academic) were randomly assigned between-subjects to a self or self-trained algorithm condition and completed the same ratings and bias assessments as in experiment 3. All participants then completed a scale measuring susceptibility to the bias blind spot (15). We regressed perceived gender bias on condition (0 for self, 1 for self-trained algorithm), bias blind spot scale score, and their interaction while controlling for actual gender bias. The preregistered significant interaction (β = 0.29, t = 2.12, P = 0.03) revealed that susceptibility to the bias blind spot increased the propensity to see more gender bias in ratings attributed to algorithm than to self. A floodlight analysis (27) revealed this difference was significant when bias blind spot scores were above 1.71 (SI Appendix, Fig. S18). Given the proximity of this threshold to the mean (M_BBS = 1.74, SE = 0.06), we present dichotomized scores in Fig. 3A. See SI Appendix, Tables S10-S13 for additional results and robustness checks.

Experiment 6. We tested for the influence of motivated reasoning by examining whether algorithms selectively reveal the influence of biasing attributes in experiment 6. People are motivated to be unprejudiced for intrinsic and extrinsic reasons (28). If algorithms are perceived like other people, however, people should be less threatened by and dismissive of bias in decisions attributed to algorithms than to self (23, 24). In a 2 (self, self-trained algorithm) × 2 (racial bias, star rating) between-subjects design, we manipulated whether participants (N = 803, Prolific Academic) reported the perceived influence of an attribute that would evoke a high or low motivation to respond without prejudice (28). Participants evaluated the likelihood of renting 18 Airbnb listings as in experiment 4. Half then reported the perceived influence of racial bias on target ratings made by self or self-trained algorithm (high motivation). Half were told that guests on the Airbnb platform are less likely to rent apartments from lower rated hosts than from higher rated hosts and reported the perceived influence of star ratings on target ratings made by self or algorithm (low motivation). We regressed perceived influence on self-trained algorithm (0 for self, 1 for self-trained algorithm), racial bias (0 for star rating, 1 for racial bias), and their interaction. The preregistered significant interaction revealed that algorithms selectively remove the bias blind spot (β = 1.55, t = 7.21, P < 0.001; Fig. 3B). Participants perceived more racial bias in ratings attributed to algorithms (β = 1.39, t = 9.17, P < 0.001; M = 3.74, SE = 0.13) than to themselves (M = 2.34, SE = 0.11). By contrast, participants perceived star ratings to have similarly influenced ratings attributed to algorithms (β = -0.16, t = -1.04, P = 0.297; M = 4.99, SE = 0.09) and to themselves (M = 5.14, SE = 0.11). See SI Appendix, Tables S10-S13 for additional results and robustness checks.
People Are More Likely to Correct Their Biases in Algorithms
Experiment 7. We examined whether attributing decisions to algorithms makes people more willing to correct decisions in experiment 7. Participants (N = 400, Prolific Academic) were assigned to a 2 (self, self-trained algorithm; between-subjects) × 2 (precorrection, postcorrection; within-subjects) mixed design. All participants evaluated 18 Uber drivers, varied systematically in facial attractiveness, in two phases. Participants then reported the perceived biasing influence of attractiveness on driver ratings attributed to themselves or to an algorithm trained on their ratings. Last, we allowed participants to correct driver ratings from phase B attributed to self or algorithm if they believed those ratings were biased. We computed a correction score: the average absolute difference in driver ratings before and after the opportunity for correction. As preregistered, participants corrected more when driver ratings were attributed to algorithms (β = 1.75, t = 3.28, P = 0.001; M = 3.77, SE = 0.46) than to themselves (M = 2.02, SE = 0.28). A similar difference in correction is observed when excluding outliers (β = 0.97, t = 4.03, P < 0.001; Fig. 4). Exploratory mediation analyses revealed that higher perceived influence of attractiveness bias predicted increased correction, which reduced actual attractiveness bias (SI Appendix, Figs. S20 and S21). See SI Appendix, Tables S10-S13 for additional results and robustness checks.
Robustness Checks
Generalization across Heterogeneity in Actual Bias. We calculated a measure of actual bias in individual participant evaluations in all experiments (e.g., the average difference in evaluations of male and female Uber drivers). Descriptive analyses show that the distribution of actual bias exhibited by individual participants was heterogeneous (SI Appendix, Fig. S16 and Table S9) and normally distributed around zero, except in experiment 7 and the supplementary experiments, where average bias was significantly greater than zero (all P's ≤ 0.002). About half of the participants in each sample exhibited bias in the direction described to participants (e.g., favoring Whites over African Americans, favoring younger people, favoring males over females, favoring more attractive people). Importantly, variation in actual bias exhibited by individual participants did not moderate their propensity to perceive more of their bias in the decisions of algorithms than of self in experiments 1 to 7 (SI Appendix, Table S12, Panel A). As an additional robustness check, among the participants who exhibited bias in the direction described (e.g., favoring Whites over African Americans, males over females, more attractive people), the conditional effect of algorithm (vs. self) on perceived influence was significant in all experiments (SI Appendix, Table S12, Panel B).
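A sketch of this per-participant actual-bias measure, assuming a long-format table of ratings; the column names are illustrative, not from the published materials.

```python
import pandas as pd

def actual_bias(df, attr="driver_gender", favored="male"):
    """df: one row per rating with columns 'participant', attr, 'rating'.
    Returns, per participant, the mean rating of the favored level minus
    the mean rating of the other level (positive = bias as described)."""
    means = df.groupby(["participant", attr])["rating"].mean().unstack(attr)
    other = [c for c in means.columns if c != favored][0]
    return means[favored] - means[other]

# Illustrative usage with made-up ratings.
df = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "driver_gender": ["male", "female"] * 4,
    "rating": [80, 70, 75, 72, 60, 65, 58, 64],
})
bias = actual_bias(df)  # positive for participant 1, negative for participant 2
```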
Generalization across Race and Gender. The race and gender of participants did not moderate the propensity to see more bias in algorithms than in self in any experiment. In addition, self-identified race and gender did not consistently predict actual bias across experiments (SI Appendix, Table S13). Black participants were less likely to exhibit a bias favoring Whites over African Americans in experiment 4, for instance, but were no less likely in experiment 6.

Reflection of True Beliefs. In supplementary experiment A, half of the participants were given a financial incentive (29) that encouraged them to reveal their true beliefs and discouraged strategic responding (30). In a 2 (self, real self-trained algorithm) × 2 (incentive, control) between-subjects design, participants (N = 800, Prolific Academic) evaluated the trustworthiness of attractive and unattractive Uber drivers. This experiment was modeled on experiments 1 and 2, using the second real self-trained algorithm condition in which participants only evaluated 10 drivers in phase A. To validate the real self-trained algorithm, we estimated a mixed effect regression comparing its predicted phase B ratings with all ratings made by participants who completed phase B, which revealed a strong average correlation (β = 0.86, t = 54.76, P < 0.001). We regressed perceived influence on algorithm (-½ for self, ½ for real self-trained algorithm), incentive (-½ for control, ½ for incentive), and their interaction, while controlling for actual attractiveness bias (mean centered). There was a marginally significant main effect of incentive (β = 0.21, t = 1.79, P = 0.07) such that participants perceived more attractiveness bias in the incentive than in the control conditions. Exploratory analyses revealed that the increase in perceived attractiveness bias from control to incentive was marginally significant in the self condition (β = 0.31, t = 1.85, P = 0.07) but not significant in the real self-trained algorithm condition (β = 0.11, t = 0.68, P = 0.50; see SI Appendix, Fig. S22A). The interaction of algorithm and incentive was not significant (β = -0.20, t = -0.83, P = 0.41). Incentives may have increased perceived bias in self but did not moderate the propensity to see more bias in algorithms or the magnitude of the difference between self and algorithms. As predicted, participants perceived more attractiveness bias in ratings attributed to algorithms than to themselves (β = 0.91, t = 7.61, P < 0.001) both in the control (M = 4.19, SE = 0.12 vs. M = 3.20, SE = 0.13; β = 1.01, t = 5.97, P < 0.001) and incentive conditions (M = 4.38, SE = 0.12 vs. M = 3.45, SE = 0.14; β = 0.81, t = 4.78, P < 0.001). See SI Appendix, Tables S10-S13 for additional results and robustness checks.

In supplementary experiment B, we tested whether incentivized ratings reflect true beliefs, not confusion about incentives, by using a simpler incentive offered to all participants and including a comprehension check. The check revealed that 76% of the participants understood the incentive. We regressed the perceived influence of attractiveness bias on ratings made by self or a real self-trained algorithm (0 for self, 1 for real self-trained algorithm), while controlling for actual attractiveness bias (mean centered). Participants perceived more attractiveness bias in ratings made by the algorithm than by self whether we included all participants in the analysis (M = 4.20, SE = 0.11 vs. M = 3.55, SE = 0.11; β = 0.63, t = 4.35, P < 0.001) or, as preregistered, only included participants who passed the comprehension check (M = 4.22, SE = 0.12 vs. M = 3.54, SE = 0.13; β = 0.63, t = 3.77, P < 0.001; see SI Appendix, Fig. S22B). In summary, participants appear to truly perceive more bias in decisions made by algorithms than by themselves. See SI Appendix, Tables S10-S13 for additional results and robustness checks.
Discussion
Algorithms incorporate biases in the human decisions that comprise their training data, which can amplify and codify discrimination (1-5, 10). Our findings suggest that auditing algorithms for bias can be beneficial not only for reducing algorithmic bias but also for revealing biases in the human decisions on which they are trained. We find algorithms to be exempt from the bias blind spot that selectively inhibits people from recognizing and correcting their biased and prejudiced decision-making. In nine experiments, participants saw more of their biases in algorithms trained on their decisions than in their own decisions (average d = 0.51; see SI Appendix, Fig. S15). For people and organizations motivated to reduce bias, recognizing bias is a crucial first step (31-34). Our findings present initial evidence that algorithms can serve as mirrors that reveal and debias human decision-making.
Materials and Methods
The present research involved no more than minimal risks, and all study participants were 18 y of age or older. All experiments were approved for use with human participants by the Institutional Review Board on the Charles River Campus at Boston University (protocol 3632E) or the Institutional Review Board at Erasmus University (ETH2324-0356); informed consent was obtained from all participants. All manipulations and measures are reported. Experiments were conducted on the Qualtrics survey platform. Condition assignments were random in all experiments, with randomization administered by Qualtrics. Preregistrations, surveys, raw data, and reproducible R code are available on the Open Science Framework at https://osf.io/yvjt3/?view_only=6d6abf4759ea4bab9588d70c7b77c0d0.

Following a general rule of thumb, we sought to obtain a minimum of 200 participants per experimental condition. For each study, we requested the preregistered sample size on the online platform (i.e., N = 800, N = 600, or N = 400). The final sample was determined by the actual number of participants who signed up for each online study, which was slightly higher or lower than the preregistered sample size. We preregistered N = 800 for experiments 1, 2, 3, 4, and 6 and supplementary experiment A. The numbers of complete responses were, respectively, N = 801 (N = 801 total sample, that is, 0% dropout), N = 800 (N = 800 total sample, that is, 0% dropout), 797 (N = 813 total sample, that is, 2.0% dropout), 775 (N = 775 total sample, that is, 0% dropout), 803 (N = 803 total sample, that is, 0% dropout), and 800 (N = 804 total sample, that is, 0.5% dropout). We preregistered N = 400 for experiments 5 and 7. The numbers of complete responses were, respectively, N = 396 (N = 396 total sample, that is, 0% dropout) and 400 (N = 400 total sample, that is, 0% dropout). We preregistered N = 600 for supplementary experiment B; the number of complete responses was N = 603 (N = 603 total sample, that is, 0% dropout). The representative sampling for experiments 3 and 4 was performed by Prolific by matching the sample to the US population distribution by age, gender, and ethnicity. A balanced sample (i.e., even distributions of male and female participants) and a ≥98% approval rate were panel-related conditioning factors used in the other experiments.
Experiment 1. We recruited an online sample of 801 US residents (M_age = 35.9 y, 48% female) from Prolific Academic. Participants were randomly assigned to one of four conditions: self, self-trained algorithm, first real self-trained algorithm, and second real self-trained algorithm. Participants imagined they were looking for a one-bedroom apartment to rent for a weekend. We presented information about apartments with a description of the apartment and the name of the host, using distinctively African American and distinctively White names (26). The information about each listing also included two diagnostic attributes (i.e., star rating and number of reviews), with randomly generated values for each participant. These diagnostic attributes are typically provided on platforms such as Airbnb. Under the apartment description, we kept the number of reviews constant for each apartment (i.e., 100 or more), but for each apartment we generated a set of 10 random star ratings between 3.9 and 5. In other words, the star rating of an apartment varied randomly across participants. The full list of stimuli is available in SI Appendix.

Participants in the self, self-trained algorithm, and first real self-trained algorithm conditions reported their likelihood of renting 10 apartments in phase A and 6 apartments in phase B. Participants in the second real self-trained algorithm condition only reported their likelihood of renting 10 apartments in phase A. Participants rated their likelihood of renting each apartment on a 100-point analog slider scale with endpoints 0 (not at all likely) to 100 (very much likely). Importantly, we hid the slider values from participants: while they had a sense of low and high likelihoods, participants were unable to know their exact evaluation values. After evaluating the apartments, participants in the self condition moved directly to the dependent variable page, while participants in the algorithm conditions read additional information about an algorithm that was said to use "your own" evaluation data from phase A to predict the evaluations from phase B. The algorithm information is presented in SI Appendix, Fig. S1.
On the dependent variable page, we presented participants with a summary table including the six apartments from phase B, with African American and White host names grouped separately. Importantly, while participants in the self and self-trained algorithm conditions were presented with their own ratings from phase B, participants in the real self-trained algorithm conditions viewed the summary evaluations for phase B predicted by an individual-level regression model fit to the 10 observations from phase A with two independent variables (a dummy for African American or White name, and a continuous predictor for the star rating) and with renting likelihood as the dependent variable. In the first real self-trained algorithm condition, participants completed evaluations in phase B. In the second real self-trained algorithm condition, participants did not complete phase B. Below the summary evaluation table, we presented participants with a short statement about research on racial bias: "Research suggests that guests on the Airbnb platform are less likely to rent apartments from hosts with distinctly African American names than with distinctively White names." Finally, we measured our dependent variable of perceived influence with a single item, "to what extent do you believe that you (the algorithm) showed this tendency," on a seven-point Likert scale with 1 as not at all and 7 as very much, adapted from prior research on the bias blind spot (13, 15). Example evaluation pages are presented in SI Appendix, Figs. S8 and S9. Last, all participants reported age, gender, and ethnicity.

Experiment 2. We recruited an online sample of 800 US residents (M_age = 38.6 y, 49% female) from Prolific Academic. Participants were randomly assigned to one of four conditions: self, self-trained algorithm, first real self-trained algorithm, and second real self-trained algorithm. Participants imagined they would use a ride-sharing service and evaluated different drivers. We presented information about drivers, which involved a photo from the Chicago Face Database (35) and the two diagnostic attributes (i.e., star rating and number of reviews) identical to experiment 1. To create a young and an old version of each photo, we edited the photos with an AI tool (https://ailab.wondershare.com/tools/aging-filter.html). The design was similar to experiment 1: participants in the self, self-trained algorithm, and first real self-trained algorithm conditions rated the perceived driving skill of 10 drivers in phase A and six drivers in phase B, while participants in the second real self-trained algorithm condition only rated the perceived driving skill of 10 drivers in phase A. Participants rated drivers' driving skill on a 100-point analog slider scale with endpoints 0 (not at all skilled) to 100 (highly skilled). After evaluating the drivers, participants in the self condition moved directly to the dependent variable page, while participants in the algorithm conditions read additional information about an algorithm that was said to use "your own" evaluation data from phase A to predict the evaluations from phase B, similar to experiment 1. The algorithm information is presented in SI Appendix, Fig. S2.
On the dependent variable page, we presented participants with a summary table including six drivers from phase B, with young and old drivers grouped separately. Participants in the self and self-trained algorithm conditions were presented with their own ratings from phase B, whereas participants in the real self-trained algorithm conditions viewed the summary evaluations for phase B predicted by an individual-level regression model fitted on the 10 observations from phase A with two independent variables (a dummy for young or old driver, and a continuous predictor for the star rating) and with perceived driving skill as the dependent variable. Below the summary evaluation table, we presented participants with a short statement about research on age bias: "Research on age biases suggests that people show a tendency to associate younger people with more driving skill than older people." We asked them to examine their driving skill evaluations from phase B (the driving skill evaluations from phase B predicted by an algorithm trained on their evaluations) for this age bias and measured perceived influence with the same scale as in experiment 1. Example evaluation pages are presented in SI Appendix, Figs. S10 and S11. Last, all participants reported age, gender, and ethnicity.

Experiment 3. We recruited a nationally representative online sample of 797 US residents (M age = 45.78 y, 49% female) from Prolific Academic. Participants were randomly assigned to one of four conditions in a 2 (self, others) × 2 (participant, algorithm) between-subjects design. Participants imagined they would use a ride-sharing service and evaluated different drivers. Then, we presented them with information about 18 drivers (nine female and nine male) in two phases (i.e., phase A and phase B). The driver information involved a photo from the Chicago Face Database (35) and four diagnostic attributes (i.e., number of trips, star rating, experience on the platform, and brand of the car). We chose these diagnostic attributes because they are typically provided on ride-sharing platforms such as Uber. Under each photograph, we assigned each participant a random selection of attribute values. In other words, the same driver had different attribute values across participants, similar to experiments 1 and 2. Every attribute included 10 different values. The 10 values were randomly generated numbers between 1,000 and 3,000 for the number of trips; randomly generated numbers between 4.00 and 5.00 for the star rating; the experience on the platform ranged from 8 mo to 3.5 y; and the 10 car brands were selected from a list of the most popular cars commonly used by Uber drivers. The full list of stimuli is available in SI Appendix.
Participants rated every driver on perceived driving skill with a 100-point slider scale from 0 (not at all skilled) to 100 (very much skilled). After evaluating the 18 drivers in phases A and B, participants in the self and others conditions moved directly to the dependent variable page, while participants in the self-trained and other-trained algorithm conditions read additional information about an algorithm. In the self-trained algorithm condition, the algorithm was said to use "your own" evaluation data from phase A to predict the evaluations in phase B. In the other-trained algorithm condition, the algorithm was said to use evaluation data from phase A from "other participants of this study" to predict the evaluations in phase B. The algorithm information is presented in SI Appendix, Fig. S3.
On the dependent variable page, we presented participants with a summary table including the actual driving evaluation values for the nine drivers from phase B, ranked from highest to lowest. Participants in all conditions viewed their own driving evaluations from phase B; however, we provided different attributions across conditions. Participants in the self condition were informed that the table summarized their own evaluations. Participants in the others condition were informed that the table summarized the evaluations from other participants of the study. Participants in the self-trained algorithm condition were informed that the table summarized the evaluations predicted by the algorithm trained on their own data. Participants in the other-trained algorithm condition were informed that the table summarized the evaluations predicted by the algorithm trained on other participants' data. Below the summary evaluation table, we presented participants with a short statement about research on gender bias: "Research on gender biases suggests that people show a tendency to associate men with higher driving skills than women." Finally, we measured perceived influence with the same scale as in previous studies. Example evaluation pages are presented in SI Appendix, Figs. S12 and S13. Last, all participants reported age, gender, and ethnicity.

Experiment 4. We recruited a nationally representative online sample of 775 US residents (M age = 45.19 y, 50% female) from Prolific Academic. Participants were randomly assigned to one of four conditions in a 2 (self, others) × 2 (participant, algorithm) between-subjects design. Participants imagined they were looking for a one-bedroom apartment to rent for a weekend and evaluated the renting likelihood of apartments, similar to experiments 1 and 2. We presented information about 18 apartments in two phases (i.e., phase A and phase B). In this experiment, the information about apartments included a description of each listing and the name of its host, using distinctively African American and distinctively White names (26). The information about each listing also included four diagnostic attributes for each apartment (i.e., number of reviews, cleanliness star rating, communication star rating, and location star rating), with randomly generated values for each participant, similar to experiment 1. The list of attributes is presented in SI Appendix. The algorithm presentation and summary evaluation table were similar to experiment 3. Last, we informed participants about research on racial bias and measured its perceived influence with the same scale as in previous studies. Finally, participants reported age, gender, and ethnicity.
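The individual-level model described above for the real self-trained algorithm conditions can be sketched in a few lines. The following is a minimal illustration for Experiment 1's two predictors, assuming ordinary least squares and hypothetical slider values; the paper's exact estimation details and data are not shown in this excerpt.

```python
import numpy as np

# Hypothetical phase A training data for one participant:
# host_black: 1 if the host name is distinctively African American, 0 if White
# stars: star rating shown for the listing
# rent_likelihood: the participant's 0-100 slider response
host_black = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
stars = np.array([4.1, 4.7, 3.9, 4.5, 4.9, 4.2, 4.0, 4.8, 4.4, 4.6])
rent_likelihood = np.array([55, 80, 40, 75, 70, 65, 45, 85, 60, 78])

# Design matrix with an intercept, the race dummy, and the star rating.
X = np.column_stack([np.ones_like(stars), host_black, stars])

# Fit the two-predictor linear model by ordinary least squares.
beta, *_ = np.linalg.lstsq(X, rent_likelihood, rcond=None)

# Predict the six phase B evaluations from the phase B attributes.
phase_b = np.column_stack([
    np.ones(6),
    np.array([1, 1, 1, 0, 0, 0]),                  # host name dummy
    np.array([4.3, 4.6, 4.0, 4.3, 4.6, 4.0]),      # star ratings
])
predicted = phase_b @ beta
print(np.round(predicted, 1))  # the summary values shown to the participant
```

Because the model is trained only on one participant's own phase A responses, any race-linked pattern in its phase B predictions necessarily originates in that participant's own evaluations.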
Fig. 1. People see more of their biases in algorithms (Experiments 1 and 2). The violin plots represent the shape of the distribution of perceived influence by experimental condition. The dot represents the mean, and the error bars represent the 95% CI. Experiment 1 is presented in panel A (N = 801), and Experiment 2 is presented in panel B (N = 800).
Fig. 2. People see as much bias in algorithms as in other people (Experiments 3 and 4). ***P < 0.001, "n.s." nonsignificant. The violin plots represent the shape of the distribution of perceived influence by experimental condition. The dot represents the mean, and the error bars represent the 95% CI. Experiment 3 is presented in panel A (N = 797), and Experiment 4 is presented in panel B (N = 775).
Fig. 4. People are more likely to correct their biases in algorithms (Experiment 7). ***P < 0.001. The violin plots represent the shape of the distribution of perceived influence by experimental condition. The dot represents the mean, and the error bars represent the 95% CI. The dependent variable is correction, excluding outliers above three times the interquartile range (N = 376). For additional figures without outlier exclusions, see SI Appendix, Fig. S19. | 2024-04-12T06:17:31.271Z | 2024-04-10T00:00:00.000 | {
"year": 2024,
"sha1": "a9bb38f2d0eb1edecb8a0316a488e07cfe272db5",
"oa_license": "CCBYNCND",
"oa_url": "https://www.pnas.org/doi/pdf/10.1073/pnas.2317602121",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "b1e1003a23820a218414c42982c8612edf1870b6",
"s2fieldsofstudy": [
"Psychology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244890216 | pes2o/s2orc | v3-fos-license | Psychometric Properties of the Multidimensional Temperance Scale in Adolescents
Recent research has shown the relevance of measuring the virtue of temperance. The present study tested a multidimensional, second-order structure scale to assess temperance using a subscale of the Values in Action Inventory of Strengths for Youth (VIA-Youth). Scale properties were tested using data from a sample of 860 adolescents aged from 12 to 18 years old (M = 14.28 years, SD = 1.65). The sample was randomly split into two subsamples for model cross-validation. Using the first sample, we assessed scale dimensionality, measurement invariance, and discriminant and concurrent validity. The second sample was used for model cross-validation. Confirmatory factor analysis confirmed the fit of a second-order temperance virtue factor model, with the dimensions of forgiveness, modesty, prudence, and self-control. The results indicate scale measurement equivalence across gender and stage of adolescence (early vs. middle). Latent means difference tests showed significant differences in forgiveness, modesty, and self-regulation by gender, and in modesty according to adolescence stage. Moreover, the scale showed discriminant and concurrent validity. These findings indicate that this scale is helpful for assessing temperance in adolescents and support the value of temperance as a multidimensional, second-order construct.
Introduction
Virtues are central attributes that are highly appreciated in philosophy and religious theories worldwide, since they favor the optimal functioning of people [1,2]. Temperance, one of these identified virtues [3,4], contributes to a wide variety of positive consequences, such as individuals' well-being and the achievement of goals [5][6][7]. As a result, the interest among scholars in measuring this virtue has seen exponential growth in recent years [8][9][10][11].
Temperance involves regulating emotions, behavior, and motivation [2,12]. According to the literature [2,13], this virtue encompasses the strengths of modesty (avoiding flaunting and permitting personal accomplishments to speak for themselves), self-regulation (regulating behaviors and feelings), forgiveness (leaving aside anger or revenge towards the offender), and prudence (being cautious with individual decisions and avoiding actions one may regret). Some scholars have adopted the positive psychology approach [6,14] to research this virtue because the approach embraces the scientific study of positive human functioning and adaptive behaviors at all levels, such as personal, relational, and institutional [1,15].
Measures of Temperance
Temperance is recognized as a crucial trait related to adolescents' positive personal and academic outcomes [16][17][18][19]. The growing interest in studying virtues in adolescence has led to the widespread use of the Values in Action Inventory of Strengths for Youth (VIA-Youth) [20]. This measure includes a subscale of the virtue of temperance [21][22][23]; the subscale consists of four first-order factor measures that include forgiveness, modesty, prudence, and self-regulation [20]. However, research on the scale reveals several issues. First, its factor structure is inconsistent across studies: some studies reported it as a three-factor scale [9,23] and others [20,21] reported it as a four-factor, five-factor [24], or even six-factor scale [19]. In addition, a study conducted by Van Eeden et al. [22] showed no clustering of the strengths, contradicting the theory. Second, the evidence for second-order models is scarce [21,23]. Furthermore, most studies have conducted exploratory factor analyses or principal component analysis [9,19,24], using total strength scores instead of the items of the scale. As a result, the factor weights of each item were not reported. Finally, studies conducted within the Mexican context are limited, and have only focused on the adult population [25,26].
Measurement Invariance
Although the empirical evidence is still inconclusive, the current literature suggests that temperance differs by gender and age. Some studies [21,[27][28][29][30] report higher scores of temperance in males, whereas others [19,[31][32][33] report higher levels in females. Findings regarding age are similarly contradictory: some studies indicate that temperance positively correlates with age [10,31,32,34], others have found no association between these variables [35,36], and it has recently been found that temperance decreases during adolescence. However, these findings should be taken with caution since these studies did not report measurement invariance when examining group differences. Verifying measurement invariance is necessary to make meaningful comparisons between group means and to ensure that group differences are associated with the latent variables [37,38]. Therefore, it is essential to examine the measurement equivalence of the temperance scale by gender and stage of adolescence in order to allow meaningful group comparisons on the temperance dimensions of self-control, forgiveness, prudence, and modesty.
The Present Study
The measurement of temperance has some potential weaknesses, such as (a) the dearth of studies that have examined the fit to the data of a second-order factor model; (b) no study known by the authors has examined measurement invariance according to gender and stage of adolescence, although prior research suggests that temperance may differ by gender and age [27,32]; (c) the studies evaluating the discriminant and concurrent validity of temperance are scarce; (d) there is no study known by the authors that has examined the psychometric properties of a multidimensional temperance scale in Mexican adolescents. To attend to these gaps, in this study we proposed: (1) examining the dimensionality of a second-order model that displays four first-order factors (see Figure 1 and Table 1); (2) examining scale measurement invariance by gender and adolescence stage (early vs. middle); (3) comparing latent variable mean differences across groups, if scale measurement invariance is confirmed; (4) assessing discriminant validity by analyzing the relationships between each subscale; and (5) examining concurrent validity by testing the correlations between the dimensions of the temperance scale and bullying aggression (proactive and reactive).

To accomplish these purposes, we considered five hypotheses. Hypothesis 1 (internal structure): the indicators used to measure temperance reveal a second-order factor structure that contains four first-order factors (forgiveness, modesty, prudence, and self-control) that fit the data. Hypothesis 2 (measurement invariance): the scale shows robust invariance across gender and adolescence stages. Hypothesis 3 (latent means): studies are not conclusive, and no a priori hypothesis about gender and stage of adolescence differences was considered. Hypothesis 4 (discriminant validity): each subscale of the temperance scale discriminates between conceptually similar constructs. Hypothesis 5 (concurrent validity): the dimensions of the temperance scale have a negative relation with proactive and reactive bullying aggression.
Participants
Participants were students from 32 public secondary and 32 high schools from three cities in Sonora, Mexico. These schools typically serve students of low and middle socioeconomic status. The study sample was composed of 860 adolescent students, 406 (47.2%) males and 454 (52.8%) females, whose ages ranged from 12 to 18 years old; 430 (50%) early adolescents (M age = 12.79 years, SD = 0.07) and 430 (50%) middle adolescents (M age = 16.58 years, SD = 0.06). The sample was randomly split into two subsamples for model calibration (n = 430) and cross-validation (n = 430).
Temperance
A subscale of temperance virtue (TV) of the Values in Action Inventory of Strengths for Youth [20] (VIA-Youth; Spanish version) was used; temperance is a virtue that encompasses strengths that focus on controlling excesses. The scale includes four dimensions: forgiveness, which involves leaving aside resentment or revenge and a benevolent feeling towards the offender (4 items, e.g., "I am a forgiving person"); modesty, which implies avoiding flaunting and permitting personal accomplishments to provide the necessary information about oneself (4 items, e.g., "I never brag or flaunt my accomplishments"); prudence, which includes being careful with personal decisions and avoiding speaking or behaving in a way that may be regretted (4 items, e.g., "I think about the consequences of my behavior before I act"); and self-regulation, which involves the ability to regulate actions and emotions and resist temptations (4 items, e.g., "I can control my anger quite well"). Responses used a five-point Likert scale (0 = not like me at all to 4 = very much like me).
Procedure
First, the study received ethical clearance from the Ethical Research Committee of the Technological Institute of Sonora (authorization number: PROFAPI_2020_0018). Then, we gained authorization from school authorities to conduct the study. In a virtual meeting organized by the teachers, we informed the students' parents about the research purpose. A consent letter was then sent by email to parents to request their authorization for their children to respond to the questionnaires. Only 3% of parents declined their children's participation. Once approvals were gained, students were invited to participate in the study voluntarily. Data collection was carried out through online surveys. The estimated time to respond to the survey was about 20 to 30 min.
Data Analysis
We verified that missing data (less than 5%) were missing completely at random. We treated missing data using multiple imputation methods available in SPSS 25 (IBM Corp., Armonk, NY, USA). Descriptive statistics were run on the items (means, standard deviations, skewness, and kurtosis). Then, an unconditional random-effects model was calculated to examine the school dependency of temperance and bullying aggression. The results suggested that differences in temperance (Wald z statistic = 1.68, p = 0.092; intraclass coefficient ICC = 0.04) and aggression (Wald z statistic = 1.24, p = 0.214; ICC = 0.05) were not dependent on school [61,62]. Confirmatory factor analyses (CFA) were conducted using maximum likelihood estimation with Bollen-Stine bootstrapping and bias-corrected confidence intervals (500 replicates with 95% CI) in AMOS 25 (IBM Corp., Armonk, NY, USA). These estimators were chosen as the Mardia coefficient value was 9.47, which suggests multivariate non-normality. Bootstrapping is a robust procedure for dealing with non-normality in multivariate data [63][64][65].
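The school-dependency check described above can be reproduced with a standard mixed-model routine. Below is a minimal sketch in Python using statsmodels on simulated stand-in data (the variable names, sample layout, and values are assumptions for illustration; the study itself used SPSS):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the study data: one row per student with a school ID.
rng = np.random.default_rng(0)
school = np.repeat(np.arange(64), 14)              # 64 schools, ~14 students each
school_effect = rng.normal(0, 0.1, 64)[school]     # small between-school variance
df = pd.DataFrame({
    "school": school,
    "temperance": 2.5 + school_effect + rng.normal(0, 0.6, school.size),
})

# Unconditional (intercept-only) random-effects model with school as the grouping factor.
fit = smf.mixedlm("temperance ~ 1", df, groups=df["school"]).fit()

# Intraclass correlation: between-school variance over total variance.
between = fit.cov_re.iloc[0, 0]   # random-intercept variance
within = fit.scale                # residual variance
print(f"ICC = {between / (between + within):.2f}")  # near zero -> no school dependency
```

An ICC near zero, as reported in the study (0.04 and 0.05), indicates that responses cluster negligibly within schools, so single-level analyses are defensible.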
Dimensionality
In order to assess the dimensionality of the temperance scale, we first analyzed the goodness of fit of a four first-order factor model (Model A). After establishing the fit of the four first-order factor measurement model, we tested a model with these four factors as indicators of a second-order temperance dimension, to assess whether the first-order structure could be subsumed under a single second-order factor model. In estimating the models' global goodness of fit, we used the χ2 statistic with its associated probability and the Bollen-Stine bootstrap probability. Since χ2 and the Bollen-Stine bootstrap are sensitive to large samples [66][67][68], the standardized root mean square residual (SRMR), comparative fit index (CFI), Tucker-Lewis index (TLI), and root mean square error of approximation (RMSEA) with their confidence intervals were reported. The structural equation modeling (SEM) literature suggests that model fit is adequate when χ2 has p > 0.001, Bollen-Stine p > 0.05, CFI ≥ 0.95, and TLI ≥ 0.90. For the SRMR and RMSEA, a value ≤ 0.05 shows that the model fit is excellent, and a value ≤ 0.08 indicates an acceptable fit [38,69]. Differences in χ2 (∆χ2) and in the Bayesian information criterion (∆BIC) were utilized to compare models. In cases where the resulting difference in χ2 (∆χ2) is significant, the model with the lower χ2 has a better fit to the data [38,70]. Differences of BIC > 10 show distinctions in the models' fit to the data, and the model with the greater BIC has a poorer fit [38,71,72].
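For reference, the CFI, TLI and RMSEA cut-offs above can be computed directly from the model and baseline (independence) chi-square statistics using the standard maximum likelihood formulas. A minimal sketch follows; the chi-square values in the example call are illustrative, not taken from the article's tables:

```python
from math import sqrt

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Standard fit indices from model (m) and baseline (b) chi-squares and sample size n."""
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    rmsea = sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Illustrative values only.
cfi, tli, rmsea = fit_indices(chi2_m=120.5, df_m=98, chi2_b=2400.0, df_b=120, n=430)
print(f"CFI={cfi:.3f}, TLI={tli:.3f}, RMSEA={rmsea:.3f}")
```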
Measurement Invariance
Nested models were tested according to the procedure suggested in the literature [76,77]. We first tested a baseline configural model with the same number of factors in each group (configural invariance). When the baseline model fit each group, we tested the invariance of the factor loadings across groups (metric invariance). Once metric invariance was verified, we evaluated a model with the measurement intercepts constrained (scalar invariance). A difference in χ2 with an associated p > 0.001 suggests the measurement model is equivalent across groups [38,77]. However, the ∆χ2 statistic is sensitive to sample size [77,78]; thus, scholars have advocated using goodness-of-fit indexes such as differences in CFI (∆CFI) and differences in RMSEA (∆RMSEA). We followed the values proposed by scholars [77,79], who assert that differences greater than 0.01 in the CFI and 0.015 in the RMSEA exhibit a significant difference in model fit for the testing of invariance. In cases where the two procedures differed, we relied on the differences in CFI and RMSEA because of the larger sample used in this study [77][78][79]. If scalar invariance was confirmed, we calculated the groups' latent mean differences. For this, the means for the reference groups (males and early adolescents) were fixed. We used a z statistic to compare latent means [38,76].
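The decision rule applied at each invariance step can be stated compactly. Below is a minimal sketch using the ∆CFI and ∆RMSEA thresholds named in the text (the example values are illustrative):

```python
def invariance_supported(cfi_free, cfi_constrained, rmsea_free, rmsea_constrained):
    """Judge invariance from the change in fit when equality constraints are added.
    Thresholds follow the text: |dCFI| <= 0.01 and |dRMSEA| <= 0.015."""
    d_cfi = abs(cfi_free - cfi_constrained)
    d_rmsea = abs(rmsea_constrained - rmsea_free)
    return d_cfi <= 0.01 and d_rmsea <= 0.015

# e.g., a metric invariance step where constraining loadings barely changes fit:
print(invariance_supported(0.981, 0.980, 0.024, 0.024))  # True -> invariance holds
```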
Discriminant Validity
Discriminant validity confirms that the constructs are empirically unique [80,81]. Campbell [82] suggests that it ensures that a latent variable is "not correlated too highly with measures from which it is supposed to differ" (p. 6). Based on the literature, we assumed that discriminant validity is confirmed when the average variance extracted (AVE) for each factor is greater than the square of its correlation with the other scale factors [81,83].
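This Fornell-Larcker style check is easy to compute from standardized loadings. A minimal sketch follows; the loadings and the inter-factor correlation are illustrative values, not the article's estimates:

```python
import numpy as np

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    loadings = np.asarray(loadings)
    return float(np.mean(loadings ** 2))

# Illustrative standardized loadings for two factors and their correlation.
ave_forgiveness = ave([0.72, 0.68, 0.75, 0.70])
ave_modesty = ave([0.65, 0.71, 0.69, 0.74])
r = 0.45  # hypothetical correlation between the two factors

# Discriminant validity holds when each AVE exceeds the squared correlation.
print(ave_forgiveness > r ** 2 and ave_modesty > r ** 2)  # True in this example
```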
Concurrent Validity
Concurrent validity requires that the scale scores correlate in a hypothesized manner with other constructs measured simultaneously [84]. To test concurrent validity, correlations of the temperance dimensions with proactive and reactive bullying aggression were calculated. Values of r greater than 0.10 indicate small effects, r values between 0.20 and 0.29 reveal a medium effect, and r values greater than 0.30 suggest a large effect [85].
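These benchmarks translate directly into a labeling rule; a minimal sketch using the thresholds given in the text:

```python
def effect_size_label(r):
    """Label a correlation using the benchmarks cited in the text [85]."""
    r = abs(r)
    if r >= 0.30:
        return "large"
    if r >= 0.20:
        return "medium"
    if r > 0.10:
        return "small"
    return "negligible"

print(effect_size_label(-0.27))  # "medium" (the sign carries direction, not size)
```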
Model Cross-Validation
We used a cross-validation method to test the replicability of the model dimensionality obtained in the calibration sample (n = 430) in an independent sample of adolescents (n = 430). A multigroup analysis was used to assess the model's replicability in the independent sample. We compared the unconstrained model with a model that had factor loadings and variances/covariances fixed. Based on the SEM literature, we considered that factorial invariance was confirmed when ∆χ2 was not significant (p > 0.001), ∆CFI ≤ 0.01, and ∆RMSEA ≤ 0.05. The χ2 statistic is sensitive to large samples and departures from normality, so we relied on the ∆CFI and ∆RMSEA values when results were contradictory.
Descriptive Item Analysis
The collected responses suggested that adolescents exhibit a moderate level of temperance. Skewness and kurtosis values indicated normal univariate distributions for all items (see Table 2).
Dimensionality
The initial four first-order factor model (Model A) did not fit the data (see Table 3). Therefore, we improved the model's fit based on the analysis of factor loadings and modification indices. The literature suggests that the factor loading for an item should be 0.6 or higher for the item to be considered salient [67,74,86]. Based on this, item 1 ("I often stay mad at people even when they apologize"; standardized factor loading = 0.11), item 5 ("I am not a show-off"; standardized factor loading = 0.04), item 10 ("I often find myself doing things that I know I shouldn't be doing"; standardized factor loading = 0.42), and item 14 ("My temper often gets the best of me"; standardized factor loading = 0.17) were removed from the model. In addition, considering the modification indices (MI > 5) and theoretical considerations [38,74], we added three error covariances.
These changes resulted in a significant improvement in the fit of this model (see Table 4). The goodness-of-fit statistics suggest an acceptable fit of the four first-order factor model (Model B). We then compared the four first-order factor model (Model B) with a second-order model (Model C) that displayed four first-order factors. The fit to the data of the second-order factor model (Model C) was statistically better than that of the four first-order factor model, ∆χ2 = 11.25, df = 2, p < 0.001; ∆BIC = 11.25. Therefore, based on theoretical and empirical findings, which suggest that temperance is a virtue that comprises several strengths, we chose Model C, and the results described below are based on this model.
Measurement Invariance by Stage of Adolescence
The baseline model fit the data (configural invariance), χ2 = 225.16, df = 186, p = 0.026; Bollen-Stine bootstrapping p = 0.052; SRMR = 0.05; TLI = 0.98; CFI = 0.98; RMSEA = 0.024, 90% CI (0.009, 0.034), supporting the equivalence of the second-order factor structure of temperance across the early and middle adolescent groups. Then, we assessed the invariance of all factor loadings (metric invariance). The model with the factor loadings constrained fit the data adequately based on the criteria of the χ2 differences and changes in CFI and RMSEA values, ∆χ2 = 11.82, df = 12, p = 0.46; ∆CFI = 0.001; ∆RMSEA = 0.001, which suggests that the factor loadings are consistent across the stages of adolescence. Finally, we constrained the intercepts (scalar invariance) in the model comparison. Our findings suggested that there are no important group differences in the intercepts, ∆χ2 = 44.52, df = 41, p = 0.327; ∆CFI = 0.002; ∆RMSEA = 0.002.
The goodness-of-fit statistic suggested that the measurement model was invariant across early and middle adolescent groups (see Table 4).
Latent Means Differences
To test latent mean differences, we fixed the males' latent means to zero. The analysis revealed significant mean differences by gender on three of the first-order factors. Females had higher scores on forgiveness and modesty than males, but lower scores on self-regulation. The gender difference in prudence was not statistically significant.
Regarding latent means differences by adolescence stage, we chose early adolescents as the reference group and estimated the latent mean of the middle adolescent group. The test revealed that differences in forgiveness, prudence, and self-control were not statistically significant. However, the mean difference in modesty was statistically significant (see Table 5). Middle adolescents had a higher score on modesty than early adolescents.
Concurrent Validity
The dimensions of temperance correlated as expected with proactive and reactive aggression (see Table 6). As anticipated, all the factors of temperance had a negative correlation to proactive and reactive bullying aggression. The effect size of the correlation between modesty and proactive and reactive aggression was small (r > 0.10), and the values of all other correlations indicated a medium (r > 0.20) or large (r > 0.30) effect size. Overall, these results suggest that correlations between temperance dimensions and both types of aggression have theoretical and practical implications [83], confirming the Temperance Scale's concurrent validity.
Cross-Validation Analysis
We cross-validated the data to address problems associated with the replicability of the model. The model was tested on an independent sample. Multigroup invariance analysis provided evidence of configural (χ2 = 60.21, df = 48, p = 0.111; SRMR = 0.06; CFI = 0.96; TLI = 0.95; RMSEA = 0.05, 90% CI [0.03, 0.07]), metric, and scalar invariance (see Table 7). This evidence allowed us to conclude that the measurement model is replicable in both samples.
Discussion
We analyzed the psychometric properties of a second-order multidimensional model of the temperance subscale of the VIA-Youth, according to Park and Peterson's [20] conceptualization. Given the gaps in the measurement of the construct, this study can add to the field, particularly in terms of temperance assessment. Overall, our results showed that a single second-order measurement model fit the data better, and we demonstrated its replicability through cross-validation. Moreover, the results supported measurement invariance, indicating that the measurement model is equivalent by gender and adolescence stage. This characteristic of the scale is crucial for a deeper understanding of the differences underlying temperance. Finally, we confirmed the scale's discriminant and concurrent validity.
Temperance as a Second-Order Factor
The results confirmed our second-order structure hypothesis, which comprises four first-order factors: forgiveness, prudence, modesty, and self-regulation. Furthermore, after comparing the first-order and second-order models, we found evidence suggesting that the second-order model fits the data better. These findings are aligned with previous research [21,23], indicating that temperance has a second-order structure that emerges from its four strengths. Accordingly, subsequent investigations should analyze the foundations and outcomes of temperance in terms of its four dimensions.
Measurement Invariance by Gender and Adolescence Stage
Our findings support the measurement equivalence of the Temperance Scale by gender and stage of adolescence. These results indicate that the scale items may be utilized to measure this construct in both genders and in early vs. middle adolescents. Therefore, unlike previous scales, this scale allows researchers to compare genders and stages of adolescence more fairly and meaningfully.
Latent mean differences indicate that females scored higher in forgiveness and modesty than males. These results are in alignment with previous research [29,33]. Furthermore, similarly to other studies [31,32], we found that males showed higher self-control than females. The data did not show differences in forgiveness, prudence, and self-control by adolescence stage. These findings are also congruent with past studies [32,35] that have found no relation between temperance and age. However, our findings reveal that middle adolescents scored higher in modesty than early adolescents. This evidence is consistent with that of Brown et al. [32], who found higher levels of modesty in older adolescents. Notwithstanding the present results, further studies should continue exploring gender and age differences to clarify the underpinnings of these discrepancies and their implications for adolescent development.
Discriminant Validity
The results show that each temperance subscale assesses a different dimension of the scale, which supports discriminant validity. In line with previous research, the study results indicate that the temperance dimensions each evaluate a different strength [20,21]. Our study provides empirical and theoretical evidence of the multidimensionality of temperance. Further studies should examine the variables associated with each first-order dimension of temperance and their consequences for adolescents' psycho-emotional development.
Concurrent Validity
In addition, the data provide evidence in favor of concurrent validity. In line with prior research [45,52,87], these results showed significant and negative associations between the strengths that comprise the temperance virtue and proactive and reactive aggression. Moreover, these correlation effect sizes suggest practical implications. Overall, these results indicate that temperance and its strengths may be important variables to consider for preventing peer aggression.
Theoretical and Practical Implications
The results of this study suggest that the theory of virtues and character strengths is a generative framework for studying positive behavior. Furthermore, our findings confirm that virtues comprise strengths that influence moral behavior [2,20]. Specifically, the study showed that temperance is a second-order factor that displays four first-order factor measures: forgiveness, modesty, prudence, and self-regulation. Similarly to other studies [21,23], these findings confirmed this factor structure. The study confirms the value of the original classification of character strengths in the VIA. This instrument will allow us to analyze the possible positive outcomes of temperance and to explore threshold effects and the possible exponential effects of combining two or more strengths [88]. In addition, our findings suggest that the strengths that comprise the temperance virtue are essential for reducing peer aggression and should contribute to the comprehension of the factors underpinning bullying. In this regard, temperance strengths are crucial for protecting people from excesses and encouraging positive social relations and adaptive behaviors [2], which could help to decrease peer aggression.
From a practical perspective, the present study highlights the value of a scale with robust psychometric properties to measure temperance in adolescents. The accurate measurement of temperance is critical for practitioners and schools in order to enhance adolescents' strengths rather than their weaknesses, thereby improving their mental health and fostering positive development [88]. Furthermore, the latent mean differences could support the development of differentiated tools to increase these strengths at different stages of adolescence and by gender, offering the opportunity to direct more appropriate strategies to encourage adolescents to engage in this virtue. Overall, theoretically and psychometrically robust temperance measures allow researchers to generate relevant findings regarding the antecedents and consequences of temperance in adolescence.
Limitations
Although this study provides a helpful scale for researchers, some limitations must be considered. First, data collection was carried out through self-reports; therefore, the students' responses could be influenced by social desirability [89]. Second, our sample consisted of adolescents from northwestern Mexico; therefore, a more diverse sample is desirable to generalize the results, recognizing that student responses may differ according to country or region. Third, cross-cultural studies are essential to assess the replicability of the measurement model in culturally diverse populations. Fourth, longitudinal designs are necessary to assess the extent to which temperance changes across childhood and adolescence over time and in terms of its relationship with bullying aggression.
Conclusions
The present research sheds light on the current understanding of temperance as a virtue and the strengths that comprise it. Our findings confirmed the value of the theoretical scheme of temperance [20] as a multifactorial second-order construct. Given the importance of having appropriate measures for evaluating constructs in positive psychology, this scale provides a robust psychometric instrument for assessing temperance in adolescents. We believe that this virtue is crucial for the positive development of youth. Therefore, we consider that future studies should explain the means through which temperance is built in school and family environments.
Additionally, our study provides a valuable instrument with evidence of robust validity for the evaluation of temperance as a multidimensional construct. This allows a better understanding and assessment of each of these strengths in particular, as well as helping to promote them in school interventions with adolescents.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-12-05T16:05:54.644Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "446459e3e371f27649a0f85851be169a118306ac",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/18/23/12727/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee24c1832876ffb66234168b54e233bfd7aafc46",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": []
} |
211556593 | pes2o/s2orc | v3-fos-license | B cells targeting therapy in the management of systemic lupus erythematosus
Abstract Systemic lupus erythematosus (SLE) is a chronic autoimmune disease which affects the majority of organs and systems. Traditional therapies do not lead to complete remission of disease but only relieve symptoms and inflammation. B cells are the most important effector cell type in the pathogenesis of SLE. Therefore, therapies targeting B cells and their related cytokines are a very important milestone for SLE treatment. Several biologics that modulate B cells, either depleting B cells or blocking B cell functions, have been developed and evaluated in clinical trials. Belimumab, a fully humanized monoclonal antibody that specifically binds B cell activating factor (BAFF), was the first of these agents approved for SLE treatment. In this review, we explore the currently available evidence on B cell targeted therapies in SLE, including agents that target B cell surface antigens (CD19, CD20, CD22), B cell survival factors (BAFF and a proliferation-inducing ligand, APRIL), cytokines (interleukin-1 and type 1 interferons) and co-stimulatory molecules (CD40 ligand). We highlight the mechanisms of action and the individual characteristics of these biologics, and present an update on the clinical trials that have evaluated their efficacy and safety. Finally, we describe some of the emerging and promising therapies for SLE treatment.
Introduction
Systemic lupus erythematosus (SLE) is a chronic autoimmune disease that can affect the majority of organs and systems. The clinical presentations include a great diversity of symptoms ranging from mild to severe disease, with periods of remission and flares. SLE has an unpredictable prognosis, depending mainly on the severity of major organ involvement, and an increased mortality, with an estimated standardized mortality ratio of 2.4-5.9 [1,2].
Traditional therapies including corticosteroids, hydroxychloroquine, azathioprine, cyclophosphamide, mycophenolate mofetil (MMF) or tacrolimus have proved effective for SLE patients, reducing organ damage and mortality. However, these agents have broadly immunosuppressive mechanisms of action, causing mild-to-moderate adverse effects [3,4]. Although many advances have been made in the treatment of SLE, resulting in an increase in short- and medium-term survival [5], the rate of premature deaths, long-term prognosis and damage accrual still remain poor, especially in patients with major organ involvement.
Hence, while prevention and minimization of damage accrual and the pursuit of stable disease control are the main targets in SLE, the utility of traditional agents is limited by their serious adverse effects, and it is necessary to search for more specific and less toxic therapies with at least similar efficacy.
The precise mechanisms underlying disease pathogenesis in SLE are not fully understood, though it is well established that defective immune regulatory mechanisms, such as impaired clearance of apoptotic cells and immune complexes, are important contributors to SLE development [6]. The loss of immune tolerance leads to activation of autoreactive T cells and B cells. These cells proliferate and differentiate into harmful pathogenic T cells and plasma cells producing pathogenic autoantibodies [7,8]. Recent insights into the pathogenesis of autoimmune diseases have led to the development of a novel class of agents which target specific dysregulated components of the immune system. Targeted immunomodulatory therapies in rheumatic diseases were first introduced in the late 1990s [9,10]. Autoreactive B cells are one of the effector cell types responsible for the perpetuation of inflammatory responses. Therefore, therapies targeting B cells and their related cytokines are a very important milestone for SLE treatment [11].
In 2011, the United States Food and Drug Administration (FDA) and the European Medicines Agency (EMA) approved belimumab, the first medication to be licensed exclusively for the treatment of SLE since 1957 [12]. Belimumab is a fully humanized monoclonal antibody designed to specifically target B cell activating factor (BAFF), preventing BAFF interaction with BAFF-related receptors and hence reducing the numbers of peripheral naïve, transitional and activated B cells [13].
In the last years, several targeted therapies have been developed and evaluated in clinical trials. However, the heterogeneity in the clinical manifestations of SLE has had a great impact on the success or failure of these agents in clinical trials. To standardize the assessment of drug efficacy in SLE clinical trials, several disease activity indexes (DAI) have been used, according to FDA recommendations. The Systemic Lupus Erythematosus Disease Activity Index (SLEDAI) includes 24 clinical and laboratory variables from 9 organ systems that are weighted by the type of manifestation but not by severity. The Safety of Estrogens in Lupus National Assessment study-SLEDAI (SELENA-SLEDAI) is a modified version of SLEDAI with additional scoring of some descriptors such as rash, mucosal ulcers and alopecia. The British Isles Lupus Assessment Group (BILAG) index is an organ-based transitional activity instrument which provides disease activity scores across eight organ systems on an ordinal scale (A to E) based on the physician's intention-to-treat premise.
A BILAG-Based Composite Lupus Assessment (BICLA) responder is a patient who shows BILAG improvement in all A and B scores, no more than one new BILAG B score, no worsening of the total SLEDAI score from baseline, less than 10% deterioration in the physician's global assessment, and no initiation of non-protocol treatment. The SLE Responder Index (SRI) is a composite outcome including a modification of SELENA-SLEDAI, BILAG and a 3-cm visual analog scale of physician-rated disease activity (PGA). The Cutaneous Lupus Erythematosus Disease Area and Severity Index (CLASI) is a tool for assessing disease activity and damage in cutaneous lupus erythematosus (CLE) [14].
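These composite indices lend themselves to simple rule-based scoring. As an illustration, below is a minimal sketch of the widely used SRI-4 responder definition (a ≥4-point SELENA-SLEDAI reduction, no new BILAG A and at most one new BILAG B score, and no meaningful PGA worsening, commonly operationalized as an increase < 0.3 on the 0-3 scale). The exact thresholds vary across individual trials, so treat these values as assumptions rather than a universal standard:

```python
def sri4_responder(sledai_change, new_bilag_a, new_bilag_b, pga_change):
    """SRI-4 response per the commonly used criteria (see assumptions above).

    sledai_change: follow-up minus baseline SELENA-SLEDAI (negative = improvement)
    new_bilag_a:   number of new BILAG A organ domain scores
    new_bilag_b:   number of new BILAG B organ domain scores
    pga_change:    follow-up minus baseline PGA on the 0-3 visual analog scale
    """
    return (sledai_change <= -4
            and new_bilag_a == 0
            and new_bilag_b <= 1
            and pga_change < 0.3)

# A patient dropping 6 SLEDAI points with one new BILAG B and a stable PGA responds:
print(sri4_responder(-6, 0, 1, 0.1))  # True
```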
This review describes the currently available evidence on B cell targeted therapies in SLE. Furthermore, we describe emerging targeted therapies with great potential in SLE management.
Roles of B cells in SLE
During physiological development, B cells are capable of differentiating into antibody-secreting plasmablasts and plasma cells. B cell development occurs in the fetal liver and adult bone marrow. Pre-B cells arise from progenitor (pro-B) cells and differentiate into immature B cells. Early B cell development is characterized by a molecular process that rearranges the heavy and light chains of the immunoglobulin (Ig) genes. This process is supported by stromal cell-derived interleukin (IL)-7 [15]. When the rearrangement is completed, B cells express a unique B-cell receptor (BCR) required for further development and survival [16].
Immature B cells exiting the bone marrow enter a transitional phase during which further maturation events occur to produce mature cells. B cells migrate to secondary lymphoid organs (spleen or lymph nodes) where they may encounter antigen through interactions with dendritic cells or macrophages. B cells can enter a germinal center or differentiate into short-lived plasma cells. Within the germinal center, B cells undergo clonal expansion, class switch recombination or somatic hypermutation, resulting in antibody-secreting cells, including plasmablasts and plasma cells, and memory cells [17]. B cells can also be divided into marginal zone B cell subsets, found in the marginal zone of the spleen, and follicular zone B cell subsets, circulating in the periphery. Memory cells generated in the germinal center persist and differentiate into plasma cells in a secondary immune response to provide rapid antibody production [18].
In SLE, plasmablasts and plasma cells produce a wide spectrum of autoantibodies which are considered highly relevant for disease pathogenesis, such as antibodies against double-stranded (ds) or single-stranded (ss) DNA, and against RNA-containing/binding nuclear antigens including the Smith (Sm) antigen, ribonucleoprotein, the Ro and La antigens, ribosomal P protein or phospholipids. Different types of autoantibodies are associated with different clinical presentations in SLE. Anti-dsDNA antibodies are among the most common autoantibodies in SLE patients, and their serum levels are associated with renal disease in SLE [19].
Besides the production of autoantibodies, B cells may also function as antigen-presenting cells (APCs), assisting in the activation of autoreactive T cells [20]. The antibody-independent role of B cells was highlighted by Chan et al. [21]. They created a mutant mouse (MRL/lpr) with B cells that only expressed a transgene encoding surface Ig, which did not permit the secretion of antibodies. These mice developed nephritis with cellular infiltration within the kidney and spontaneous T cell activation, indicating that B cells themselves, without soluble antibody production, exert a pathogenic role. Furthermore, autoantigen-primed B cells can activate autoreactive T cells in vivo [22]. Therefore, functional B cells, independent of autoantibody production, are essential for disease expression, either by contributing directly to local inflammation or by serving as APCs for antigen-specific autoreactive T cells.
Further, B cells have a role as cytokine producers, which contributes to modulating the magnitude of the immune response [23]. The profile of human B cell cytokine secretion depends on the stimuli the cells encounter through the BCR and CD40. Sequential stimulation of the BCR and CD40 in B cells leads to B cell proliferation and secretion of pro-inflammatory cytokines such as tumor necrosis factor (TNF)-α, lymphotoxin and IL-6, which may act as autocrine growth and differentiation factors, as well as serve to amplify the ongoing immune response. In contrast, CD40 stimulation alone induces a significant production of IL-10, which suppresses "inappropriate" immune responses. Therefore, cytokine secretion by B cells requires additional signaling beyond activation, which can be provided by the immune microenvironment and specific stages of B cell differentiation [24]. Many cytokine pathways have been implicated in the pathogenesis of SLE. Cytokines produced during immune activation have pleiotropic effects and can contribute positively or negatively to the expression or function of other cytokines. Owing to the heterogeneity of SLE manifestations, it is probable that the contribution of B cell-derived cytokines differs between individual patients [25].
B cells as therapeutic target in SLE
SLE is characterized by polyclonal B cell overreactivity, which might be related to intrinsic polyclonal B cell activation with disturbed activation thresholds, ineffective negative selection, lack of immunoregulatory functions, an overactive inflammatory environment or disturbed cytokine production by non-B immune cells. These mechanisms are not mutually exclusive and may operate to varying extents and at different times in SLE [26][27][28].
Therapies that selectively deplete B cells or reduce B cell-related cytokines have become attractive therapeutic approaches in SLE. However, an important issue raised is the long-term safety of B cell depleting agents, and it has been suggested that inhibition of B cells, rather than depletion, might be a preferred approach.
Targeting B cell-surface antigens
3.1.1. Anti-CD19 targeting therapies

CD19 is a transmembrane glycoprotein of the Ig superfamily which is expressed throughout B cell development, from pre-B cells to plasma cells [40]. CD19 acts as an adaptor protein to recruit cytoplasmic signaling proteins to the membrane. It also works within the CD19/CD21 complex as a positive co-receptor of the BCR, enhancing BCR signaling in response to T cell-dependent, complement-tagged antigens [41,42]. In contrast, Fcγ receptor (FcγR) IIb, the only Fc receptor present on B cells, is a negative co-receptor for the BCR. Dysfunction of FcγRIIb contributes to excessive B cell responses and autoimmunity. Mice on the non-autoimmune C57BL/6 background develop spontaneous lupus when the gene for FcγRIIb is deleted [43]. Further, B cell expression of FcγRIIb is abnormally low in SLE, leading to inadequate suppression of autoantigen-mediated BCR activation [44]. Therefore, FcγRIIb represents a novel therapeutic target in SLE. The pharmacological co-engagement of FcγRIIb and the BCR complex using recombinant antibodies is one approach to induce B cell suppression. There are two agents targeting CD19: XmAb5871, a bispecific antibody targeting CD19 and FcγRIIb which functions by inhibiting, and thereby preserving, B cells; and CD19 chimeric antigen receptors (CARs), engineered receptors on T cells that target CD19 and lead to B cell depletion.
A bispecific antibody, XmAb5871, also known as obexelimab, was designed to bind FcγRIIb with its Fc domain and CD19 with a humanized Fv region. The Fc domain was engineered to have a >400-fold increase in affinity for FcγRIIb compared to native IgG1 Fc [45]. XmAb5871 suppresses B cell activation and also inhibits B cell proliferation induced by other B cell activation signals such as IL-4, BAFF and lipopolysaccharide (LPS). In addition, co-engagement of the BCR and FcγRIIb can also inhibit APC function, T cell stimulation, and toll-like receptor (TLR) 9-mediated signaling [29,46]. The XmAb5871 Fc domain targets only FcγRIIb on B cells and does not bind to the activating receptor FcγRIII on natural killer (NK) cells; hence B cell depletion through antibody-dependent cell-mediated cytotoxicity (ADCC), the common anti-CD19 pathway, would not occur in patients treated with XmAb5871 [20]. Therefore, treatment with XmAb5871 might be a good approach in SLE.
To date, there is only one randomized, double-blind, placebo-controlled phase II trial of XmAb5871 currently ongoing (NCT02725515), aiming to determine the ability of XmAb5871 to maintain the SLE disease activity improvement achieved by a brief course of steroid therapy.
An emerging immunotherapy approach known as CAR-T cell therapy uses engineered receptors on the T cell surface that enable T cells to target B cells. The process involves adoptive cell transfer (ACT), in which T cells are collected from patients and re-engineered to express CARs. The CAR-T cells are then infused back into the patient, where they recognize and attack cells that carry the targeted antigen on their surface [47].
Engineered T cell receptors which target CD19+ B cells, anti-CD19 CARs, are able to deplete CD19+ B cells. Further, chimeric autoantibody receptors (CAARs) have been successfully cloned into T cells, leading to the selective killing of autoreactive B cells while sparing healthy B cells [48]. A CAAR bearing the extracellular domain of desmoglein (Dsg) 3 was successfully used in a pemphigus vulgaris mouse model to engineer T cells to kill autoimmune B cells. Dsg3 CAAR-T cells did not lyse human Dsg3-expressing keratinocytes, suggesting that Dsg3 CAAR-T cells cause no toxicity to non-autoreactive cells. Therefore, CAAR-T cells could be developed as a targeted approach in SLE while avoiding general immunosuppression.
In 2017, two anti-CD19 CAR-T cell therapies were approved by the FDA, one for the treatment of acute lymphoblastic leukemia (ALL) in children, and another for adults with advanced lymphomas. To date, there is only one ongoing clinical trial, an open-label, uncontrolled, single-arm phase I pilot study in CD19+ SLE patients in China (NCT03030976).
CD19 targeting therapies are still under development in SLE and their effects are yet to be known.
Anti-CD20 targeting therapies
CD20 is an activated-glycosylated phosphoprotein expressed only on B cells, from pre-B cells to memory B cells; it is expressed neither by pro-B cells nor by antibody-producing plasma cells or T cells. Consequently, targeting CD20 spares the B cell precursors (pro-B cells) and plasma cells, removing the autoreactive memory B cells and allowing reconstitution of the population of naïve transitional B cells [49].
Three agents which target CD20 have been investigated in SLE: rituximab, a chimeric antibody which depletes B cells via antibody-dependent cell-mediated cytotoxicity (ADCC), complement-mediated cytotoxicity (CDC), apoptosis and antibody-dependent phagocytosis (ADP); ocrelizumab, a humanized anti-CD20 monoclonal antibody; and the newly developed glycoengineered anti-CD20 antibody obinutuzumab, with strong affinity for NK cells.
Rituximab is an anti-CD20 IgG1 type 1 chimeric monoclonal antibody that specifically targets the CD20 molecule on the surface of B cells and destroys them. It was first approved by the FDA for the treatment of B-cell lymphomas [50]. The Fab domain of rituximab binds to the CD20 antigen while the Fc domain recruits immune effector cells such as monocytes, macrophages, neutrophils and NK cells, resulting in B cell death. Rituximab-opsonized B cells are attacked and killed by four major independent mechanisms: CDC, ADCC, direct induction of apoptosis, and ADP [51,52]. In addition, rituximab-bound B cells may act as decoy immune complexes that efficiently divert monocytes or macrophages away from tissue-associated immune complexes, thereby reducing inflammation and tissue damage.
Numerous open-label studies using rituximab in SLE patients have reported favorable results. A systematic review of the use of rituximab in 188 patients from 35 articles (9 uncontrolled studies and 26 case reports) revealed that 91% of patients had a significant improvement in at least one systemic SLE manifestation. Adverse events, most frequently infections, were reported in 23% of patients. The global analysis of all cases supported the use of rituximab in severe, refractory SLE cases [30]. Moreover, a systematic analysis of reports documenting outcomes of rituximab treatment for refractory lupus nephritis suggests that rituximab effectively induces remission of lupus nephritis in patients who have not achieved remission with standard therapies [53].
Further, the European registries have reported good efficacy and safety of rituximab in refractory renal and non-renal SLE manifestations with a high response rate at 6 and 12 months [54][55][56][57].
Two randomized, double-blind, placebo-controlled trials evaluating the efficacy and safety of rituximab versus placebo in SLE patients have been completed. The Exploratory Phase II/III SLE Evaluation of Rituximab (EXPLORER, NCT00137969) was performed in patients with moderately to severely active non-renal SLE receiving baseline immunosuppressive drugs. Despite significant improvements in anti-dsDNA antibody titers and complement levels, there was no significant difference in clinical response, measured as BILAG scores, between the two treatment arms [58]. Similar results were obtained from the Lupus Nephritis Assessment with Rituximab (LUNAR, NCT00282347), a phase III, randomized, controlled trial in patients with active proliferative lupus nephritis receiving steroids and MMF [59].
There were no statistically significant differences in the numbers of adverse events and serious adverse events between the rituximab and control groups in either EXPLORER or LUNAR. Several reasons might have contributed to the disappointing outcomes of these two trials, including the background treatment with immunosuppressive agents, the high dose of corticosteroids, and the relatively short duration of the trials.
A phase III, open-label, randomized controlled trial of rituximab and MMF without oral steroids in patients with lupus nephritis (RITUXILUP, NCT01773616) started in 2015 in the United Kingdom (UK) but was stopped for undisclosed reasons. The pilot study conducted prior to RITUXILUP had demonstrated the ability of the rituximab-based regimen to treat lupus nephritis in the absence of steroids [60].
One main concern with rituximab treatment in SLE patients is the production of human anti-chimeric antibodies (HACA). HACA titers correlate with poor B cell depletion. In addition, the development of HACA can decrease the drug effect and render subsequent courses of rituximab ineffective [61].
Rituximab can trigger a sequence of events that exacerbates the disease in some patients with SLE. Post-rituximab flares, also known as "BAFFling" effects, are characterized by high levels of anti-dsDNA antibodies associated with elevated circulating BAFF levels and a high proportion of plasmablasts in the B cell pool. BAFF perpetuates autoreactive B cells and also stimulates T follicular helper (TFH) cells [62]. CD20-negative cells of the B cell lineage, such as plasma cells, are not directly affected by rituximab, and these cells may contribute to the maintenance of disease in some patients.
To date, there is one phase II, randomized, controlled trial, the Rituximab Objective Outcome Measures Trial in SLE (ROOTS, NCT03054259), evaluating the efficacy of rituximab in SLE patients with skin disease and arthritis.
The humanized anti-CD20 monoclonal antibody ocrelizumab was approved by the FDA in 2017 as a treatment for multiple sclerosis. Ocrelizumab has higher ADCC activity and lower CDC activity compared to rituximab [63]; therefore, it might modulate pathogenic responses more effectively. Ocrelizumab does not elicit HACA [63] and thus could be a more favorable agent than rituximab.
To date, there is only one completed phase III, randomized, double-blind, placebo-controlled trial of ocrelizumab in SLE, conducted in patients with class III/IV lupus nephritis who were receiving standard care of cyclophosphamide or MMF (BELONG, NCT00626197) [64]. The trial was designed to continue for more than 2 years but was stopped prematurely due to an imbalance in serious infections between ocrelizumab- and placebo-treated patients in the subgroup receiving background MMF. The analysis of 223 patients receiving 32 weeks of treatment indicated that the overall renal response rates with ocrelizumab were numerically, but not statistically significantly, superior to those of the placebo group.
Another phase III trial of ocrelizumab was initiated in non-renal SLE patients, BEGIN (NCT00539838), but it was terminated early and results were not reported.
Obinutuzumab is a glycoengineered, fully humanized anti-CD20 IgG1 type 2 monoclonal antibody that binds to an epitope on CD20 that may partially overlap with the epitope recognized by rituximab [65]. Obinutuzumab has greater affinity than rituximab for NK cells, leading to high ADCC activity [66,67]. Recent reports have shown that obinutuzumab is more efficient than rituximab at inducing B cell cytotoxicity in in vitro whole-blood assays from SLE and rheumatoid arthritis (RA) patients [68]. A phase II randomized, double-blind, controlled trial comparing the efficacy and safety of obinutuzumab plus MMF/mycophenolic acid (MPA) in proliferative lupus nephritis patients is currently ongoing (NCT02550652).
Finally, some case reports have shown the efficacy of ofatumumab, a fully humanized type 1 IgG1κ monoclonal anti-CD20 antibody, in patients with severe SLE or lupus nephritis [69,70]. Ofatumumab binds to CD20 with higher affinity and activates CDC and ADCC more effectively than rituximab in in vitro assays, and it causes prolonged B cell depletion in animal studies [71]. Clinical studies have reported the efficacy of this drug in the treatment of RA and relapsing-remitting multiple sclerosis [72][73][74]. To date, there is no clinical trial evaluating the efficacy of ofatumumab in SLE.
In summary, the strong depleting effects of rituximab and its propensity to elicit HACA might be the major causes of its failure in treating SLE. Although ocrelizumab is thought to deplete B cells to a lesser extent, the severe adverse events related to its stronger ADCC activity cannot be neglected. Obinutuzumab, with its strong affinity for NK cells, remains under investigation.
Anti-CD22 targeting therapies
CD22, also known as sialic acid-binding Ig-like lectin 2 (Siglec-2) or B-lymphocyte cell adhesion molecule (BL-CAM) [75], is a B-cell-restricted membrane co-receptor first expressed on the surface of pro-B cells. It is present throughout most of B cell development and is highly expressed on naïve B cells, but it is not expressed on plasmablasts or plasma cells [76].
The first CD22-specific treatment tested in clinical trials of SLE is epratuzumab, a monoclonal antibody that inhibits B cell proliferation, enhances the normal inhibitory function of CD22 on the BCR, and limits the extent of BCR signaling, thereby reducing B cell stimulation [77].
Two phase III, randomized, double-blind, placebo-controlled multicenter studies, ALLEVIATE 1 (NCT00111306) and ALLEVIATE 2 (NCT00383214), were carried out in patients with moderately to severely active SLE, but both trials were discontinued prematurely because of an interruption in drug supply. An exploratory pooled analysis of the data showed lower total BILAG scores in both epratuzumab arms (360 mg/m² and 720 mg/m²) versus placebo and a similar incidence of adverse events. This efficacy and safety profile of epratuzumab for SLE treatment supported its continued development [31]. These results are consistent with the first clinical trial of epratuzumab in SLE, a phase II, open-label, non-randomized, single-center study including 14 patients with moderately active SLE, in which epratuzumab was safe and well tolerated with few significant adverse events [78].
Promising results were published from the phase IIb randomized, double-blind, placebo-controlled study EMBLEM (NCT00624351) [79]. The study evaluated several doses of epratuzumab in 227 moderately to severely active SLE patients using a composite endpoint, the BICLA. At 12 weeks, the responder rate in the placebo arm (21.1%) was lower than in the epratuzumab arm (45.9%), but the overall test of treatment effect was not statistically significant. An exploratory pairwise analysis of responder rates demonstrated statistically significant clinical improvements in patients receiving a cumulative dose of 2400 mg epratuzumab versus placebo. No decreases in Ig levels outside normal ranges were observed during EMBLEM. Unlike rituximab, epratuzumab only partially reduces B cell numbers; therefore, this agent may not be associated with decreases in Ig levels or an increased risk of infection.
An extension study of the long-term safety and efficacy of epratuzumab was conducted in patients enrolled in EMBLEM for up to 3.2 years. Epratuzumab was well tolerated, with sustained improvements in disease activity and health-related quality of life (HRQOL) as well as reductions in steroid doses [80]. Two phase III randomized, double-blind, placebo-controlled studies, EMBODY 1 (NCT01262365) and EMBODY 2 (NCT01261793), assessed the efficacy and safety of two 2400 mg cumulative-dose regimens of epratuzumab versus placebo in patients with moderately to severely active SLE. In both trials, the observed immunological effects of epratuzumab treatment were as expected, with reductions in the levels of Ig and CD22 and a 30-40% decrease in the number of B cells in peripheral blood. However, treatment with epratuzumab in combination with standard therapy did not achieve a statistically significant difference versus placebo plus standard therapy in the primary endpoint, the response rate at week 48 according to the BILAG-BICLA definition [81]. Several factors could have influenced the results of EMBODY 1/2. First, there was a high discontinuation rate: approximately one-third of patients discontinued the study prior to week 48 and were classified as non-responders. Another important factor is the corticosteroid dosage: about 40% of patients did not reduce their corticosteroid dosage in these trials, and another 40% increased their dosage or had missing data. These issues led to high placebo response rates at the endpoints. Despite the disappointing trial outcomes, another phase III extension study (EMBODY 4) in SLE has been completed and is undergoing analysis. A summary of clinical trials of therapies targeting B cell surface antigens is shown in Table 2.
In summary, epratuzumab has undergone several clinical trials, and the long-term study suggests its potential in SLE treatment.
Targeting B cell survival factors
BAFF, also known as B lymphocyte stimulator, and APRIL are two related members of the TNF ligand superfamily of proteins that promote B cell survival, differentiation, and maturation, and they play a key role in the development of autoimmunity, especially in SLE [82].
BAFF is mainly expressed by monocytes, macrophages, and activated T cells and can remain in a membrane-bound form or be released in a soluble form after cleavage by furin [83]. Only the processed, soluble form of BAFF is required for B cell development [84]. In contrast, APRIL normally exists only in a soluble form [85].
BAFF binds to three receptors: the BAFF receptor (BAFF-R), mainly expressed on immature B cells; the transmembrane activator and cyclophilin ligand interactor (TACI), expressed on memory B cells; and the B cell maturation antigen (BCMA), expressed on plasmablasts and plasma cells. APRIL forms heterotrimers with BAFF and enhances BAFF-mediated cell activation [86]. APRIL binds to TACI, BCMA, and proteoglycans.
Development and survival of immature and naïve B cells mostly depend on BAFF. On the other hand, APRIL tends to interact more avidly with antigen-experienced B cells than BAFF, binding to BCMA and promoting long-lived plasma cell survival [82,87]. BAFF can also promote T cell activation, proliferation, and differentiation. The balance between BCMA and BAFF-R signaling may control the development of TFH cells, indicating that BAFF/APRIL can also regulate autoimmunity via expansion of TFH cells [88]. Animal studies revealed that knockout of BAFF in lupus-prone mice reduces disease severity and mortality, whereas BAFF-transgenic mice develop severe lupus disease [89]. In SLE patients, serum concentrations of BAFF and APRIL are higher than in healthy individuals [90], and a positive correlation has been found between plasma BAFF levels and SLE disease activity. All these findings support the BAFF/APRIL system as a candidate for therapeutic targeting in SLE [91].
Selective blockers of BAFF and APRIL have been investigated in clinical trials for SLE. These agents inhibit autoreactive B cell activation and autoantibody production and consequently should ameliorate autoimmunity.
Belimumab is a fully humanized monoclonal antibody that specifically binds to soluble trimeric BAFF, preventing its interaction with receptors and thus inhibiting B cell maturation and survival. A recent report suggests that it also binds membrane-bound BAFF [92]. Two large, double-blind, multicenter, randomized, placebo-controlled phase III trials, BLISS-52 and BLISS-76, evaluated the efficacy of belimumab in patients with SLE receiving standard therapies (NCT00424476, NCT00410384). Both studies reported that belimumab plus standard therapy significantly improved the SRI response rate and reduced SLE disease activity and severe flares, with rates of adverse events similar to the placebo group [32,93]. A pooled analysis of these two trials indicated that belimumab has greater therapeutic benefit than standard therapy alone in patients with higher disease activity, anti-dsDNA positivity, low complement, or corticosteroid treatment at baseline [94]. Post hoc analysis indicates that belimumab treatment improved overall SLE disease activity, most prominently in the common musculoskeletal and mucocutaneous organ domains [95]. In patients with renal involvement, pooled subgroup analysis showed greater improvement in patients receiving belimumab than in those receiving placebo [96]. Belimumab was well tolerated, and there was no significant difference in the frequencies of serious adverse events between the belimumab and placebo groups [97].
Further, a phase III randomized, placebo-controlled study in patients with SLE conducted in China, Japan, and South Korea showed significantly improved disease activity and a reduction in prednisone use, with no new safety issues (NCT01345253) [98]. The phase III EMBRACE study, which evaluated the efficacy and safety of belimumab in Black patients, failed to meet its primary endpoint, the SRI response rate with modified SLEDAI-2000 (SLEDAI-2K) scoring for proteinuria at week 52, suggesting that belimumab may perform differently across races. However, the significant improvement observed in subgroups with high disease activity (HDA) supports belimumab treatment in Black SLE patients with HDA [99].
These results highlight the efficacy and tolerability of belimumab as a novel therapeutic agent for SLE. The FDA approved belimumab (Benlysta) for SLE in 2011; it was subsequently approved by the National Institute for Health and Care Excellence (NICE) in 2016 and by the Japanese Ministry of Health, Labour and Welfare in September 2017.
However, the efficacy of belimumab has not been evaluated in patients with major organ involvement, including severe active lupus nephritis or severe active central nervous system (CNS) lupus, and it has also not yet been studied in combination with other biologics or intravenous cyclophosphamide. To date, there are numerous ongoing clinical trials of belimumab in SLE patients (Table 3).
As previously mentioned, BAFFling effects occur after rituximab treatment. These effects are characterized by high levels of anti-dsDNA antibodies associated with elevated circulating BAFF levels and a high proportion of plasmablasts in the B cell pool [100]. BAFFling effects might be diminished or eliminated by using belimumab after rituximab treatment. To date, there are two ongoing studies evaluating the efficacy of co-administration of rituximab and belimumab: a phase II interventional trial of synergetic B cell immunomodulation in SLE (SYNBloSe, NCT02284984), and a phase III randomized, double-blind, placebo-controlled trial of belimumab administered in combination with rituximab in SLE patients (BLISS-BELIEVE, NCT03312907). A phase II study of rituximab plus cyclophosphamide followed by belimumab in lupus nephritis patients has been completed and is undergoing analysis (CALIBRATE, NCT02260934).
Other anti-BAFF agents are also being assessed in clinical trials in SLE patients.
Tabalumab is a human IgG4κ anti-BAFF monoclonal antibody that targets both membrane-bound and soluble BAFF [101]. Membrane-bound BAFF is at least 50-fold more active than soluble BAFF; thus, a greater clinical response might be achieved by inhibiting membrane-bound BAFF, or both forms. Remarkably, tabalumab underwent neither phase I nor phase II evaluation in SLE but moved directly into two separate phase III studies, ILLUMINATE-1 (NCT01196091) and ILLUMINATE-2 (NCT01205438) [102,103]. ILLUMINATE-1 did not meet the SRI-5 primary endpoint at week 52 [102]. ILLUMINATE-2 achieved the SRI-5 primary endpoint but failed to meet the multiplicity-controlled secondary endpoints, including time to first severe SLE flare on the SELENA-SLEDAI Flare Index, corticosteroid-sparing effects, and fatigue at week 52 [103]. The pharmaceutical company discontinued the development of this drug for SLE.
Blisibimod, also known as A-623 or AMG 623, is a human peptibody that binds to both the soluble and cell membrane-bound forms of BAFF [104]. A phase II, randomized, placebo-controlled trial of blisibimod in SLE, PEARL-SC (NCT01162681), confirmed its favorable safety and tolerability compared with placebo but failed to meet the primary endpoint, SRI-5 at week 24 [105]. In this study, significant increases from baseline in serum concentrations of complement C3 and C4 and significant reductions in the concentration of anti-dsDNA autoantibodies observed with blisibimod compared with placebo supported the continued development of blisibimod as a therapy for patients with SLE.
Recently, a phase III randomized, double-blind, placebo-controlled study of blisibimod in patients with active SLE, CHABLIS-SC1 (NCT01395745), failed to meet its primary endpoint of SRI-6. However, blisibimod successfully reduced steroid usage and proteinuria and improved biomarker responses, suggesting that it might have important clinical benefits [106]. Another phase III study of blisibimod in SLE patients with or without nephritis, CHABLIS 7.5, is still ongoing (NCT02514967). A long-term-safety PEARL-SC extension trial (NCT01305746) evaluating the effects of blisibimod on renal and inflammation biomarkers in SLE patients has been completed; it reported decreases in proteinuria, anti-dsDNA antibodies, the number of peripheral CD19+/CD20+ B cells, and serum levels of IgG and IgM, and an increased glomerular filtration rate (GFR) compared to the placebo group, with a similar safety profile [107].
Atacicept, previously referred to as TACI-Ig, is a fully human recombinant fusion protein containing the extracellular domain of the TACI receptor joined to the Fc domain of human IgG1. Atacicept binds to both BAFF and APRIL and inhibits activation of TACI-mediated signaling. A multicenter, phase Ia, double-blind, placebo-controlled trial of atacicept versus placebo in patients with mild to moderate SLE demonstrated that treatment with atacicept resulted in dose-dependent reductions in Ig levels and sustained, dose-related reductions in total B cells compared to placebo. There were no differences in the frequency or type of adverse events in patients treated with atacicept [108]. However, a phase II/III randomized, double-blind, placebo-controlled study of atacicept in lupus nephritis patients in combination with MMF and corticosteroids (NCT00573157) was prematurely terminated after the enrollment of six patients due to an unexpected decline in serum IgG and the occurrence of serious infections [109]. Moreover, a phase II/III randomized, double-blind, placebo-controlled trial, APRIL-SLE (NCT00624338), was designed to assess whether atacicept, administered together with a course of corticosteroids, could prevent flares in patients with moderate-to-severe SLE. Patients were randomized to 75 mg or 150 mg of atacicept or placebo. Two fatal infections in the 150 mg atacicept group led to the premature termination of this arm. There was no significant difference between the 75 mg atacicept group and the placebo group in the primary endpoint, the proportion of patients experiencing at least one BILAG A or B flare, nor in the secondary endpoint of time to first flare [33].
A 24-week, randomized, double-blind, placebo-controlled phase IIb study, ADDRESS II (NCT01972568), compared the efficacy and safety of two doses of atacicept (75 and 150 mg) with placebo in patients with active SLE receiving standard treatment. In predefined subpopulations with HDA at baseline and/or serologically active disease, statistically significant improvements in the SRI-4 and SRI-6 response rates were seen with atacicept versus placebo. The risks of serious adverse events and serious or severe infection were not increased with atacicept compared with placebo [110]. A long-term extension of ADDRESS II evaluated the long-term safety and tolerability of atacicept in SLE patients (NCT02070978) and was successfully extended for another 24 weeks. The results indicate that flare rates remained low in atacicept-treated patients [111].
A novel soluble, fully human recombinant fusion protein, RCT-18, binds and neutralizes the activity of BAFF and APRIL. This protein is constructed from the soluble extracellular BAFF/APRIL-binding domain of the TACI receptor and the Fc portion of human IgG. Compared to atacicept, RCT-18 retains the N-terminal fragment of extracellular TACI, which increases the binding capacity of TACI for BAFF [34]. Currently, a phase IIb randomized, double-blind, placebo-controlled trial to determine the dosing of RCT-18 in SLE patients is in progress (NCT02885610).
Overall, belimumab, tabalumab, and blisibimod target BAFF; however, tabalumab and blisibimod failed to achieve their primary endpoints. Atacicept and RCT-18 target both BAFF and APRIL; although atacicept reached the endpoint in some trials, it is not recommended because of fatal adverse effects, and RCT-18 is still under investigation. Table 4 summarizes the clinical trials targeting B cell survival factors in SLE patients.
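The ligand-receptor relationships and drug targets summarized above can be kept straight with a small lookup table. The following is a minimal, illustrative Python sketch (the data structure and function names are ours, not from any cited trial or library), encoding only the binding relationships stated in this section:

```python
# Ligand -> receptor binding map for the BAFF/APRIL system, as described above.
BINDING = {
    "BAFF": ["BAFF-R", "TACI", "BCMA"],          # BAFF-R on immature B cells
    "APRIL": ["TACI", "BCMA", "proteoglycans"],  # APRIL does not bind BAFF-R
}

# Ligands neutralized by each agent, per the trials discussed in this section.
TARGETS = {
    "belimumab": ["BAFF"],
    "tabalumab": ["BAFF"],
    "blisibimod": ["BAFF"],
    "atacicept": ["BAFF", "APRIL"],
    "RCT-18": ["BAFF", "APRIL"],
}

def receptor_axes_affected(drug: str) -> set:
    """Receptor axes whose ligand input is blocked when a drug neutralizes its ligands."""
    return {r for ligand in TARGETS[drug] for r in BINDING[ligand]}

print(sorted(receptor_axes_affected("belimumab")))  # ['BAFF-R', 'BCMA', 'TACI']
print(sorted(receptor_axes_affected("atacicept")))  # adds 'proteoglycans' via APRIL
```

The dual BAFF/APRIL blockers cover the same receptor axes as the pure BAFF blockers plus the APRIL-proteoglycan interaction, which is one way to read the broader immunosuppressive footprint of atacicept seen in the trials above.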
Targeting cytokines
Cytokines are secreted by different immune cells and modulate the activation and/or functions of several target cells in the immune system. Interferons and some interleukins (ILs) play an active role in the pathogenesis of SLE and can contribute significantly to the immune imbalance of the disease. Cytokine-based therapies could therefore indirectly target B cells in SLE patients.
IL-6 and its receptor
IL-6 is a pleiotropic cytokine produced mainly by macrophages and monocytes, but also by T and B cells. IL-6 plays an important role in immune regulation and inflammation [112].
The receptor for IL-6 consists of two functional proteins: an 80 kDa glycoprotein (gp) named IL-6R and gp130. IL-6R is the ligand-specific binding component, while gp130 is a shared receptor component responsible for transmitting the signals of IL-6-related cytokines.
Data from animal studies indicate an association between IL-6 and SLE progression, and results from human studies suggest that IL-6 plays a critical role in the B cell hyperactivity and immunopathology of SLE [112]. In SLE patients, the serum concentration of IL-6 is substantially elevated compared to healthy subjects [113]. Treatment with hydroxychloroquine dramatically decreases IL-6 levels in SLE patients [114]. Moreover, neutralizing antibodies against IL-6 inhibit autoimmune responses, including spontaneous polyclonal antibody production by SLE monocytes and lymphocytes in vitro [115]. Taken together, targeting IL-6 signaling may improve SLE by indirectly downregulating B cell differentiation into plasma cells and decreasing IL-6 levels.
Sirukumab, formerly called CNTO 136, is a human anti-IL-6 monoclonal antibody that binds to IL-6 with high affinity and inhibits IL-6-mediated signal transducer and activator of transcription 3 (STAT-3) phosphorylation, attenuating the biological activity of IL-6. Results from a phase I randomized, double-blind, placebo-controlled study of sirukumab in patients with CLE or SLE (NCT01702740) indicated that sirukumab infusions were generally well tolerated in patients with mild, stable, active disease [35]. Another phase II randomized, double-blind, placebo-controlled study evaluating the efficacy and safety of sirukumab in patients with class III or class IV lupus nephritis (NCT01273389) showed that the addition of sirukumab to the treatment regimen did not result in an overall improvement in proteinuria at week 24, and 47.6% of patients in the sirukumab group had at least one serious adverse event through week 40 [116]. There is no ongoing trial of sirukumab in SLE.
Tocilizumab is a humanized anti-IL-6R antibody that prevents the binding of IL-6 to both soluble and membrane-bound IL-6R. Only one phase I, open-label study of tocilizumab in SLE patients has been conducted (NCT00046774). The preliminary data showed significant improvements in disease activity after treatment with tocilizumab, but the increased risk of infection and neutropenia observed may limit its use in SLE [36].
Vobarilizumab, also known as ALX-0061, is an anti-IL-6R antibody with a longer half-life than tocilizumab, composed of two chains: one chain targets human serum albumin and is designed to prolong the molecule's half-life by up to 14 days; the other chain targets the soluble IL-6R. Ablynx, the developer of vobarilizumab, failed to meet the primary endpoints in a phase II randomized, double-blind, placebo-controlled trial in moderate to severe SLE patients (NCT02437890), and serious adverse events were reported in 2% of the vobarilizumab group [117], suggesting that it has limited potential in treating SLE.
In summary, sirukumab targets IL-6, while tocilizumab and vobarilizumab target IL-6R. The overall effect of these biologics is the inhibition of the terminal differentiation of B cells into plasma cells. Sirukumab did not improve proteinuria in lupus nephritis patients, whereas tocilizumab led to a significant improvement in disease activity in SLE.
Type 1 interferons
Type I interferons (IFN-I), and particularly IFN-α, are central mediators in the pathogenesis of SLE [118]. Cell signaling by all IFN-I, including IFN-α, IFN-β, IFN-ε, IFN-κ, and IFN-ω, is mediated by the type I IFN receptor (IFNAR). IFN-α is produced by B cells and plasmacytoid dendritic cells (pDCs) and participates in various pathogenic pathways in SLE [119]. IFN-α induces BAFF, which then indirectly supports B cell differentiation and Ig class switching to generate potentially pathogenic autoantibodies [120]. Natural IFN-α-producing cells, pDCs, have been found in the skin biopsies of CLE patients, with local production of IFN-α [121]. The presence of IFN-regulated genes correlates with anti-dsDNA antibodies and SLE clinical manifestations [122]. Moreover, the development of SLE has been reported in patients receiving IFN-α therapy for a variety of conditions [123]. Therefore, blockade of IFN-α would not only regulate B cells but also diminish its support for the generation of autoantibodies.
Rontalizumab is a humanized IgG1 anti-IFN-α monoclonal antibody that prevents signaling through IFNAR by neutralizing all known subtypes of human IFN-α, but it does not bind IFN-β or IFN-ω. A phase I randomized, double-blind, placebo-controlled trial confirmed its preliminary safety, pharmacokinetic profile, and pharmacodynamic effects in patients with mildly active SLE (NCT00541749) [37]. A multicenter, randomized, placebo-controlled phase II efficacy and safety trial of rontalizumab (ROSE; NCT00962832) did not meet the primary (BILAG-2004) or secondary (SRI) endpoints at week 24. However, a post hoc analysis suggested potential benefit in a small subset of patients with low baseline expression of IFN-regulated genes [124].
Sifalimumab, formerly MEDI-545, is a fully human IgG1κ monoclonal antibody that binds to IFN-α with high affinity and prevents IFN-α signaling through IFNAR. Sifalimumab inhibits most, but not all, IFN-α subtypes and does not inhibit the β or δ IFNs. Clinical trials in SLE patients have established the safety profile of sifalimumab and suggested favorable effects on clinical outcome measures [125]. IFN target neutralization was significantly lower in SLE patients with moderate to severe disease than in those with mild disease [125,126], probably reflecting an increased contribution of the type I β and δ IFNs to the target signature in patients with moderate to severe SLE. A phase IIb, randomized, double-blind, placebo-controlled study demonstrated greater efficacy with sifalimumab than placebo in patients with moderately to severely active SLE and an inadequate response to standard-of-care treatments (NCT01283139) [127]. The broad-based improvement measured in both SLE composite endpoints (SRI-4, BICLA, BILAG-based Composite Lupus Assessment) and individual organ systems (CLASI, joint counts) supports the evidence that IFN-α plays an important role in SLE pathogenesis. Three other completed phase II studies of sifalimumab in SLE patients (NCT00979654, NCT00657189, NCT01031836) reportedly met their primary and some secondary endpoints, but the treatment effects were modest, and the developer decided to discontinue its development in favor of another IFN-I inhibitor, anifrolumab.
Anifrolumab, also known as MEDI-546, is a fully human IgG1κ monoclonal antibody that binds to IFNAR, suppressing IFN-stimulated gene transcription [128]. Both rontalizumab and sifalimumab have specificity only for IFN-α, leaving other type I IFNs unaffected and able to bind IFNAR. Anifrolumab, on the other hand, can block the entirety of IFN-I signaling by interrupting the type I IFN autoamplification loop and proinflammatory cytokine induction [129]. It is therefore considered a promising therapeutic biologic for SLE, since chronic dysfunctional type I IFN signaling is one of the major pathogenic mechanisms of the disease [118]. A phase II randomized, double-blind, placebo-controlled study of anifrolumab in patients with moderate to severe SLE (NCT01438489) met the primary endpoint, SRI-4 at week 24, with a sustained reduction of oral corticosteroids [38]. To date, three phase III studies of anifrolumab in active SLE patients are ongoing (NCT02446899, NCT02446912, NCT02794285).
In summary, rontalizumab and sifalimumab, which target IFN-α, and anifrolumab, which targets IFNAR, have shown good responses and few adverse events in clinical trials. Blocking IFNAR is a promising and novel therapeutic approach for patients with SLE whose disease does not respond to currently available therapies.
Current clinical trials targeting cytokines are shown in Table 4.
CD40L
Co-stimulation pathways involve the mutual exchange of information and signaling and play an essential role in initiating, perpetuating, and attenuating the proinflammatory immune response. The co-stimulatory molecule CD40 and its ligand CD40L form the CD40-CD40L system, which has pleiotropic effects in a variety of cells and biological processes.
CD40L has been shown to be an important immune-inflammatory modulator, making it a credible candidate for pharmacological intervention. CD40L is essential for the growth, differentiation, resistance to apoptosis, and effector functions of B cells [130]. CD40L is widely expressed on naïve and activated CD4+ T cells and platelets [131]. CD40-CD40L engagement is essential for normal T cell/B cell functional interactions, CD8+ T cell responses, Ig class switching, and the induction of dendritic cell maturation [132][133][134]. Thus, targeting CD40L on T cells would ultimately diminish the function of B cells.
CD40L is reportedly overexpressed on T cells of female lupus patients [135,136], and the CD40-CD40L system plays an important role in the production of pathogenic autoantibodies and tissue injury in lupus nephritis [137]. These characteristics favor CD40L as a therapeutic target in SLE patients.
Two monoclonal antibodies against CD40L have been evaluated in SLE: ruplizumab and toralizumab. Ruplizumab (BG9588) is a humanized monoclonal anti-CD40L IgG1 antibody that blocks antigen-specific IgG responses in nonhuman primates (baboons and rhesus monkeys) to a variety of T-dependent antigens. Ruplizumab was analyzed in a phase II study in patients with proliferative lupus glomerulonephritis (NCT0001789), showing a reduction in proteinuria, hematuria, and anti-dsDNA titers and an increase in serum C3 levels. The study was prematurely terminated after two patients experienced myocardial infarction [137]. Toralizumab, another anti-CD40L IgG1 antibody, was also associated with thromboembolic events [138].
A new monoclonal antibody against CD40L, dapirolizumab pegol (CDP7657), has been developed. It is an anti-CD40L Fab' antibody fragment conjugated to polyethylene glycol (PEG). This molecule lacks an Fc fragment and therefore cannot bind the platelet receptor FcγRIIa, which limits the risk of increased platelet aggregation. In two phase I trials in SLE patients, no thromboembolic events were recorded and a good safety profile was reported [39,139]. One ongoing phase II study is evaluating the efficacy and safety of dapirolizumab pegol in moderate to severe SLE patients (NCT02804763) (Table 4).
In summary, the development of ruplizumab and toralizumab has been discontinued due to severe adverse events. Dapirolizumab has less effect on platelet aggregation, and no thromboembolic events have been reported; its potential for treating SLE is still under investigation. Figure 1 illustrates the current approaches to B cell-directed therapies in SLE.
New trends in targeted therapies
We have described targeted therapies against B cell surface antigens and B cell-related cytokines, all of which act outside the cell. Recently, researchers have started to focus on SLE treatments that target intracellular molecules. Bruton's tyrosine kinase (BTK), an important molecule in B cells, has been reported to have multiple pathogenic roles in SLE, including B cell differentiation and autoantibody production [140]. One BTK inhibitor, evobrutinib, has been reported to be a promising agent for treating chronic autoimmune disease in mice [141]. Further, BTK inhibition has proven effective in improving renal, cutaneous, and brain manifestations in a murine model of SLE [142]. BTK inhibitors are now undergoing investigation in phase I studies in SLE patients (NCT02537028, NCT03878303). SLE treatment has thus entered a new era of targeting molecules within the pathogenic cells.
Conclusion
SLE is a complex and clinically heterogeneous autoimmune disease involving many immunological pathways. SLE management, particularly of refractory disease, represents a major challenge for rheumatologists, and the development of specific targeted therapies has become a clinical necessity. Autoreactive B cells are among the key players implicated in the pathogenesis of SLE, and a wide range of biologics targeting B cells have been investigated. However, to date, only belimumab, a fully humanized monoclonal antibody that specifically binds soluble trimeric BAFF, has been approved as a treatment for adult SLE patients.
One of the most promising biologics for SLE is XmAb5871, an engineered antibody designed to bind FcγRIIb with its Fc domain and CD19 with a humanized Fv region. This bispecific antibody inhibits B cell function and proliferation without depleting B cells and has also been shown to have the potential to suppress APC function, T cell stimulation, and TLR9-mediated signaling. Although CAAR therapies are still under development, the strategy of re-engineered CAAR-T cells targeting autoantibody-generating B cells has great potential to treat SLE while avoiding general immunosuppression. The successful trials of anifrolumab in SLE suggest the benefits of targeting type I IFN signaling, one of the most important pathogenic pathways in SLE; hence, anifrolumab is also considered a novel therapeutic approach for SLE patients who do not respond to currently available therapy.
[Figure 1 legend: Current approaches to B cell-directed therapies in SLE. Targeting cytokines or co-stimulatory molecules indirectly modulates B cells: tocilizumab and vobarilizumab block IL-6R, while sirukumab blocks IL-6 directly; anifrolumab blocks IFNAR, while rontalizumab and sifalimumab have specificity for IFN-α but not IFN-ω or IFN-β, which also bind IFNAR; dapirolizumab blocks CD40L, disrupting CD40-CD40L interactions. APRIL: a proliferation-inducing ligand; BAFF: B cell activating factor; BAFF-R: BAFF receptor; BCMA: B cell maturation antigen; CARs: chimeric antigen receptors; IL-6: interleukin 6; IL-6R: interleukin 6 receptor; IFN: interferon; IFNAR: the type I IFN-α/β/ω receptor; TACI: transmembrane activator and CAML interactor.]
Although numerous open-label studies have reported success with rituximab in treating SLE, the failure of several trials of rituximab and other anti-CD20 biologics suggests that targeting CD20+ B cells may not be a viable option in SLE patients, despite its success in leukemias.
Finally, due to the heterogeneity of the SLE population, the development of personalized targeted therapies will be necessary.
Disclosure statement
No potential conflict of interest was reported by the authors.
"year": 2020,
"sha1": "cd50a35ad42a97777265df192be234f4af4ee454",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/25785826.2019.1698929?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "758cba14ed7e59a7c411046c8c670818d447b7b6",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Comparison of Chemical Composition, Quality, and Muscle Fiber Characteristics between Cull Sows and Commercial Pigs: The Relationship between Pork Quality Based on Muscle Fiber Characteristics
Abstract This study aims to compare the chemical composition, quality, and muscle fiber characteristics of cull sows and commercial pigs, investigating the effect of changes in muscle fiber characteristics on pork quality. The proximate composition, color, pH, water-holding capacity (drip loss and cooking loss), protein solubility, total collagen content, and muscle fiber characteristics of pork from cull sows (n=20) and commercial pigs (n=20) were compared. No significant differences were found between cull sows and commercial pigs in terms of proximate composition, drip loss, protein solubility, or total collagen content of their meat (p>0.05). However, cull sow pork exhibited a redder color and a higher pH (p<0.05). This appears to be the result of changes in muscle fiber number and area composition (p<0.05). Cull sow meat also displayed better water-holding capacity, as evidenced by a smaller cooking loss (p<0.05), which may be related to an increase in muscle fiber cross-sectional area (p<0.05). In conclusion, muscle fiber composition influences pork quality; cull sow pork retains more moisture when cooked, resulting in minimal physical loss during processing, and can offer greater processing suitability.
Introduction
Millions of sows are raised for breeding globally, and each carries, on average, three to five litters. However, if continuous production becomes impossible or breeding fails, sows are culled (Bergman et al., 2018; Rodriguez-Zas et al., 2003; Sindelar et al., 2003). Up to 50% (or over 3 million) of breeding sows on many farms are culled annually (Blair and Lowe, 2019; USDA, 2016). Currently, commercial pigs are distributed in the market and produce excellent-quality meat due to the control of feeding adjustments, age, and the rearing environment of these animals. Meat producers perceive that the quality of meat from cull sows will be lower than that of commercial pigs due to their breeding environment (Sindelar et al., 2003). In South Korea, meat from culled sows is mainly distributed to the processed-meat market rather than sold fresh due to its inappropriate appearance and dark red color (Hoa et al., 2020). However, little information is available regarding its meat quality and processing suitability. Therefore, research on this raw material will provide important information to consumers and processors.
The feeding system, age, and rearing environment have various effects on meat quality characteristics. Commercial pigs produced for meat are slaughtered at approximately 180 days, representing the point of maximum profit, and have an average body weight of about 120 kg. Cull sows are maintained for a longer period to produce litters; their average body weight must be maintained at 160-180 kg or more to allow adequate fetal growth and lactation (Aherne et al., 1999; Williams et al., 2005). In addition, cull sows experience feeding regimes different from those of commercial pigs, such as long-term feed intake, restricted feeding during pregnancy, and supplementary feed. Beyond their weight and age, all breeding practices for cull sows differ from those used for commercial pigs. This manifests as changes in the resultant pork quality (Huff-Lonergan and Lonergan, 2005; Mancini and Hunt, 2005). For example, older and heavier pigs develop darker, reddish flesh (Latorre et al., 2004). A higher pH and less drip loss are also encountered in pigs with weight gain (Virgili et al., 2003), and restricted feeding limits carcass and intramuscular fat accumulation, resulting in reduced tenderness and juiciness of pork products (Lebret et al., 2001). In addition, various factors such as sex, breed, amount of exercise, stress, and nutritional status can affect pork quality (Rosenvold and Anderson, 2003). Therefore, the quality characteristics and chemical composition of meat obtained from cull sows can be expected to differ substantially from those of pork from commercial pigs.
The nutritional, sensory, and technological characteristics of meat are important factors in determining its quality. Research into meat quality is important for the meat processing industry and also for consumers when selecting products. Therefore, an investigation of the chemical composition and quality characteristics of pork obtained from cull sows will provide helpful information in this regard, increase its value, and suggest utilization plans. In addition to research on the general quality of cull sow pork, it is crucial to investigate the fundamental factors that determine the standard of sows and elucidate their relationship with meat quality characteristics. Muscle fiber characteristics are related to meat grade (Jeong et al., 2010; Ryu and Kim, 2006), as muscle fibers form the skeletal musculature that constitutes 75%-92% of meat. The muscle fibers of skeletal muscles are generally classified into types I, IIA, IIX, and IIB, each displaying unique metabolic characteristics (Schiaffino and Reggiani, 1996). Muscle fiber type composition may change depending on various factors such as breed (Ryu and Kim, 2006), sex (Ozawa et al., 2000), age (Li et al., 2020), diet (Jeong et al., 2012), and muscle location (Hwang et al., 2010). In turn, the individual characteristics of muscle fiber types affect meat attributes such as color, water-holding capacity, and texture (Pette and Staron, 2000). Therefore, changes in the quality characteristics of sows can be explained through an investigation of the associated muscle fiber attributes; if sows display excellent quality characteristics, their muscle fibers can provide information to serve as a guideline for meat quality improvement.
The processed meat market has become as important as the fresh meat market due to the increasing needs and satisfaction of a growing consumer base. In general, non-preferred carcass parts such as hind legs are used for processed products rather than consumed as fresh meat (Vandendriessche, 2008). This is an attractive factor for processors, as it can lower the cost of meat products while simultaneously utilizing non-preferred body parts. Most cull sows are distributed to the processing market, and their hind legs are typically used (Sindelar et al., 2003). However, such meat product manufacturing using non-preferred cuts focuses on hard-to-consume parts, which are processed regardless of the functional characteristics of the raw meat (Troy and Kerry, 2010). Raw meat with poor technical characteristics, such as low water-holding capacity, can significantly affect product yield, nutrition, texture, and color during processing due to water loss (Oh et al., 2008). Therefore, investigating the quality characteristics of raw meat provides a means to confirm its suitability for processing.
In this study, we hypothesized that the environmental factors experienced by cull sows and commercial pigs prior to slaughter cause differences in their quality characteristics and muscle fiber composition, ultimately leading to a change in processing aptitude. Therefore, this study compared the quality characteristics, chemical composition, and muscle fiber characteristics of the three major muscles of cull sow and commercial pig hind legs, analyzing the relationship between the quality characteristics and muscle fiber attributes. In addition, we aimed to link pork processing suitability with the quality attributes of fresh meat from cull sows to assess its applicability.
Preparation of samples
Commercial pigs [(Landrace×Yorkshire) ♀×Duroc ♂ (LYD) or (Yorkshire×Landrace) ♀×Duroc ♂ (YLD)] and cull sows (Landrace×Yorkshire, LY, or Yorkshire×Landrace, YL) were purchased 48 hours post-mortem from the local market. The commercial pigs were generated by breeding 20 Duroc boars with 300 LY or YL sows. The pigs were kept under the same feeding conditions according to the Korean Feeding Standard for Swine (RDA, 2022). A total of 40 pigs were used in the experiment; 20 commercial pigs and 20 cull sows were selected randomly. Their hot carcasses were graded according to the standard grading procedure of the Korea Institute of Animal Products Quality Evaluation (KAPE, 2022). The carcass grade, backfat, and carcass weight are shown in Table 1. The commercial pigs had an average carcass weight of 83-92 kg, while that of cull sows was 183-195 kg. Ten hind legs each of cull sows and commercial pigs (raised on different farms) were purchased at intervals of 7 d. Three representative hind leg muscles (biceps femoris, semimembranosus, semitendinosus) were selected for comparative purposes. Immediately after the purchase of the hind legs, these muscles were isolated, refrigerated at 4℃ for 24 h, and used for experiments. For the investigation of flesh color, pH, and muscle fiber characteristics, whole muscles were used after removing all connective tissue and fat. For the analysis of proximate composition, water-holding capacity (drip loss and cooking loss), total collagen content, and protein solubility, each muscle was pulverized into 6 mm particles and mixed sufficiently to obtain a uniform sample. For muscle fiber measurements, samples were taken from the central part of the biceps femoris, semimembranosus, and semitendinosus and stored frozen in liquid nitrogen. The entire experimental process is shown in Fig. 1.
Proximate composition
Moisture, crude protein, and crude ash content were analyzed for proximate composition using the AOAC (2000) methods with pulverized cull sow and commercial pig muscle samples. Moisture content was expressed as a percentage (%) of the sample weight before drying, obtained by measuring the sample weight after placing it in a drying oven at 102℃ for 24 h. Crude protein content was determined using the Kjeldahl method. Crude ash content was measured via a classical dry-ashing technique at 550℃. Crude fat content was measured using the methodology of Folch et al. (1957). Moisture, crude protein, crude ash, and crude fat content were each measured four times per sample to obtain average values.
Color and pH measurements
The three hind leg muscles of sows and commercial pigs were exposed to air for 10 min before measuring their color using a chroma meter (CR-400, Minolta, Tokyo, Japan) calibrated with a standard white plate (Y=93.5, x=0.3132, and y=0.3198). Measurements were repeated eight times in the central part of each sample. The measuring conditions were D65 illuminant, 2° standard observer, and 65 mm aperture. According to the Commission Internationale de l'Eclairage (CIE) system, color was expressed as CIE L*, CIE a*, CIE b*, chroma, and hue angle (h°). Chroma and hue angle (h°) were calculated as chroma = (a*² + b*²)^0.5 and h° = tan⁻¹(b*/a*), respectively.
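As a quick check of the chroma and hue-angle arithmetic above, the computation can be sketched in a few lines of Python (the a*/b* values below are invented for illustration, not measurements from this study):

```python
import math

def chroma_hue(a_star: float, b_star: float):
    """CIE chroma = (a*^2 + b*^2)^0.5; hue angle h = tan^-1(b*/a*), in degrees."""
    chroma = (a_star ** 2 + b_star ** 2) ** 0.5
    hue_deg = math.degrees(math.atan2(b_star, a_star))  # atan2 keeps the correct quadrant
    return chroma, hue_deg

# A redder sample (larger a* relative to b*) yields a smaller hue angle:
print(chroma_hue(12.0, 6.0))  # ~ (13.42, 26.57)
print(chroma_hue(6.0, 6.0))   # ~ (8.49, 45.00)
```

This illustrates why the redder cull sow pork reported later shows both a higher chroma (more vivid color) and a lower hue angle (less yellow relative to red).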
After mixing 30 g of a minced muscle sample with 270 mL of distilled water, the mixture was homogenized for 30 s using a Polytron homogenizer (T25 basic, IKA Labortechnik, Selangor, Malaysia). All homogenized samples were analyzed with a pH meter (MP230, Mettler-Toledo, Greifensee, Switzerland). Four pH measurements were recorded for each sample, and the average value was used. The pH meter was calibrated at 20℃ using three standard buffers of pH 4.0, 7.0, and 9.0.
Water-holding capacity
Two methods were employed to assess the water-holding capacity of samples via weight loss: drip loss and cooking loss measurements. Since the hind legs of pigs are mainly processed into ground meat products, ground samples were shaped into patties to imitate a similar scenario. Eighty grams of a minced muscle sample was placed in a Petri dish (90×15 mm) to form a patty, placed on a soaking pad, and kept in a sealed container. Its weight was measured after storage for 48 h at 4℃. Four replicates were taken for each sample, and the average value was used. The drip loss value (%) was calculated using Eq. (1):

Drip loss (%) = [(Initial product weight − Final product weight) / Initial product weight] × 100 (1)

For the cooking loss measurement, patties prepared as described above were placed in a convection steam oven (RCO-050E, KOSTEM, Gwangju, Korea) and cooked at 80℃ for 30 min before their weights were measured. Four measurements were taken for each sample, and the average value was used. The cooking loss value (%) was calculated using Eq. (2):

Cooking loss (%) = [(Uncooked pork weight (g) − Cooked pork weight (g)) / Uncooked pork weight (g)] × 100 (2)
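Eqs. (1) and (2) share the same form, so a single helper covers both. A minimal Python sketch, using hypothetical patty weights rather than data from this study:

```python
def percent_loss(initial_g: float, final_g: float) -> float:
    """Weight loss as a percentage of initial weight (Eqs. 1 and 2)."""
    return (initial_g - final_g) / initial_g * 100

# Hypothetical 80 g patties, for illustration only:
drip_loss = percent_loss(80.0, 78.4)     # after 48 h at 4 degrees C  -> 2.0%
cooking_loss = percent_loss(80.0, 61.6)  # after 30 min at 80 degrees C -> 23.0%
print(f"drip loss {drip_loss:.1f}%, cooking loss {cooking_loss:.1f}%")
```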
Total collagen content
Total collagen content was determined by modifying the method of Kolar (1990) and expressed as mg/g. Four grams of a sample was placed in an Erlenmeyer flask, 30 mL of 7 N sulfuric acid was added, and the mixture was heated at 105℃ for 16 h on a heating plate (HP630D, Misung Scientific, Seoul, Korea). After hydrolysis, distilled water was added to a volume of 500 mL, and the mixture was filtered using Whatman No. 1 filter paper. We combined 2 mL of the extract with 1 mL of chloramine T solution (1.41 g chloramine T, 10 mL distilled water, 10 mL n-propanol, and 80 mL citrate buffer at pH 6), and the product was allowed to stand for 20 min. Absorbance was measured at 558 nm, and hydroxyproline content was determined from a standard curve. Collagen content was calculated from hydroxyproline content using the coefficient 7.25.
Three measurements were taken for each sample, and the average value was used.
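The final conversion step described above is a single multiplication. A minimal sketch, assuming a hydroxyproline value already read off the standard curve (the input number is invented for illustration):

```python
HYP_TO_COLLAGEN = 7.25  # coefficient from the text (Kolar, 1990-based method)

def collagen_mg_per_g(hydroxyproline_mg_per_g: float) -> float:
    """Total collagen (mg/g) from hydroxyproline content (mg/g)."""
    return hydroxyproline_mg_per_g * HYP_TO_COLLAGEN

print(collagen_mg_per_g(0.8))  # 0.8 mg/g hydroxyproline -> 5.8 mg/g collagen
```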
Protein solubility
Protein solubility was measured according to the procedures of Warner et al. (1997). For sarcoplasmic protein solubility, 3 g of a sample was mixed with 30 mL of 25 mM potassium phosphate buffer (pH 7.2) and homogenized for 30 s using a Polytron homogenizer. The homogenate was left at 4℃ for 20 h and centrifuged at 2,600×g for 30 min, and the centrifuged homogenate was filtered with Whatman No. 1 filter paper. We measured the protein concentration of the filtered extract using the Biuret method with a bovine serum albumin standard curve. Total protein solubility was measured by mixing 3 g of a sample with 0.55 M potassium iodide and 50 mM potassium phosphate buffer (pH 7.2), with subsequent extraction conducted in the same way as for sarcoplasmic protein solubility. The concentration of myofibrillar protein was calculated by subtracting the sarcoplasmic protein from the total protein content. Three measurements were taken for each sample, and the average value was used.
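Since myofibrillar solubility is obtained by difference, the bookkeeping reduces to one subtraction per sample. A minimal sketch with invented concentrations (mg/g), not study data:

```python
def myofibrillar(total_mg_per_g: float, sarcoplasmic_mg_per_g: float) -> float:
    """Myofibrillar protein solubility = total - sarcoplasmic, as described above."""
    return total_mg_per_g - sarcoplasmic_mg_per_g

# e.g., total 180 mg/g and sarcoplasmic 55 mg/g -> myofibrillar 125 mg/g
print(myofibrillar(180.0, 55.0))
```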
Immunohistochemistry
The staining of cross-sections of porcine skeletal muscles and the classification of muscle fiber types were conducted according to Song et al. (2020b) with some modifications. Briefly, cross-sections (10 µm thickness) were obtained from muscle cubes using a cryostat microtome (CM1520, Leica Biosystems, Wetzlar, Germany). Sections were blocked with 10% normal goat serum (Cell Signaling Technology, Danvers, MA, USA) for 1 h at room temperature. Primary antibodies (Developmental Studies Hybridoma Bank, Iowa City, IA, USA) were used to detect one or more myosin heavy chain isoforms (BA-F8, slow/I; SC-71, 2a and 2x; BF-35, all isoforms except 2x; BF-F3, 2b). For multicolor immunofluorescence, Alexa Fluor 350, 488, and 594 (Thermo Fisher Scientific, Waltham, MA, USA) were applied to each section for 1 h at room temperature.
Primary and secondary antibodies were applied to sections sequentially and in cocktails. After incubation, all sections were rinsed three times for 5 min with phosphate-buffered saline. All muscle fibers were inspected with a confocal scanning laser microscope (TCS SP8 STED, Leica Biosystems). Muscle fiber types were identified and classified into five types (I, IIA, IIX, IIXB, and IIB) according to the distribution of myosin heavy chain isoforms. Eighteen muscle fibers from each region were analyzed, and muscle fiber number composition (%), fiber area composition (%), and cross-sectional area (μm²) were measured using Image-Pro Plus software (Media Cybernetics, Rockville, MD, USA). Three measurements were taken for each sample, and the average value was used.
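The three image-derived measures above — fiber number composition, fiber area composition, and cross-sectional area (CSA) — follow directly from per-type counts and summed areas. A sketch with hypothetical counts and areas (illustrative numbers, not Image-Pro Plus output from this study):

```python
# Hypothetical per-type (count, summed area in um^2) for one muscle region:
fibers = {"I": (40, 120_000), "IIA": (30, 105_000), "IIX": (20, 90_000), "IIB": (10, 60_000)}

total_n = sum(n for n, _ in fibers.values())
total_area = sum(a for _, a in fibers.values())

for ftype, (n, area) in fibers.items():
    number_pct = 100 * n / total_n      # fiber number composition (%)
    area_pct = 100 * area / total_area  # fiber area composition (%)
    mean_csa = area / n                 # mean cross-sectional area (um^2)
    print(f"{ftype}: {number_pct:.1f}% by number, {area_pct:.1f}% by area, CSA {mean_csa:.0f} um^2")
```

Note that a fiber type with large CSA contributes more to area composition than to number composition, which is why the two percentages can diverge for the same muscle.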
Statistical analysis
All experiments were performed in the same location. A total of 20 cull sows and 20 commercial pigs were analyzed in batches of 10 pigs at 7 d intervals. Twenty-five dependent variables and one independent variable were used, and a general linear model was used to test the pig breed effect. The muscle fibers of six pigs were analyzed during the same period. Each variable was measured repeatedly (three times or more) and averaged for comparative purposes. All results are expressed as the mean±SE. Data analyses were conducted with the ANOVA procedure of SAS version 9.3 (SAS Institute, Cary, NC, USA). Differences in mean quality characteristics between pig groups were assessed with Duncan's multiple range test at the 95% significance level.
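As a rough sketch of the breed-effect test described above: with one independent variable and two groups, a one-way ANOVA per trait reduces to a two-group comparison. Duncan's multiple range test has no standard implementation in common Python libraries, so the sketch below stops at the ANOVA F-test; the data are invented, and the study itself used SAS 9.3:

```python
from scipy import stats

# Hypothetical per-animal cooking-loss means (%) for one muscle:
cull = [21.5, 22.0, 20.8, 21.9, 22.3]
commercial = [24.1, 23.8, 24.6, 23.5, 24.9]

# One-way ANOVA testing the fixed breed effect on this trait.
f_stat, p_value = stats.f_oneway(cull, commercial)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant breed effect
```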
Proximate composition
Table 2 shows the moisture, crude protein, crude fat, and crude ash content of the three major hind leg muscles of cull sows and commercial pigs. There was no significant difference in these attributes between the muscle groups of cull sows and commercial pigs (p>0.05). This corresponds with the results of Song et al. (2020a). In contrast, Hoa et al. (2020) and Kim and Kim (2018) reported significant differences in crude protein and moisture content in a similar comparison. The differences were speculated to be attributable to the increased age of sows, genetic factors, or variations in feeding methods.
However, most of these studies using cull sows analyzed a variety of hind leg muscles, leading to methodological differences.
Based on the results of this study, we conclude that the different growth environments of the sows (e.g., age, genetic factors, and feeding methods) did not affect the proximate composition of the three muscles (biceps femoris, semimembranosus, semitendinosus).
Muscle color
Table 3 summarizes the colors observed for the three muscles investigated in cull sows and commercial pigs. Cull sows had lower CIE L* and hue angle in the biceps femoris, semimembranosus, and semitendinosus (p<0.05); higher CIE a* in all muscles (p<0.05); and higher chroma in the biceps femoris and semimembranosus (p<0.05). Consumers generally assess the freshness of meat based on its color when deciding on a purchase (Forbes et al., 1974). In addition, color is used as a means of predicting meat quality through the values of CIE L*, CIE a*, CIE b*, chroma, and hue angle (Hughes et al., 2014; Norman et al., 2003). The hue angle changes according to the color (from red to yellow), with a larger angle indicating less red pigmentation, while a higher chroma value is associated with a more vivid color. Our results indicate that pork from cull sows displayed decreased CIE L*, increased CIE a*, and a more vivid color. Meat color varies depending on parameters such as pig breed, age, sex, motility, and muscle group (Forrest et al., 1975); after slaughter, it is further affected by packaging conditions, the aging process, and lipid oxidation during retail display (Domínguez et al., 2019). Meat from cull sows in this study exhibited a dark and reddish color, as has been reported for pigs with an extended rearing period (Miao et al., 2009). Since these sows are bred over a long period, their meat develops a dark and reddish color due to increased age.
pH and drip loss
Table 4 shows the pH values recorded for the muscles of cull sows and commercial pigs. In cull sows, the pH was higher in all muscles: the biceps femoris, semimembranosus, and semitendinosus (p<0.05). The pH of meat helps predict quality and is closely associated with moisture content (Huff-Lonergan and Lonergan, 2005), as pH is related to the electrical attraction that retains moisture in meat. When the numbers of positively and negatively charged groups in proteins equalize, the attraction between proteins is maximized and water escapes from the cells; at the isoelectric point (around pH 5.5), the sum of the total charges becomes zero, resulting in maximum moisture loss (Huff-Lonergan and Lonergan, 2005). In live animals, acid-base homeostasis maintains a pH of about 7.0-7.2 in muscle tissue (Tarrant et al., 1972), as determined by metabolic activity within the muscle (Kylä-Puhju et al., 2004). However, muscle tissue converts into meat during rigor mortis, at which point the muscles undergo anaerobic metabolism that decreases the pH through the production of lactic acid. The degree of anaerobic metabolic activity varies with muscle fiber type and is also associated with meat color. Type IIB fibers, representative of white muscle, drive more anaerobic metabolism, producing differences in muscle pH. Cull sows and commercial pigs, whose meat differs in color, would thus be expected to differ in muscle fiber type. Therefore, the difference in meat pH between commercial pigs and cull sows is considered to be influenced by muscle fiber type.
Table 4 also records the drip loss values of the three muscle groups we investigated. Drip loss did not differ significantly among any of the muscles (p>0.05); numerically, however, all muscles of cull sows were about 1%-2% lower than those of commercial pigs. Drip loss reflects the amount of water that escapes from meat without any external physical force being applied, and it is closely related to the pH of the meat (Qiao et al., 2007). Since no external force is applied, electrical attraction among proteins, maintained via the pH, prevents the exudation of moisture (Huff-Lonergan and Lonergan, 2005). Cull sows, with their higher meat pH, were expected to show less drip loss but showed no significant difference in this study. This may be related to muscle structure. Drip loss attributable to structural factors mainly involves the development of extracellular spaces and postmortem proteolysis (Bertram et al., 2004; Offer et al., 1989). In Fig. 2, the extracellular space between muscle fibers appears wider in cull sows than in commercial pigs. This tendency reportedly causes high water loss, as the extracellular space provides a drip channel that conducts water to the surface of the meat (Offer et al., 1989). As a result, cull sows with a larger extracellular space were expected to experience more drip loss, but the electrical attraction associated with the high pH prevented moisture loss. We therefore conclude that these two conditions counteracted each other, resulting in no change in drip loss.
Cooking loss, protein solubility, and total collagen content
Table 4 shows the cooking loss values across the three muscle groups. In cull sows, cooking loss was lower in all muscles, the biceps femoris, semimembranosus, and semitendinosus (p<0.05). Cooking loss was determined by measuring the water liberated when heat is applied to the meat. Most water loss during cooking is related to the denaturation of proteins and collagen at increased temperatures, which causes a contraction of muscle fibers that allows water to escape (Lepetit et al., 2000). Protein denaturation, in particular, results in shrinkage of the meat and is one of the main causes of cooking loss (Tornberg, 2005).
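Both drip loss and cooking loss reduce to the same weight-based percentage; only the treatment between the two weighings differs (refrigerated suspension versus heating to core temperature). A minimal sketch, with hypothetical sample weights:

```python
def percent_loss(weight_before_g: float, weight_after_g: float) -> float:
    """Weight loss expressed as a percentage of the initial sample weight."""
    return (weight_before_g - weight_after_g) / weight_before_g * 100.0

# Hypothetical samples (not data from this study):
drip = percent_loss(50.00, 48.60)    # weighed before/after chilled storage
cook = percent_loss(100.00, 72.50)   # weighed before/after cooking
print(f"drip loss {drip:.1f}%, cooking loss {cook:.1f}%")
```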
However, an investigation of protein and collagen content in this study found that their effect on cooking yield was insignificant. Beyond the effects of protein and collagen, current research on cooking loss in meat remains inconclusive. If other conditions (e.g., muscle fiber characteristics) are responsible for increases in cooking yield, cull sows can provide important information on pork quality. The reduced cooking loss observed in this study is therefore considered to offer a new perspective beyond the well-known influence of collagen and protein, which are generally regarded as having the strongest correlation with cooking loss.
Table 4 lists the protein solubility of cull sow and commercial pig meat across the three muscle groups. There was no significant difference between cull sows and commercial pigs in the protein solubility recorded in any muscle (p>0.05).
Despite this lack of statistical significance, protein solubility tended to be numerically higher in the semimembranosus and semitendinosus muscles of cull sows, indicating a degree of protein denaturation. Water content in meat is closely related to protein denaturation resulting from cooking and rigor mortis (Lopez-Bote et al., 1989). Most water is retained in the thin (actin) and thick (myosin) filaments of the myofibrillar proteins. Actin and myosin, responsible for muscle relaxation and contraction, contract in response to heat (Tornberg, 2005). The resulting contraction of the space holding water causes moisture loss.
Any degradation of myofibrillar and cytoskeletal proteins is also associated with water loss. In addition, salt-soluble myofibrillar proteins play an important role in determining the quality characteristics of meat products (Santhi et al., 2017).
Sarcoplasmic proteins affect attributes such as color and water-holding capacity (Sayd et al., 2006). Many studies have reported that sarcoplasmic proteins aggregate at 40℃-60℃ and play a key role in the quality of processed meat (Farouk et al., 2002; Hamm, 1977). Protein thus determines quality in both fresh and processed meat products. The close relationship between water loss and proteins may have influenced the water-holding capacity results of this study. Our drip loss and cooking loss findings suggest that pork from cull sows possesses a higher water-holding capacity than pork from commercial pigs, which could be partly explained by pH and protein solubility.

Table 4 displays the comparative total collagen content of the three muscles from cull sows and commercial pigs, with no significant differences recorded (p>0.05). Collagen, a stromal protein, forms the basic structure of connective tissue and plays a protective role in cells, muscles, tissues, and organs by connecting or covering them. In cooked meat, collagen produces a tough texture due to denaturation (Lewis and Purslow, 1989); water is exuded due to the contraction of the endomysial collagen fibers surrounding the water-bearing muscle fibers at a cooking temperature of 60℃-70℃ (Lepetit et al., 2000). The total amount of collagen is proportional to the amount of muscle activity and varies for each muscle part (Hill, 1966). Although collagen content was expected to be higher in the hind legs of cull sows (because of their longer period of activity compared with commercial pigs), no such difference was observed. This study also showed the opposite of the general trend whereby meat from older animals loses more water during cooking (Shimokomaki et al., 1972). In this study, the effect of collagen on cooking loss did not appear as pertinent as that of pH, protein solubility, and muscle fiber characteristics.
Muscle fiber characteristics
Fig. 3 shows the muscle fiber number composition, area composition, and cross-sectional area in cull sows and commercial pigs. The number ratio of type I, type IIA, and type IIX fibers in the biceps femoris was higher in cull sows, while the ratio of type IIB fibers was higher in commercial pigs (p<0.05). In the semimembranosus muscle, the ratio of type IIB fibers was higher in commercial pigs (p<0.05). In the semitendinosus muscle, the ratio of type I and type IIX fibers was higher in cull sows, and the ratio of type IIA and type IIB fibers was higher in commercial pigs (p<0.05). In terms of fiber area composition, the area ratio of type I, type IIA, and type IIX fibers in the biceps femoris was higher in cull sows, while the area ratio of type IIB fibers was higher in commercial pigs (p<0.05). In the semimembranosus, the area ratio of type I fibers was higher in cull sows, while the area ratio of type IIB fibers was higher in commercial pigs (p<0.05). In the semitendinosus muscle, the area ratio of type I and type IIX fibers was higher in cull sows, and the area ratio of type IIA and type IIB fibers was higher in commercial pigs (p<0.05).
The cross-sectional area of type I, type IIA, type IIX, and type IIXB fibers in the biceps femoris was wider in cull sows than in commercial pigs (p<0.05). In the semimembranosus, the cross-sectional area of type I and type IIA fibers was also wider in cull sows (p<0.05). In the semitendinosus, the cross-sectional area of type I fibers was wider in cull sows, while the cross-sectional area of type IIA fibers was wider in commercial pigs (p<0.05).
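The three measures reported in Fig. 3 come from the same per-fiber data: number composition weights every fiber equally, area composition weights each fiber by its cross-sectional area, and mean cross-sectional area is the type-wise area divided by the count. The sketch below makes the distinction explicit; the fiber list is hypothetical, not data from this study.

```python
from collections import defaultdict

# Hypothetical per-fiber measurements from one stained cross-section:
# (fiber type, cross-sectional area in µm²).
fibers = [("I", 4200), ("I", 4500), ("IIA", 3100),
          ("IIX", 3800), ("IIX", 3600), ("IIB", 2900)]

count, area = defaultdict(int), defaultdict(float)
for ftype, csa in fibers:
    count[ftype] += 1
    area[ftype] += csa

n_total, area_total = len(fibers), sum(area.values())
for ftype in sorted(count):
    num_pct = 100.0 * count[ftype] / n_total     # fiber number composition
    area_pct = 100.0 * area[ftype] / area_total  # fiber area composition
    mean_csa = area[ftype] / count[ftype]        # mean cross-sectional area
    print(f"type {ftype}: number {num_pct:.1f}%, area {area_pct:.1f}%, "
          f"mean CSA {mean_csa:.0f} µm²")
```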
The types of muscle fibers include type IIB, type IIXB, type IIX, type IIA, and type I, each with different characteristics.
Muscle fibers with different characteristics are broadly categorized by anaerobic versus aerobic metabolism. Type IIX, type IIXB, and type IIB (fast-twitch glycolytic fibers involved in anaerobic metabolism) perform glycolysis, and their metabolic activity reduces pH through lactic acid accumulation. Type I and type IIA fibers, involved in aerobic metabolism, participate in oxygen storage and transport and have a higher pH than the fibers involved in anaerobic metabolism. Owing to these metabolic effects, the presence of type IIB fibers is positively correlated with drip loss (Huff-Lonergan and Lonergan, 2005), and an increase in the ratio of type I fibers makes the color of the meat more red (Kim et al., 2010). Our findings suggest that an altered muscle fiber composition affected the metabolic activity and meat color in cull sows, causing an increase in pH and a red meat color. Furthermore, the change in muscle fiber composition, especially the increase in type I fibers, led to an increase in pH, which may have contributed to our water-holding capacity results. It is necessary to consider the role of type IIX fibers, which account for 30%-50% of fiber number and area composition in cull sows. In general, type IIX fibers mediate the fiber type transition that occurs during growth and aging. The characteristics of type I and type IIB fibers are well described, and type IIX is thought to play a role in quality similar to that of type IIB (Klont et al., 1998). Although type IIX, which accounts for a large proportion of fibers in sows, is classified as a glycolytic fiber, it did not appear to affect pH or water-holding capacity here. Therefore, the effect of type IIX on quality is not regarded as significant.
Muscle fiber type composition may change over the rearing period. In muscles involved in long-term endurance activities, fibers may change from type IIB → type IIX(D) → type IIA → type I; conversely, in muscles requiring instantaneous force, fibers may change from type I → type IIA → type IIX(D) → type IIB (Caiozzo et al., 1992; Schiaffino and Reggiani, 1996). With increasing age, muscles shift toward endurance use, causing an increase in type I fibers. Therefore, the observed increase in type I and decrease in type IIB fibers in cull sows may have occurred with increasing age. The cross-sectional area of muscle fibers is also related to advancing age; the total number of muscle fibers is genetically fixed, and only their lengths and cross-sectional areas increase over time (Stickland et al., 1975). We noted that the cross-sections of type I and IIA fibers were similarly enlarged in the older cull sows. However, the increase in cross-sectional area was not uniform across all fibers: it occurred in the fiber types that also increased in number and area composition. In other words, the cross-sectional area did not increase in undeveloped muscle fibers.
Until now, studies on quality characteristics have been conducted according to muscle fiber type, while most studies of muscle fiber size have concerned texture. To our knowledge, no studies have investigated the relationship between muscle fiber cross-sectional area and cooking characteristics. However, since meat contracts while cooking, this process is closely related to the cross-sectional area of the muscle fibers. Assuming that muscle fibers have the same density, an increase in fiber size means that less physical deformation occurs during contraction, thereby retaining more water within the fibers. We conclude that the small cooking loss observed in cull sow meat in this study may be highly related to the size of the muscle fibers.
Conclusion
Cull sows and commercial pigs show notable distinctions in muscle fiber type, size, and cross-sectional area due to their different growth environments. While these differences in muscle fibers do not affect the chemical composition of cull sow pork, they do affect meat pH and color. The glycolytic muscle fiber type (type IIX) appeared to have little effect on meat quality. In addition, a small cooking loss was observed, which may result from an increase in the cross-sectional area of the muscle fibers. Overall, a difference in meat quality between cull sows and commercial pigs was observed as a consequence of their different growth environments and the resultant changes in muscle fiber characteristics, and cull sows were confirmed to provide meat suitable for cooking. More importantly, as relatively low water loss was observed when cooking pork obtained from cull sows, this meat source can offer nutritional and economic advantages.
Fig. 3 .
Fig. 3. Comparison of muscle fiber characteristics in three major muscles between commercial and cull sow pork. x,y Different letters on the bars indicate significant differences between C and S within the same muscle fiber type. a-c Different letters on the bars indicate differences among muscle fiber types within the same group at p<0.05. C, commercial pig; S, cull sow.
Table 3 . Comparison of instrumental color measurement between commercial pig and cull sow in three major muscles
a,b Means with different superscripts are significantly different within the same muscle (p<0.05).
Table 4 . Comparison of technological quality traits, collagen and protein solubility between commercial pig and cull sow in three major muscles
a,b Means with different superscripts are significantly different within the same muscle (p<0.05). | 2023-09-23T15:03:29.290Z | 2023-09-21T00:00:00.000 | {
"year": 2024,
"sha1": "4f02161171cefebb7469adc2c53500ed45f24d6c",
"oa_license": "CCBYNC",
"oa_url": "https://www.kosfaj.org/download/download_pdf?pid=kosfa-2023-e58",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28dac94f058829d7ad2bac37542839367b5adad4",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
52102015 | pes2o/s2orc | v3-fos-license | Detection of Regional Wall Motion Abnormalities in Compressed Sensing Cardiac Cine Imaging
Background: Recently, faster cardiac magnetic resonance (CMR) cine sequences based on k-t compressed sensing have been developed. Purpose: To compare two compressed sensing CMR sequences (one in breath-hold technique and one during free breathing) with the standard SSFP sequence with respect to regional left ventricular function assessment. Material and Methods: Left ventricular short-axis stacks of two compressed sensing sequences in breath-hold technique (sparse_HB) and during free breathing (sparse_FB; both spatial resolution, 1.8 × 1.8 × 8 mm) and a standard SSFP cine sequence (spatial resolution, 1.9 × 1.9 × 8 mm) were acquired in 50 patients on a 1.5 T MR system. Regional wall motion abnormalities (RWMA) were rated qualitatively (normal/hypo-/a-/dyskinesia) by two experienced readers in consensus for all cardiac segments (American Heart Association's segment model) and sequences. RWMA detection rates were compared between sequences by kappa statistic. Results: In 13 patients, RWMA were detected in at least one cardiac segment. The RWMA detection rates were similar between CMR sequences (hypokinesia, 7.2% to 7.9%; akinesia, 0.8% to 1.3%; dyskinesia, 0.3% to 0.4%), and kappa statistics revealed an almost perfect agreement in RWMA detection between both sparse sequences and the standard SSFP sequence (standard versus sparse_HB: kappa, 0.918; p value, <0.001; standard versus sparse_FB: kappa, 0.868; p value, <0.001). Conclusion: Compressed sensing cine CMR acquired during breath-hold or free breathing allows reliable RWMA detection and thus might alternatively be used in cine CMR for regional left ventricular function assessment.
Introduction
Cardiac magnetic resonance imaging (CMR) is an established imaging tool in the diagnostic workup of patients with suspected heart disease and plays an important role in risk stratification (e.g., in coronary artery disease or myocarditis) and non-invasive therapy monitoring [1] [2] [3]. With continuous technical progress in CMR sequence design, the time-consuming acquisition of standard breath-hold steady-state free precession (SSFP) cine sequences might be replaced by faster alternatives that allow either a significant reduction in breath-hold times or even image acquisition during free breathing, which would be extremely helpful in patients with limited breath-hold capability. In this context, the recently developed k-t compressed sensing sequence technique with parallel imaging and iterative reconstruction seems very promising [4] [5] [6] [7]. Provided that special conditions regarding image properties, aliasing artifacts, and image reconstruction are fulfilled, this technique allows considerable acceleration of CMR data acquisition through substantial k-space undersampling [4] [5]. Until now, several slightly different SSFP sequences based on this new technical approach have been evaluated by different groups, with the main focus on global left ventricular (LV) function [8]-[13]. Regarding global LV function, the overall consensus of these studies was that compressed sensing cine SSFP sequences are as reliable as conventional cine SSFP imaging. Despite the fact that regional wall motion abnormality (RWMA) detection is also of high clinical relevance in many settings (e.g., assessment of ischemic heart disease and high-dose dobutamine stress CMR, where ischemia is defined as a stress-induced new or aggravated RWMA [14] [15] [16] [17] [18]), only limited data exist concerning regional wall motion assessment by compressed sensing cine imaging.
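The principle that makes such acceleration possible, recovering a sparse signal from far fewer measurements than conventional sampling requires via iterative reconstruction, can be illustrated on a toy one-dimensional problem. The Python sketch below applies iterative soft-thresholding to randomly undersampled measurements; it is only a schematic analogue under simplified assumptions, not the vendor's k-t sequence or reconstruction pipeline, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 60                        # signal length vs. measurements (~3x undersampled)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(0.0, 1.0, 8)  # sparse "image"
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                                   # undersampled measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2           # step size from the spectral norm
x = np.zeros(n)
for _ in range(500):                             # iterative soft-thresholding (ISTA)
    g = x + step * A.T @ (y - A @ x)             # gradient step on the data-fidelity term
    x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # sparsity-promoting prox

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```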
Thus, the aim of the present study was to compare two different compressed sensing CMR sequences, one acquired in breath-hold technique with reduced breath-hold times and one during free breathing, with the current standard SSFP sequence, with a focus on regional left ventricular function assessment.
Material and Methods
Prospective analysis and use of data were approved by the local ethics committee.
All included patients gave written informed consent for CMR examination and study participation.
Assessment of Regional Wall Motion Abnormalities and Left Ventricular Volumetry
Analysis for RWMA was performed in consensus by two experienced readers (CMR experience > 12 years and > 6 years). Presence and severity of RWMA were evaluated visually and graded as "normal", "hypokinesia", "akinesia", or "dyskinesia" in all CMR sequences and in all cardiac segments, except for segment 17 (in accordance with the American Heart Association's segmental model) [14] [19]. Thus, a total of 800 cardiac segments were analyzed in each CMR sequence (50 patients with 16 analyzed cardiac segments each).
Left ventricular volumetry was performed in all patients and CMR sequences by one reader (CMR experience > 6 years) using the Argus software (Siemens Healthcare, Erlangen, Germany; employed standard values based upon [20]).
Because excellent inter-rater agreement for left ventricular volumes has been repeatedly reported for segmented compressed sensing cine SSFP sequences in breath-hold technique and during free breathing, we waived this subanalysis [10] [11] [13]. The papillary muscles and trabeculae were attributed to the ventricular cavity, and the most basal short-axis slice to be included into volumetry was defined by having at least 270˚ of the chamber circumference surrounded by visible myocardium [21] [22].
Statistical Analysis
To determine differences in RWMA detection between the three employed CMR sequences, kappa statistics were performed and interpreted as proposed by Landis and Koch [23]. Comparison of volumetric data between the CMR sequences was done by Wilcoxon test, and inter-rater agreement was assessed by Bland-Altman analysis. A p value less than 0.05 was considered statistically significant.
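As a sketch of the two agreement measures used in this study, the functions below compute an unweighted Cohen's kappa (interpretable on the Landis-Koch scale, where values above 0.80 indicate almost perfect agreement) and a Bland-Altman bias with 95% limits of agreement. The ratings and volumes in the usage lines are hypothetical, not study data.

```python
import numpy as np

def cohens_kappa(r1, r2, labels):
    """Unweighted Cohen's kappa for two paired categorical ratings."""
    idx = {lab: i for i, lab in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)))
    for a, b in zip(r1, r2):
        cm[idx[a], idx[b]] += 1          # confusion matrix of paired ratings
    n = cm.sum()
    p_o = np.trace(cm) / n               # observed agreement
    p_e = (cm.sum(axis=1) / n) @ (cm.sum(axis=0) / n)  # agreement expected by chance
    return (p_o - p_e) / (1.0 - p_e)

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical per-segment ratings and per-patient EDV values (ml):
grades = ["normal", "hypokinesia", "akinesia", "dyskinesia"]
print(cohens_kappa(["normal"] * 7 + ["akinesia"],
                   ["normal"] * 6 + ["hypokinesia", "akinesia"], grades))
print(bland_altman([150, 142, 168], [146, 140, 160]))
```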
Left Ventricular Volumetry
Left ventricular volumetric values of all three CMR sequences are presented in Table 2. Comparing standard SSFP and sparse_HB, small but significant median differences were found for EDV (difference of medians, 8 ml; p value < 0.001), SV (difference of medians, 8 ml; p value < 0.001), and EF (difference of medians, 1%; p value, 0.016), but not for ESV (p value, 0.198). Comparing standard SSFP and sparse_FB, small but significant median differences were found for ESV (difference of medians, 4 ml; p value < 0.001), SV (difference of medians, 4 ml; p value, 0.004), and EF (difference of medians, 2%; p value < 0.001), but not for EDV (p value, 0.817). These findings were confirmed by the Bland-Altman analysis (Table 3, Figure 3), where an overall good agreement in volumetric values between the standard SSFP sequence and both sparse sequences was found.

Table 1. Detection rates of regional wall motion abnormalities given in absolute numbers and percent values (in brackets) for the three CMR sequences.

Figure 1. Midventricular short-axis slices in end-diastole and end-systole in a 14-year-old male with a history of anthracycline-based chemotherapy for acute lymphatic leukemia reveal unimpaired global and regional left ventricular function: all three employed CMR sequences show excellent imaging detail and blood-myocardium contrast (standard SSFP sequence; compressed sensing sequence in breath-hold technique (sparse_HB); compressed sensing sequence during free breathing (sparse_FB)).

Figure 2. Short-axis slices of a 65-year-old woman with a history of transmural myocardial infarction: in all three CMR sequences (standard SSFP; sparse in breath-hold (sparse_HB); sparse during free breathing (sparse_FB)) an anteroseptal and anterior akinesia is clearly detectable.
Discussion
In this study, we demonstrated for the first time that not only our compressed sensing CMR sequence acquired in breath-hold technique but also our compressed sensing CMR sequence acquired during free breathing allowed reliable regional left ventricular function assessment. This result is in line with the findings of Allen et al. [7]. Regarding global left ventricular function, only small, clinically irrelevant differences in left ventricular values were found between the standard SSFP sequence and both compressed sensing CMR sequences. For the sparse_HB sequence, slightly lower EDV, SV, and EF values were found, which might be caused by insufficient capture of end-diastole [10]. For the sparse_FB sequence, slightly higher ESV and consecutively lower EF values were found. Overall, however, a sufficient agreement between the standard SSFP sequence and both sparse sequences was found, which is in accordance with recently published studies in which a sufficient to excellent agreement between the investigated sparse and reference sequences was reported [10] [12] [13].
Our study is not without limitations. First, we analyzed the RWMA exclusively visually. Although this is common practice, accuracy might benefit from a quantitative RWMA analysis. Second, only a limited number of patients/cardiac segments with RWMA were included, which was due to our unselected patient cohort.
Conclusion
In conclusion, compressed sensing cine imaging of the left ventricle acquired either during breath-hold or during free breathing allows the reliable detection of regional wall motion abnormalities.Thus, these fast cine sequences can alternatively be used for the assessment of LV function.
Figure 3 .
Figure 3. Bland-Altman plots demonstrate a good agreement between the volumetric values derived from the standard SSFP sequence and both sparse sequences.
Table 2 .
Volumetric values (presented as median and interquartile range (in brackets)) of all 50 investigated patients comparing the three CMR sequences.
Table 3 .
Results of the Bland-Altman analysis comparing the volumetric values of the standard SSFP sequence with both sparse sequences.
Allen et al. [7] compared an iteratively reconstructed k-t undersampled breath-hold SENSE cine sequence with a conventional breath-hold SSFP cine sequence based on GRAPPA (acceleration factor, 2) with respect to RWMA detection in 20 patients and in 9 healthy volunteers [7]; they rated the RWMA qualitatively. Based on our results, we fully agree with Lin et al. and believe that the faster compressed sensing CMR sequences, which were proven to reliably detect RWMA, have the potential to replace standard cine SSFP imaging in clinical routine for the assessment of regional and global LV function. | 2018-08-27T16:08:36.657Z | 2018-06-12T00:00:00.000 | {
"year": 2018,
"sha1": "f12984b1221d84b3cf6878a894f112cdbcb43c59",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=85219",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "6564273c685a947ae96c30b6dd5fc4b486e5315f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26058981 | pes2o/s2orc | v3-fos-license | PLS3 sequencing in childhood-onset primary osteoporosis identifies two novel disease-causing variants
Summary Altogether 95 children with primary bone fragility were screened for variants in PLS3, the gene underlying X-linked osteoporosis. Two children with multiple peripheral and spinal fractures and low BMD had novel disease-causing PLS3 variants. Children with milder phenotypes had no pathogenic variants. PLS3 screening is indicated in childhood-onset primary osteoporosis. Introduction The study aimed to determine the role of pathogenic PLS3 variants in children’s bone fragility and to elucidate the associated phenotypic features. Methods Two cohorts of children with bone fragility were screened for variants in PLS3, the gene underlying X-linked osteoporosis. Cohort I comprised 31 patients with childhood-onset primary osteoporosis of unknown etiology. Cohort II comprised 64 children who had sustained multiple fractures but were otherwise healthy. Clinical and radiological data were reviewed. Peripheral blood DNA was Sanger sequenced for coding exons and flanking intronic regions of PLS3. Results In two patients of cohort I, where other common genetic causes had been excluded, we identified two novel disease-causing PLS3 variants. Patient 1 was a male with bilateral femoral fractures at 10 years, low BMD (Z-score −4.1; 18 years), and multiple vertebral compression fractures. He had a novel nonsense variant in PLS3. Patient 2 was a girl with multiple long bone and vertebral fractures and low BMD (Z-score −6.6 at 6 years). She had a de novo missense variant in PLS3; whole exome sequencing and array-CGH identified no other genetic causes. Iliac crest bone biopsies confirmed low-turnover osteoporosis in both patients. In cohort II, no pathogenic PLS3 variants were identified in any of the subjects. Conclusions Two novel disease-causing variants in PLS3 were identified in a boy and a girl with multiple peripheral and spinal fractures and very low BMD while no pathogenic variants were identified in children with less severe skeletal fragility. PLS3 screening is warranted in male and female patients with childhood-onset primary osteoporosis. Electronic supplementary material The online version of this article (doi:10.1007/s00198-017-4150-9) contains supplementary material, which is available to authorized users.
Introduction
Childhood-onset primary osteoporosis is a rare but clinically important condition characterized by reduced bone strength and an elevated risk of fractures [1]. The current definition requires a clinically significant fracture history and a BMD Z-score at or below −2.0. A clinically significant fracture history is defined as (1) two or more long bone fractures at ≤10 years and (2) three or more long bone fractures at ≤19 years [2]. When vertebral compression fractures are present, they alone can be grounds for the diagnosis even when BMD is normal [2]. The diagnosis further requires that other nutritional or medical causes for the child's bone fragility have been excluded. With such a broad definition, it is easy to understand that childhood-onset primary osteoporosis includes a spectrum of skeletal diseases with different molecular causes. Most affected children are classified as having osteogenesis imperfecta (OI), in which the great majority of cases are due to pathogenic variants in either COL1A1 or COL1A2, the two genes encoding type I collagen [3,4]. However, during the last 10 years, studies have shown that pathogenic variants in 20 different genes can cause childhood-onset primary osteoporosis [5][6][7][8][9][10][11].
PLS3, located on the X chromosome, is one of the most recently described genes underlying childhood-onset primary osteoporosis. PLS3 osteoporosis was first described in five families with apparent X-linked osteoporosis in 2013, and the gene was subsequently shown to be of importance in bone metabolism [12]. PLS3 codes for the protein Plastin3, which is widely expressed in solid tissues and thought to be involved in cytoskeleton remodeling. Plastin3 contains two actin-binding sites and two calcium-binding sites, and in in vitro experiments, it can form bundles of actin by crosslinking single filaments to each other [13,14]. In bone, Plastin3 has been suggested to either be part of the osteocytes' mechanosensing apparatus or play a role in the mineralization process [12,15]. Plastin3 has also been reported as a protective modifier in spinal muscular atrophy (SMA) and to be important in axonogenesis [16,17]. A PLS3 knockout mouse model shows decreased BMD and knockdown of PLS3 in Zebrafish results in craniofacial dysplasia in the developing Zebrafish larvae [12,18]. However, as of yet, the direct function of Plastin3, the mechanism of its action, and pathogenesis of PLS3 osteoporosis are still undetermined.
Because of PLS3's location on the X chromosome, males are in general more severely affected by loss of function variants than females. PLS3 osteoporosis is foremost characterized by vertebral compression fractures, frequent peripheral fractures, and a low BMD [12,19]. Other traits common in classical OI, such as blue sclerae, joint hyperlaxity, and short stature, are usually not present. However, only eight families with childhood-onset primary osteoporosis due to pathogenic variants in PLS3 have been described in the literature, and thus the features of PLS3 osteoporosis have not been fully characterized [12,15,19].
In this study, we have attempted (1) to determine the overall role of pathogenic variants in PLS3 in children with bone fragility and (2) to increase the understanding of the clinical features of PLS3 osteoporosis.
Materials and methods
This study involves two patient cohorts assessed for primary skeletal fragility at the Children's Hospital, Helsinki University Hospital. The institutional Ethics Review Board at Helsinki University Hospital approved the study protocol, and all patients and/or their guardians gave a written informed consent before participation.
Cohort characteristics
Cohort I ("primary osteoporosis") consisted of 31 patients (17 boys and 14 girls) who had been referred to the Metabolic Bone Clinic, Children's Hospital, Helsinki during the years 2003-2013 for investigation of childhood-onset primary osteoporosis but in whom the investigation did not reveal a molecular cause of disease. The inclusion criteria were (1) exclusion of type I collagen-related OI, either clinically or by genetic testing, and (2) exclusion of secondary osteoporosis with biochemical and individually determined clinical evaluations. Further, all had been screened and found negative for pathogenic variants in WNT1 and LRP5, and several had undergone more extensive genetic testing before inclusion. At the time of referral, all the patients were between the ages of 4 and 17 years, displayed clinically an osteoporotic phenotype, and several of them had similarly affected family members. Most of the patients (25/31) fulfilled the ISCD criteria [2] for pediatric osteoporosis. Six patients only had a history of increased non-spinal fractures and a BMD Z-score >−2.0 but had other additional features (e.g., osteopenic appearance on skeletal radiographs, fragile bone on orthopedic surgery) suggesting osteoporosis.
Cohort II ("fracture prone") consisted of 64 otherwise healthy children who had sustained multiple fractures (43 boys and 21 girls), all recruited at the Children's Hospital, Helsinki, Finland during a prospective epidemiological study [20]. Over a 12-month period, all children (n = 1412) aged 4-15 years who had been treated for an acute and radiographically confirmed fracture were assessed for fracture history. The trauma mechanisms and previous medical histories were available for 1361 (96%) of the children. The inclusion criteria for cohort II were (1) age 4-15 years, (2) ≥2 low-energy long bone fractures before age 10 years, (3) ≥3 low-energy long bone fractures before age 16 years, or (4) ≥1 low-energy vertebral fracture (loss of ≥20% vertebral height). Children with a diagnosis or suspicion of OI were excluded, as well as children with an underlying disease that could explain their bone fragility. Altogether 71 patients of the >1400 children fulfilled these criteria; DNA from peripheral blood was available for 64 of them, and they comprised cohort II. Data on blood biochemistry, spinal radiographs, and DXA measurements were collected for all participants.
As expected, children in cohort I displayed in general lower BMD Z-scores at the lumbar spine and a higher prevalence of vertebral compression fractures (p < 0.001 and p < 0.004, respectively) than children in cohort II (Fig. 1). Data for BMD at the lumbar spine and femoral neck and information about vertebral compression fractures were not available for four, six, and three children, respectively, in cohort I. In cohort II, 61% (n = 39) of the children had sustained at least three significant fractures, and of them 54% (n = 21) had sustained at least four significant fractures. Twenty percent (n = 13) had sustained at least three significant fractures before the age of 10 years, and of them 38% (n = 5) had sustained at least four significant fractures before the age of 10 years.
Genotyping of PLS3
Genomic DNA was extracted with the Puragene DNA Purification kit (Gentra), Archive Pure DNA Blood Kit (5Prime), or QIAamp DNA Blood Maxi Kit (Qiagen). Primers for PCR amplification were constructed using Primer3Plus and were derived from the canonical Ensembl transcript (ENST00000355899). Sanger sequencing was performed using BigDye® technology on a 3730 ABI sequencer, and electropherograms were later interpreted using the Staden package (version 2.0.0b10). Primers are available upon request.
Whole exome sequencing
Patient 2 in cohort I, together with her nuclear family, was subjected to whole exome sequencing. Sequencing was performed using an Illumina HiSeq 2500 at Science for Life Laboratory, Stockholm, Sweden. The Agilent SureSelect XT All Exon V4 target enrichment kit was used for whole exome capture. All analysis computations were performed on resources provided by SNIC through Uppsala Multidisciplinary Center for Advanced Computational Science (UPPMAX) [21]. For a full description of the data processing and analysis, see Supplemental appendix and Supplemental Fig. 1 (Online Resource).
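While the actual pipeline is detailed in the supplement, the essence of a trio-based de novo analysis can be sketched as a filter over annotated variant calls: keep variants carried by the proband, absent in both healthy parents, rare in population databases, and predicted deleterious. All field names and thresholds below are illustrative assumptions, not the study's pipeline.

```python
# Minimal sketch of trio-based de novo filtering over annotated variant calls.
# Field names and thresholds are illustrative, not those of the actual pipeline.
variants = [
    {"gene": "PLS3", "hgvs": "c.1424A>G", "gt_proband": "0/1",
     "gt_mother": "0/0", "gt_father": "0/0", "gnomad_af": 0.0, "cadd": 21.5},
    # ... remaining exome-wide calls ...
]

def is_candidate_de_novo(v, max_af=1e-4, min_cadd=20.0):
    return (v["gt_proband"] != "0/0"        # carried by the proband (het or hom)
            and v["gt_mother"] == "0/0"     # absent in both healthy parents
            and v["gt_father"] == "0/0"
            and v["gnomad_af"] <= max_af    # absent/very rare in population databases
            and v["cadd"] >= min_cadd)      # predicted deleterious

print([v["gene"] for v in variants if is_candidate_de_novo(v)])
```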
Array comparative genomic hybridization
For patient 2, a customized 2×400K array (Agilent Technologies) was used, with enriched probes in 2269 genes, including over 300 genes known to underlie skeletal diseases. In the specifically targeted genes, this high-resolution array comparative genomic hybridization (array-CGH) has an average coverage of one probe per 100 base pairs in coding regions and one probe per 500 base pairs in introns and UTRs. The experiments were performed using standard procedures, and results were analyzed using Agilent Genomic Workbench 7.0.
Bone histomorphometry
Iliac crest bone biopsies were taken as part of the normal diagnostic evaluation for both patients in cohort I who were deemed to have disease-causing variants in PLS3. For bone turnover assessments, the patients were pre-treated with oral tetracycline for 2 × 2 days with a 10-day treatment-free interval. The bone biopsies were analyzed by an experienced histomorphometrist using a semiautomatic image analyzer (Bioquant Osteo; Bioquant Image Analysis Corp., Nashville, TN, USA). The recommendations of the American Society for Bone and Mineral Research regarding abbreviations and nomenclature were followed [22]. Age-specific reference values and Z-scores were calculated for all parameters [23,24].
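Standardizing a histomorphometric parameter against age-specific reference data is an ordinary Z-score conversion; a minimal sketch follows, with hypothetical reference values (the study's actual reference data come from [23,24]).

```python
def z_score(value: float, ref_mean: float, ref_sd: float) -> float:
    """Z-score of a measured parameter against age-specific reference data."""
    return (value - ref_mean) / ref_sd

# Hypothetical numbers: trabecular bone volume (BV/TV, %) below the age-matched mean.
print(z_score(12.0, ref_mean=22.0, ref_sd=4.0))   # -2.5, i.e., low trabecular bone volume
```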
Statistical analysis
The statistical analyses were performed using R version 3.3.1. Because of the distribution of the data, the Mann-Whitney U test was used to compare BMD between the groups. The chi-square test was used to test categorical data. A p value <0.05 was considered significant.
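The analyses were run in R, but the same two tests exist in any statistics stack; for consistency with the other sketches in this document, a Python equivalent with hypothetical (not study) data:

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

# Hypothetical lumbar-spine BMD Z-scores for the two cohorts:
bmd_cohort1 = np.array([-4.1, -6.6, -2.3, -1.8, -3.0])
bmd_cohort2 = np.array([-0.4, 0.2, -1.1, 0.8, -0.6])
u_stat, p_mwu = mannwhitneyu(bmd_cohort1, bmd_cohort2, alternative="two-sided")

# Hypothetical 2x2 table: vertebral compression fractures (yes/no) by cohort:
table = np.array([[14, 14],    # cohort I
                  [3, 61]])    # cohort II
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"Mann-Whitney U p = {p_mwu:.3f}; chi-square p = {p_chi:.4f}")
```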
Genetic findings in cohort I
Sanger sequencing of PLS3 in cohort I identified two patients (6.5%) with novel variants in PLS3 that were deemed to be damaging and causative of their phenotypes (described below). In the remaining 29 patients in cohort I, a total of 7 single nucleotide variants (SNVs) and 1 small deletion were found ( Table 1). Three of the SNVs were found in coding regions, all were synonymous and with an allele frequency of at least 4% in the general Finnish population (SISu database); the other four variants were intronic, all previously reported. One of the synonymous variants, rs140121121, has previously been associated with osteoporosis [12], but this SNV was not enriched in our cohort compared with the normal Finnish population (allele frequency 0.067 vs 0.051). Outside the coding region, we found one 13 bp deletion (rs201765481) situated 190 bp upstream of exon 6 of PLS3. Its allele frequency was higher than expected in our cohort (0.043 vs 0.009 (dbSNP)), but when we sequenced this region for 96 healthy Finnish control samples, the frequency was found to be similar (0.059) as in our cohort, suggesting that the deletion is not pathogenic. The other non-coding variants were regarded as not being of significance in this context, either because of their high allele frequencies or location far from canonical splice sites.
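Allele frequencies such as the 0.059 estimated from the 96 control samples are simple allele counts, with the caveat that PLS3 is X-chromosomal: females contribute two alleles and males one. A minimal sketch with hypothetical genotype counts:

```python
def x_linked_allele_freq(alt_in_females: int, n_females: int,
                         alt_in_males: int, n_males: int) -> float:
    """Allele frequency of an X-chromosomal variant.

    Females carry two X alleles and males one, so the denominator is
    2 * n_females + n_males.
    """
    return (alt_in_females + alt_in_males) / (2 * n_females + n_males)

# Hypothetical control genotyping (counts are illustrative only):
print(round(x_linked_allele_freq(alt_in_females=7, n_females=48,
                                 alt_in_males=2, n_males=48), 3))   # 0.062
```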
Genetic and clinical findings in the two patients with disease-causing variants in PLS3
Patient 1 is a presently 30-year-old Finnish male, who was found to have a novel hemizygous nonsense variant (c.766C>T; p.Arg256*) in exon 8 of PLS3. This variant is not found in the dbSNP, SISu, ExAC, or gnomAD databases and is classified as pathogenic according to the American College of Medical Genetics and Genomics (ACMG) guidelines for interpreting sequence variants [25]. He has also been sequenced for a core panel of bone fragility genes (Blueprint Genetics, Helsinki), including COL1A1 and COL1A2, without pathogenic findings. The mother of patient 1 was confirmed to be heterozygous for the variant, while the brother, father, and three maternal relatives were negative for the variant. Patient 1 has a history of multiple fractures since early childhood. Between the ages of 9 and 10 years, he fractured both femurs in two separate low-energy traumas. At 10 years, he was diagnosed with multiple vertebral compression fractures, and at 13 years, he sustained two humeral fractures on two separate occasions. A bone biopsy at the age of 11 years confirmed the diagnosis of trabecular osteoporosis, low bone turnover, and normal mineralization (Fig. 2 and Supplemental Table 1; Online Resource). He had considerably low DXA measurements; at 18 years, the BMD Z-score for the lumbar spine was −4.1 and for the femoral neck −3.3. He then received a 1-year treatment with zoledronic acid, and by age 20, a slight improvement was seen, with BMD Z-scores of −3.8 and −2.8 for the lumbar spine and femoral neck (Fig. 3). The patient also displays some extraskeletal features such as slightly blue sclerae, slightly yellow teeth and loss of enamel, generalized joint hyperlaxity, soft skin, minor aortic valve regurgitation, and asthma. In other respects, his pubertal development and adult height (175 cm) were normal. Measurements of calcium, phosphate, and alkaline phosphatase were normal and there was no hypercalciuria, but urinary NTX was low (normal creatinine). Taken together, his biochemical profile was normal except for a mild vitamin D deficiency (serum 25-OH-vitamin D 35 nmol/L). The mother, heterozygous for the variant, had osteopenia on DXA scan (total body Z-score −1.4 at 46 years) and had sustained one radius fracture after a fall at 35 years. The mother also had joint hyperlaxity and slightly blue sclerae but was otherwise healthy.
Patient 2, whose variant was deemed disease-causing, is a 10-year-old Finnish girl with no family history of osteoporosis. She proved to be heterozygous for a novel de novo missense variant in exon 12 (c.1424A>G; p.N446S) that was absent in her parents and healthy sister (Supplemental Fig. 2; Online Resource). The variant was not found in the dbSNP, SISu, ExAC, or gnomAD databases. The amino acid in this position is highly conserved across species, and the missense variant has a scaled CADD score of 21.5 and is predicted deleterious by both SIFT and MutationTaster. According to the ACMG guidelines [25], this variant is classified as likely pathogenic, which means that the variant is considered to be pathogenic at a level of ≥90% certainty. However, since the phenotype was significantly more severe than anticipated for a female with a heterozygous missense variant in PLS3, we investigated the possibility of other causative variants. Array-CGH detected no significant gene dosage imbalances; her other PLS3 allele was also intact. We also performed whole exome sequencing for patient 2 and her nuclear family (healthy parents and healthy sister). Since the parents were completely healthy, the analysis focused on recessively inherited and de novo variants. However, apart from the de novo variant in PLS3, no other potential disease-causing variants were found. Importantly, no other damaging variants were found in any other genes associated with OI. The genes COL1A1 and COL1A2 had also previously been Sanger sequenced without pathogenic findings. Moreover, previously reported female patients heterozygous for pathogenic variants in PLS3 show variable expressivity [12,19] (Supplemental Table 2; Online Resource). Based on these findings, the heterozygous PLS3 variant was regarded as the most likely cause for the phenotype.
Patient 2 has a history of multiple long bone and vertebral compression fractures and remarkably low BMD. By the age of 6 years, she had sustained three low-energy long bone fractures and one finger fracture and was then referred to the Metabolic Bone Clinic, Children's Hospital, Helsinki for further investigations. Her BMD Z-scores for the lumbar spine, proximal femur, and total body were −6.6, −4.5, and −3.5, respectively. Spinal radiographs showed three asymptomatic vertebral compression fractures. Extra-skeletal manifestations were seen in the form of joint hyperlaxity with hyperextension in the elbows and knees, but she did not have blue sclerae. Secondary causes of osteoporosis were excluded. Serum calcium, phosphate, alkaline phosphatase, PTH, and vitamin D were normal. A bone biopsy confirmed the diagnosis of trabecular osteoporosis, low bone turnover, and normal mineralization (Supplemental Table 1; Online Resource). Treatment with pamidronate was started at the age of 6 years with a cumulative dose of 9 mg/kg during the first year and was continued with zoledronic acid 0.025 mg/kg every 6 months thereafter. Follow-up measurements at 17, 29, and 40 months showed a good treatment response, and she has not experienced new fractures since (Fig. 4).
Genetic findings in cohort II
In cohort II, Sanger sequencing of the PLS3 gene showed in total 8 SNVs and 1 small deletion in the 64 patients with fractures (Table 1). All variants have been previously described, and most of them corresponded to the findings in cohort I. One coding variant was unique to cohort II: a rare missense variant (rs140968059, p.I309V) found in heterozygous form in one girl. However, the substitution was predicted benign by both SIFT and MutationTaster, and both isoleucine and valine are branched-chain amino acids with very similar chemical structures, making a substitution between the two less likely to be damaging. Thus, none of the variants found in the 64 fracture-prone children were considered causative of their skeletal fragility.
Discussion
In this study, we have tried to answer two questions: (1) Are pathogenic variants in PLS3 responsible for a proportion of bone fragility in children? (2) How to better recognize patients whose bone fragility are caused by pathogenic variants in PLS3? We addressed these questions in the more severely affected children in cohort I and in the seemingly healthy but fracture-prone children in cohort II.
In cohort I, a non-negligible proportion of the patients (6.5%; 2 of 31 screened subjects) had variants in PLS3 that were deemed to be causative of their osteoporosis. In the case of patient 1, the 30-year-old Finnish male was hemizygous for a novel nonsense variant located in the mid-part of the gene. Such a variant, based on what is known, leads to a complete loss of function of the mature protein and the patient will effectively be left without any functioning copy of PLS3 [26,27]. Both the inheritance pattern and the phenotype of previously described patients with premature stop codons in PLS3 support this conclusion [12,15]. PLS3 is also recognized as a gene with extremely low tolerance to protein truncating variants (probability of loss of function intolerance (pLI) = 0.99) [28].
In patient 2, the 10-year-old Finnish female, we detected a novel de novo heterozygous missense variant. Other potential genetic causes were excluded by array-CGH and whole exome sequencing. PLS3 is also considered sensitive to missense variants (Z-score 2.51), ranking in the top 15% of genes most sensitive to missense variants [28]. Early-onset symptomatic osteoporosis has previously been described in adult females with heterozygous pathogenic variants in PLS3 [12,19] but perhaps not to the extent seen in this 10-year-old girl. Our findings therefore extend the phenotypic spectrum of PLS3 osteoporosis to include also girls with severe primary osteoporosis.

Fig. 2 The upper panels (a and b) show the biopsy from patient 1, and the lower panels (c and d) the biopsy from patient 2. Both subjects display trabecular osteoporosis, low bone turnover, and normal mineralization. Panels a and c show low trabecular bone volume and low trabecular thickness. Panels b and d show low osteoid surface and reduced numbers of osteoblasts and osteoclasts, in line with low bone turnover.
Previously reported patients with severe osteoporosis due to pathogenic variants in PLS3 have almost all been male, and in general males tend to be more severely affected. In 2015, Laine et al. reported on a large Finnish family with osteoporosis due to a pathogenic splice variant in PLS3. In this large family, hemizygous males had more severe osteoporosis, but all heterozygous females had low BMD, and one affected female had a phenotype more in resemblance with her male relatives, with recurrent peripheral fractures and multiple vertebral compression fractures [19]. Van Dijk et al. also reported a variable clinical phenotype in females with heterozygous pathogenic variants in PLS3 [12] (Supplemental Table 2; Online Resource). In our study, variable expressivity can also be seen in females with heterozygous pathogenic variants; the mother of patient 1 had a mild phenotype, while patient 2 had severe osteoporosis. The cause of this variable expressivity is not known. Some variants could perhaps have a dominant negative effect on the other allele, but since this variable expressivity can be seen also within families where all affected members share the same pathogenic variant, other factors are likely to contribute. Skewed X-inactivation could explain why female patients with identical genotypes can display different phenotypic severity, but modifying variants in regulatory elements could also exert an effect on the phenotype. Moreover, lifestyle factors may influence the phenotypic presentation even in monogenic forms of osteoporosis.
Pathogenic variants in PLS3 seem to have a substantial impact on BMD and involve compression fractures of the vertebrae. In cohort I, among the 31 included patients, the two described patients stood out with the lowest and third lowest BMD Z-scores at the lumbar spine and the lowest and second lowest BMD at the femoral neck. Both patients also had a history of multiple vertebral compression fractures as an indication of significant spinal osteoporosis, which seems to be a hallmark of PLS3 osteoporosis. They also had a history of multiple major low-energy long bone fractures at an early age. Bone histomorphometry confirmed trabecular osteoporosis with low bone turnover and normal mineralization in both patients. The low bone turnover in PLS3 osteoporosis stands in contrast to the high bone turnover seen in type I collagen-related osteogenesis imperfecta [29]. These findings are in line with previous reports on patients with osteoporosis due to pathogenic variants in PLS3 [12,15,19].

Fig. 3 Spinal radiograph (a) at the age of 12 years shows a kyphosis and significant spinal osteoporosis with compressed vertebrae. Radiograph (b) at 21 years, after a 1-year zoledronic acid treatment, shows an improvement of the kyphosis and the shape of the vertebrae, but his BMD remained very low. Long bone radiographs (c) at the age of 21 years show generalized osteopenia and very thin cortices in the lower leg. (d) Lumbar spine BMD from childhood to adulthood (shaded areas denote Z-scores ±2.0).
Treatment with bisphosphonates has been evaluated for only a handful of patients with PLS3 osteoporosis, but all these reports suggest that treatment is at least initially beneficial for increasing BMD [12,15]. In patient 1, the year-long treatment with zoledronic acid, which started as late as at 18 years, increased his BMD only slightly and 1 year after discontinuation his BMD was still very low. In patient 2, bisphosphonate treatment, which started at a much younger age, has significantly improved BMD and prevented further fractures, but long-term treatment results remain yet to be seen.
In cohort II, consisting of seemingly healthy but fractureprone children, no disease-causing PLS3 sequence variants were found. We also looked for enrichment of both rare and common SNVs that, at least in theory, could be modifiers of protein function and perhaps help to explain the wide range in the number of childhood fractures seen in the general population. However, we did not find any enriched PLS3 SNVs in cohort II and conclude that PLS3 variations do not explain increased bone fragility in this large cohort of children. Based on our findings, and the previously reported cases, it seems that clinically relevant pathogenic variants in PLS3 result not only in increased long bone fractures but are always associated with significantly reduced BMD and vertebral fractures.
We recognize some limitations in our study. Our cohorts were relatively small, which limits our ability to draw strong conclusions for the overall pediatric population and may have prevented us from finding significant associations. Furthermore, we only searched for variants that were thought to directly affect protein structure (exonic or splice variants), which means that other possibly important variants in introns or regulatory regions could not be detected. However, because of our fairly stringent inclusion criteria, we believe that our results are representative of patients assessed in pediatric bone clinics for suspected primary osteoporosis. This is also, to our knowledge, the first study to systematically screen for PLS3 variants in a single-hospital-based cohort of children with bone fragility, and the finding of two novel pathogenic or likely pathogenic variants in PLS3 supports the relevance of our research approach. We did not perform functional studies to evaluate the mechanisms through which the variants lead to clinical manifestations, and it thus remains unknown whether the missense variant leads to protein instability or otherwise interferes with PLS3 function. Such studies were beyond the scope of this study, but once more data emerge on the physiological role of PLS3 in skeletal homeostasis, functional evaluation of mutated PLS3 may provide important insights into the pathogenesis of PLS3 osteoporosis.
Conclusions
This study expands the spectrum of disease-causing PLS3 variants and the associated phenotypes; it gives further support to the importance of spinal osteoporosis as a consequence of pathogenic variants in PLS3 and indicates that females with heterozygous pathogenic variants in PLS3 can also develop childhood-onset primary osteoporosis. Based on our findings, PLS3 screening should be considered in children, both boys and girls, with multiple peripheral and spinal fractures and low BMD. Molecular diagnosis is important for appropriate patient management and genetic counseling even if specific treatment for PLS3 osteoporosis is not yet available. In contrast, children who show an increased propensity to fracture but do not fulfill the criteria of osteoporosis (i.e., have BMD within the normal range) are less likely to have disease-causing variants in PLS3, and our study does not provide support that screening for PLS3 variants in these children is meaningful. | 2017-10-03T08:43:57.206Z | 2017-07-26T00:00:00.000 | {
"year": 2017,
"sha1": "759ceefdeb4e96937bce0edfa0a4ad606762a9ab",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00198-017-4150-9.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "21806a97add3f09b0f67cc4bb43bf25681da4bf3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
155193287 | pes2o/s2orc | v3-fos-license | Strategies for treatment of childhood primary angiitis of the central nervous system
Objective Childhood primary angiitis of the CNS (cPACNS) is a devastating neurologic disease. No standardized treatment protocols exist, and evidence is limited to open-label cohort studies and case reports. The aim of this review is to summarize the literature and provide informed treatment recommendations. Methods A scoping review of cPACNS literature from January 2000 to December 2018 was conducted using Ovid, MEDLINE, PubMed, Embase, Cochrane Database of Systematic Reviews, Cochrane Central Register of Controlled Trials, ClinicalTrials.gov, Vasculitis Foundation, European Vasculitis Society, CanVasc, Google Scholar, and Web of Science. Potentially relevant articles were selected for full-text review using the STROBE checklist if they met the following inclusion criteria: (1) reported treatment, (2) addressed pediatrics, (3) focused on the disease of interest, (4) included ≥5 patients, (5) original research, and (6) full-length articles. Reviews, expert opinions, editorials, case reports with <5 patients, articles lacking treatment information, or non-English articles were excluded. A standardized assessment tool measured study quality. Treatment and outcomes were summarized. Results Of 2,597 articles screened, 7 studies were deemed high quality. No trials were available so no meta-analysis was possible. Overall, treatment strategies recommended are induction with acute antithrombotic therapy subsequently followed by high-dose oral prednisone taper over 3–12 months and long-term platelet therapy. In angiography-positive progressive–cPACNS and angiography-negative–cPACNS, we also recommend 6 months of IV cyclophosphamide therapy, with trimethoprim/sulfamethoxazole as part of induction, and maintenance therapy with mycophenolate mofetil/mycophenolic acid. Conclusion No grade-A evidence exists; however, this review provides recommendations for treatment of cPACNS.
Childhood primary angiitis of the CNS (cPACNS) is an increasingly recognized inflammatory brain disease. Previously healthy children present with severe neurologic deficits that, left untreated, could lead to devastating neurologic insult and even death. Early recognition and intervention with targeted therapy has led to better survival. The term PACNS was first coined by Calabrese for adults in 1988 1 and has been adjusted for the pediatric population: patients <18 years of age presenting with a newly acquired focal or diffuse neurologic or psychiatric deficit without an underlying systemic disorder and evidence of vasculitis on angiography and/or histopathology. 2 Classification of cPACNS is based on vessel size, with 3 subtypes recognized: angiography-positive nonprogressive (APNP) and angiography-positive progressive (APP) disease, affecting the large/medium-sized vessels, and angiography-negative (AN) disease, affecting small cerebral vessels. 2,3 Children with APNP-cPACNS typically present with a monophasic event consisting of focal neurologic deficits and evidence of ischemic stroke on MRI with corroborating angiography. 2,4,5 Patients with APP-cPACNS present with both focal and diffuse neurologic deficits 4,5 and progressive vessel narrowing on angiography beyond 3 months of disease. 2 In contrast, patients with AN-cPACNS exhibit normal angiography because of the difficulty in observing small vessels on digital subtraction angiography (DSA) or magnetic resonance angiography (MRA). 6,7 However, MRI is frequently abnormal, characterized by multifocal inflammatory lesions not specific to any vessel distribution or territory. 8,9 Thus, elective brain biopsy is mandated for diagnostic confirmation of AN-cPACNS. 4,7,10 Children typically present with both focal and diffuse neurologic and psychiatric deficits together with seizures. 3,4,7,9 The overall incidence of cPACNS remains unknown.
Similar to many other pediatric diseases, treatment is predominantly derived from adult PACNS literature. 10,11 While therapeutic strategies for cPACNS have been described, they come largely from observational and open-label cohort studies because of the absence of randomized controlled trials and are not standardized across centers. To date, no scoping review of the pediatric literature on treatment of cPACNS has been performed. Therefore, in this review, we aim to describe the evidence and efficacy of treatment regimens reported in the pediatric literature. Studies were eligible if they met the following inclusion criteria: (1) treatment reported, (2) pediatric population addressed, (3) focused on disease of interest, cPACNS, (4) included ≥5 patients, (5) original research, and (6) full-length articles. Only articles written in English were included. Reviews, expert opinions, editorials, case reports, and studies without treatment information were excluded. Studies were screened by perusing titles and abstracts for relevant content. Articles deemed relevant were selected for full-text review and individually assessed for inclusion criteria using the STROBE checklist for cohort, case-control, and cross-sectional studies by 2 of 4 reviewers: J.B., A.D., B.G., and M.T. In case of conflicting evaluations, a third reviewer, S.M.B., was asked to make the final decision. Selected articles were assessed for quality. Studies reporting data on the same population were included individually and discussed together.
Data analysis
Quality was assessed using a modified version of the Pasma et al. 12 Quality Assessment Tool. Questions addressed patient recruitment through sampling method, participation, treatment and outcome measurements, and conflict declaration. Nonrelevant questions were dropped from the tool. Three questions were deemed essential: >80% participation, reproducible treatment strategy, and reproducible outcome measure. A score of 1 was given to each question when both reviewers' assessments were satisfied, with a maximum total score of 6. A study with a total score of 4 or higher that satisfied at least 2 of the 3 essential questions was considered high quality.
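As a rough illustration, the sketch below encodes this high-quality decision rule in Python; the question identifiers are hypothetical labels introduced here for illustration, not the tool's wording.

```python
# Hypothetical sketch of the modified quality-scoring rule described above.
# Question labels are illustrative assumptions, not the published instrument.
ESSENTIAL = {"participation_gt_80", "reproducible_treatment", "reproducible_outcome"}

def is_high_quality(answers: dict) -> bool:
    """answers maps each of the 6 question IDs to True (criterion satisfied by
    both reviewers) or False. High quality requires a total score >= 4 and at
    least 2 of the 3 essential questions satisfied."""
    total = sum(1 for v in answers.values() if v)
    essential_met = sum(1 for q in ESSENTIAL if answers.get(q, False))
    return total >= 4 and essential_met >= 2

example = {"participation_gt_80": True, "reproducible_treatment": True,
           "reproducible_outcome": False, "sampling_described": True,
           "outcome_measured_validly": True, "conflicts_declared": False}
print(is_high_quality(example))  # True: score 4/6, 2 of 3 essentials met
```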
Studies were evaluated for design, location, sample size, and patient demographics. This information, together with treatment strategies and outcomes, was aggregated. Studies were compared by diagnostic subtype, treatment regimen, and outcomes. Acceptable study outcomes included mortality and neurologic outcome, preferably using the Pediatric Stroke Outcome Measure (PSOM), 13 which assesses dysfunction across 4 domains: sensorimotor, language production, language comprehension, and cognition/behavior.
Role of funding source
There was no funding source for this study. The corresponding authors had full access to all the data in the study and final responsibility for the decision to submit for publication.
Data availability
Data not provided in this article, including comprehensive treatment and outcome summaries and search strategies, are available on request.
Results
The search strategy identified a total of 2,596 articles, with 1 additional study identified by author MT (figure 1). Of 2,597 articles screened for title and abstract, 110 articles had full-text retrieval for assessment of inclusion criteria and 9 original articles were included for detailed analysis 2,4,9,14-19 (figure 1). Reasons for article exclusion included: treatment description missing (n = 24), exclusion of pediatric population (n = 8), not PACNS (n = 5), case reports <5 children (n = 20), book chapters/reviews/editorials (n = 39), and abstract only (n = 5). Seven studies were deemed high quality after quality assessment was conducted. Study characteristics are summarized in table. Of 9 studies reviewed, 4 described the same study population from Lahore, Pakistan [16][17][18][19] ; 4 studies described the same population from Toronto, Canada 2,4,9,14 ; and 1 reported a case series from Los Angeles, US. 15 The Pakistani group [16][17][18][19] identified children <16 years of age subsequently diagnosed with cPACNS at their center between January 2009 and December 2010. Diagnostic categorization was two-fold: based on stroke characteristics and cPACNS subtype. A total of 68 patients were identified: 50 presented with ischemic stroke, 10 with hemorrhagic stroke, and 8 with both ischemic and hemorrhagic lesions. Alternatively, 51 patients were classified as APNP-cPACNS and 17 as APP-cPACNS. Diagnoses were based on conventional angiography (CA) and/or MRA in all patients.
Induction therapy, consisting of 3 days of IV methylprednisolone (IVMP) and/or IV immunoglobulin for 5 days with a subsequent oral prednisone taper over 30 days, was completed by 56 patients. Patients with ischemic stroke also received IV heparin and subsequent oral anticoagulation therapy. Supplementary calcium and vitamin D were prescribed, with anticonvulsants, antipsychotics, antibiotics, antivirals, and antacids as needed. A total of 12 patients died before completing induction therapy from complications involving cerebral artery and/or parenchymal bleeding. The remaining 56 patients were allocated to 24 months of maintenance therapy consisting of either aspirin (ASA) daily for ischemic stroke or ASA together with daily azathioprine for progressive arteriopathies. Despite a clearly delineated therapeutic regimen, the Pakistani group reported conflicting data regarding the number of patients assigned to each therapy across the 4 studies. Both the initial study by Malik et al. 19 and Alhaboob et al. 16 identified 41 patients assigned to ASA alone and 15 patients to ASA together with azathioprine for 24 months. However, Malik et al. 17 indicated that 40 patients were assigned to the 24-month ASA-only group and 16 patients to ASA and azathioprine, with ASA duration increased to 60 months in this study. Malik et al. 18 did not provide details of induction therapy, only that ischemic infarcts were initially treated with IV heparin. From this group, 40 patients were discharged on ASA for 24 months and 14 received adjunctive azathioprine therapy.
The Pakistani group reported mortality and morbidity across different intervals. Remission was described as a complete absence of disease activity in clinical symptoms, examination findings, laboratory markers, and imaging for at least 3 months, and relapse was defined as an emergence of signs/symptoms of stroke confirmed by neuroimaging (CA and/or MRA) after remission. All 4 studies documented mortality at discharge in 12 patients (17·6%). Malik et al. [17][18][19] reported clinical state at time of discharge for all survivors, defined as neurologic assessment for motor, visual, and/or speech difficulties. Normal examination was reported in 11 (20%), minor disability in 14 (25%), moderate disability in 11 (20%), and severe disability in 20 (35%) patients. Malik et al. described relapse in a total of 30 patients (54%) during maintenance therapy. From the ASA-only group, 18 patients (45%) relapsed within the first 24 months, which resulted in 10 deaths; of the remaining 22 patients (55%) who completed ASA therapy, 5 relapsed and 5 died.
Overall mortality in the ASA group was 15/40 (37·5%). Of the 16 patients on maintenance with ASA and azathioprine, 2 patients (13%) relapsed within 24 months and both survived. The number of deaths reported was 5 (31%): 3 patients died from massive cerebral hemorrhage, and 2 of 3 patients died after relapse within 6 months of successfully completing therapy. Therefore, the overall mortality during maintenance reported by Malik et al. 17 at a median follow-up of 34 months was 20 (35·7%) of the initial 56 survivors, either as a direct result or as a complication of first or second relapse, with 5 deaths reportedly due to non-cPACNS causes. Malik et al. 17 also reported the outcome at last follow-up using the PSOM. 13 Of the 32 patients assessed, 8 (25%) had a normal outcome, 10 (31%) minor disabilities, 10 (31%) moderate disabilities, and 4 (13%) severe disabilities; 4 patients were lost to follow-up.
The Canadian studies 2,4,9,14 described 14 patients with angiography-positive (AP) cPACNS and 25 with AN-cPACNS. All 14 patients with AP-cPACNS received ASA, and 9 (64%) received additional anticoagulation with unfractionated/low-molecular-weight heparin for 2 months. High-dose prednisone was prescribed in all but 1 patient with AP-cPACNS, with a varied tapering schedule: 2-6 months in 6 patients (46%) and 12 months in 7 (54%). IV cyclophosphamide over 6 months followed by MMF or azathioprine was given to a further 7 patients (54%) with AP-cPACNS; 1 patient with AP-cPACNS received anticoagulation therapy only. High-dose prednisone with a 12-month taper was prescribed in all patients with AN-cPACNS. IV cyclophosphamide for 6 months in combination with azathioprine or MMF was reported in 20 patients (80%) with AN-cPACNS, while azathioprine or MMF alone as induction/maintenance therapy was described in 3 patients (12%) with AN-cPACNS. Two patients (8%) with AN-cPACNS did not receive therapy. Outcomes were reported serially across 24 months and included disease activity as measured by the Physician Global Assessment (PGA), a visual analog scale with a score of zero indicating no disease activity and 10 indicating severe activity; von Willebrand factor (vWF) antigen levels, where high numbers indicate high disease activity, with >1·40 IU/mL considered abnormal; and neurologic outcome on the PSOM, with good outcome defined as a score of ≤0·5 across each domain. Disease activity was elevated at time of diagnosis but decreased significantly over time in all patients (p < 0·001). vWF antigen levels decreased over time (p < 0·001). PSOM summary scores decreased significantly over time (p < 0·001), with 52% of patients having a good neurologic outcome at 12 months and 65% at 24 months. A total of 6 (15%) developed disease flare during follow-up, defined as an increase in PGA by at least 1 cm in the presence of recurrent symptoms, laboratory changes, and/or MRI findings. No patients died.
Gallagher et al. 15 described 5 cases of AP-cPACNS at their center with no fixed treatment regimen. Diagnosis was based on MRA and/or CA abnormalities. Two patients were treated with IVMP, either at their first or subsequent disease presentation. All patients received IV cyclophosphamide; however, 1 discontinued after the first infusion due to intolerance. All patients received oral prednisone of varying doses; 2 received short-term (<6 months) steroid therapy, 2 remained on oral steroids long-term (>6 months), and 1 was lost to follow-up after 6 months. One patient received long-term treatment with azathioprine and 2 with methotrexate. Four (80%) of the 5 patients received anticoagulation therapy with either ASA or warfarin during their illness. No validated outcome measures were used to assess outcomes for these patients; however, at last follow-up, 3 patients were reported as neurologically asymptomatic, 1 had mild residual deficits, and 1 was lost to follow-up but continued on treatment at their last documented visit.
Discussion
This is the first scoping review of treatment in cPACNS. Despite the paucity of randomized clinical trials, evidence in the literature offers support for treatment strategies. For APNP-cPACNS, the authors recommend treatment with long-term antiplatelet therapy to reduce the risk of stroke relapse and mortality as reported in the Pakistani cohort. [16][17][18][19] The authors concur with CNS vasculitis expert opinion in recommending short-term immunosuppression with IVMP and acute antithrombotic therapy, subsequently followed by high-dose oral prednisone (figure 2A; figure 2 shows the recommended treatment protocol). 3,10,[21][22][23] In APP-cPACNS, the authors support a combination of anticoagulation and induction therapy, with both IV steroids and IV cyclophosphamide with steroid taper (figure 2B), 3,4,[15][16][17][18][19] to increase the prospect of neurologic recovery. 3,10,15,[21][22][23][24][25][26] Induction therapy for AN-cPACNS should similarly consist of IV steroids and IV cyclophosphamide (figure 2C). Evidence in the literature 9,10,21-24 for long-term maintenance therapy in APP-cPACNS and AN-cPACNS supports daily MMF/mycophenolic acid in preference to azathioprine to avoid the possibility of treatment failure or intolerance, as reported by Hutchinson et al. 9 and the Pakistani group (figures 2B, 2C). [16][17][18][19] In refractory disease, Batthish et al. 25 reported successful use of infliximab therapy in treating 2 cases of AN-cPACNS, with good control of inflammation and subsequent prevention of brain damage. Rosati et al. 26 recently published a case series of 4 patients with AP-cPACNS successfully treated with long-term MMF. All revealed subsequent stability or improvement on MRI/MRA with no progression of arterial disease, and no relapses were reported in the follow-up period (range 10-42 months). 26 Sen et al. 24 also reported successful maintenance treatment with MMF in 3 cases of cPACNS that had failed methotrexate or azathioprine and steroid treatment alone. No patients were reported to have had recurring symptoms, side effects, or new lesions on MRI. 24 While this article provides a comprehensive review of the treatment literature in cPACNS, there were several limitations.
Based on our quality assessment definition for Q4, a minimum of 1 reproducible reported outcome was sufficient to satisfy outcome reproducibility. Many studies satisfied this criterion by reporting mortality, despite a lack of, or irreproducibility of, supplementary outcomes. Furthermore, using a validated measure like the PSOM was sufficient to satisfy the Q5 requirement. While the Pakistani group utilized the PSOM in outcome assessment, the lack of outcome definitions made the results difficult to generalize. Moreover, while the same patient cohort was included in all 4 Pakistani group studies and treatment regimens were well described, the number of patients reported in each treatment arm was grossly inconsistent and calculated incorrectly. Despite these contradictions, the studies sufficiently satisfied the reproducible-treatment criterion and were consequently rated high quality. Therefore, the authors believe that the quality assessment for a number of the studies reviewed is overstated; were these studies to be rescored retrospectively, they would be deemed low quality. Finally, the case series by Gallagher et al., 15 despite meeting inclusion criteria, only described case reports with inconsistent treatment regimens and lacked validated outcome measures.
In conclusion, the available literature on treatment in cPACNS was thoroughly reviewed and the findings summarized. Based on the evidence and current expert opinion, the authors provide recommendations for cPACNS treatment strategies. Rapid initiation of the recommended therapeutic interventions would serve to optimize survival and prevent permanent brain injury in patients with cPACNS, to achieve the best possible outcome. | 2019-05-17T13:33:54.146Z | 2019-05-03T00:00:00.000 | {
"year": 2019,
"sha1": "f073d0d1498303da55e1e4c7eed4ecf2fcf895e9",
"oa_license": "CCBYNCND",
"oa_url": "https://nn.neurology.org/content/nnn/6/4/e567.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f073d0d1498303da55e1e4c7eed4ecf2fcf895e9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214676236 | pes2o/s2orc | v3-fos-license | AN ECONOMIC ANALYSIS OF AGRICULTURAL PRODUCTION FUNCTION ON THE PADDY FIELDS OF THAILAND
This research study analyzes the variables that determine Thailand's paddy production, and through these factors the TFP in Thailand's paddy production can be determined. Moreover, the substitution elasticity between capital and labor has been measured in this research, and the substitution elasticity between older and younger farmers has also been investigated. It is expected that the results of the research will offer suitable suggestions for policy makers to improve the level of paddy sub-sector productivity in Thailand. The first objective of this study is to analyze the factors that determine paddy production with reference to Thailand. The Cobb-Douglas production function has been used in this study. Moreover, the standard error and coefficient of determination have been used to identify the significant regressors. The study has used the production function to formulate the problem. The findings of the study reveal that productivity plays a crucial role in improving the country's paddy production. Therefore, there is a need to determine whether the level of paddy production is sufficient (high enough) or insufficient. Productivity can be measured in different ways; the choice of measurement depends on the type of productivity information available.
Background
Apart from being a staple food and a source of calories, the paddy sub-sector is vital in influencing the Thai political environment. In the Thai political perspective, the government has formulated several policies involving subsidies and incentives, a target of 10-tonne production per hectare, and a paddy mini-estate programme. All of the above policies are tools to achieve multi-racial unity in Thailand, which will be achieved if the government can reduce poverty and the income gap between the agricultural (paddy farmers) and non-agricultural sectors. Thus, through unity, Thailand can create political stability and a stable government which promotes national development (Jermsittiparsert, Sriyakul, & Pamornmast, 2012; Jermsittiparsert, Sriyakul, & Rodoonsong, 2013; Sriyakul & Jermsittiparsert, 2017).
In addition, in 2009, this sector also provided employment opportunities to 316,000 Thai farmers (Fahmi, Samah, & Abdullah, 2013). There were about 116,000 full-time farmers who made this sector their main source of income, while more than 200,000 paddy farmers made it their second source of income (Fahmi et al., 2013; Fatah & Cramon-Taubadel, 2017). Apart from providing employment opportunities, this sector is also a source of income for farmers. However, the total incomes earned by paddy farmers are relatively low, which has contributed to the high rate of rural poverty. In 1980, the rural poverty rate was about 37.4 per cent, and paddy farmers accounted for approximately 73 per cent of the rural poor. Most of these farmers have an average income of less than RM1,500 per month (Kamaruddin, Ali, & Saad, 2013). By 2009, the rural poverty rate had decreased to 8.4 per cent, which to some extent shows an improvement in the rural poverty rate in Thailand. Nevertheless, in 2009, the rural poverty rate (8.4 per cent) was still high compared to the urban poverty rate (1.7 per cent). A majority of the poor people in rural areas are associated with paddy farming (Fahmi et al., 2013). For Thailand, the total poverty rate was recorded at about 1.4 per cent, of which 1 per cent were hard-core poor. If we assume that the poverty rate for all granary and non-granary areas equals 1.4 per cent, then the total poverty rate in the paddy sub-sector exceeds 11 per cent. Figure 1 shows overall paddy production in Thailand.
Figure 1. Paddy production in Thailand
Source: USDA
As mentioned by Zhou and Liu (2019) and Ligon and Sadoulet (2018), poverty has a positive relationship with total production. Therefore, it is assumed that 11 per cent poverty will cause a large fall in production. Globally, too, paddy production has increased significantly during the current decade (Figure 2).
Figure 2. Global paddy production and area
Source: Food and Agriculture Organization of the United Nations
Among the identified factors that contribute to the high poverty rate is age. On average, farmers involved in paddy farming are more than 60 years old. At this age, farmers can no longer effectively perform physical work in the paddy fields, which has contributed to the low level of productivity in the paddy sub-sector. Another factor that has contributed to the higher poverty rate is the low level of education among paddy farmers, which leaves them struggling to obtain a lucrative income from farming activities (Moyo, Francis, & Bessong, 2018). Several studies have shown that the economic situation has a direct relationship with the poverty rate (Zhou & Liu, 2019): economic growth reduces poverty rates, whereas an economic slowdown leads to an increase in farmers' poverty rates. This was true in Thailand, where poverty rates, especially among farmers, could not be reduced during the economic slowdown of 1997. Various policies and programmes have been carried out by the government to ensure that farmers get high returns from paddy-farming activities. These directly affect the farmers' income level and, perhaps, could drive them out of continuous poverty. However, the government also needs to ensure that the prices paid by rice consumers remain affordable (Sinha & Sheth, 2018). Basically, all the government's paddy-and-rice policies aim to make sure that the Self-Sufficiency Level (SSL) for rice increases enough to meet local rice demand. Additionally, an important objective in every policy and programme is to increase farmers' productivity, because when farmers' productivity increases, their income also increases. This will enable the country to achieve its rice self-sufficiency target (Sinha & Sheth, 2018).
Farmers must raise productivity in order to increase paddy production; improvement in the level of productivity plays a crucial role in increasing a country's paddy production (Jones, 2018). To increase paddy production effectively, there is a need to explore the variables which significantly affect the level of production. The efficiency of each input can be measured through information on production inputs. After the significant inputs of paddy production have been determined, the analysis of TFP (total factor productivity) can proceed. TFP is linked with technological progress and labor, so it is important to determine the substitution elasticity between technological progress and labor input. With an estimate of the substitution elasticity in paddy production, production can be characterized as labor- or capital-intensive. Moreover, farmers of different ages are involved in paddy production, so it is also critical to assess the substitution elasticity between old and young farmers. Such findings bear on farmers' productivity, and an increase in farmers' productivity indirectly increases the TFP of the paddy sub-sector.
By using the production function approach, TFP can be measured systematically (Jones, 2018). This research study analyzes the variables that determine Thailand's paddy production; through these factors, the TFP in Thailand's paddy production can be determined. Moreover, the substitution elasticity between capital and labor has been measured in this research, and the substitution elasticity between older and younger farmers has also been investigated. It is expected that the results of the research will offer suitable suggestions for policy makers to improve the level of paddy sub-sector productivity in Thailand. The first objective of this study is to analyze the factors that determine paddy production with reference to Thailand. The Cobb-Douglas production function has been used in this study, and the standard error and coefficient of determination have been used to identify the significant regressors.
The second objective of this research is to analyze the growth of TFP in the paddy sub-sector of Thailand. The Cobb-Douglas production function is the function commonly employed for the determination of TFP. The TFP value can be calculated as the residual of value-added growth after accounting for the growth of capital and labor inputs, weighted by their factor shares. The third objective of the study is to measure the substitution elasticity between capital and labor and between old and young farmers in the production of paddy. Determining the substitution elasticity in paddy production has several implications: it describes how readily capital and labor can be substituted for each other in the production process. The fourth objective of the study is to analyze the substitution elasticity between old and young farmers in the production of paddy.
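As a simple illustration of this residual (growth-accounting) calculation, the sketch below computes TFP growth under an assumed Cobb-Douglas technology; the growth rates and factor shares are invented for illustration and are not the study's estimates.

```python
# A minimal growth-accounting sketch of TFP as a residual, assuming a
# Cobb-Douglas technology with factor shares s_k and s_l. The numbers
# below are illustrative, not the paddy data analysed in this study.
def tfp_growth(dln_y, dln_k, dln_l, s_k, s_l):
    """Solow residual: TFP growth = output growth minus share-weighted
    growth of capital and labor inputs."""
    return dln_y - s_k * dln_k - s_l * dln_l

# Example: 4% value-added growth, 5% capital growth, 1% labor growth,
# capital share 0.3, labor share 0.7.
print(tfp_growth(0.04, 0.05, 0.01, 0.3, 0.7))  # 0.018, i.e. 1.8% TFP growth
```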
Farmers and policy makers may find this study useful. The study informs planning at both the macro and micro levels. Inter-governmental agencies are involved in macro planning, where policies are formulated by policy makers, while micro planning involves the mobilization of farmers at the production level. Government policies must ultimately be implemented through specific actions taken at the farm level, where the actual production takes place; in this way, farmers are indirectly influenced by the different policies the government introduces.
Literature review
Agriculture is one of the oldest sectors of the economy throughout the world. It has great significance for social, economic, and political development in almost every country. The history of agricultural development has given every developing country some unique experience; this history is distinct for each emerging country, yet all countries share similar characteristics in the development of agriculture. The share of the agriculture sector is declining in the majority of countries, so its contribution to economic growth declines, making it the third engine of growth. Nevertheless, its contributions to economic development are still significant.
The agriculture sector in the emerging economies is not performing optimally. Most farmers work at a small scale, which limits their contribution. Agricultural activities are carried out in traditional, subsistence form, so investments in agriculture give low returns. Moreover, farmers' families are mostly large, and their income is below the poverty line; they are unable to support their families financially, which leaves them living in miserable conditions and pushes them into hard-core poverty. Some researchers have described these circumstances (Caiazza et al., 2016; Thompson, 2018), claiming that small-scale farmers in the emerging economies share the same characteristics: low productivity, inefficiency, and poverty. Usually, the yield is used for family consumption and only the surplus is sold. In this way, the income earned by farmers is low, which limits their savings. As a result, small-scale farmers face difficulties in investing in and purchasing quality seeds, machines, and fertilizers, and for these reasons many small-scale farmers in the emerging economies are in debt (Ligon & Sadoulet, 2018). Several empirical studies have been carried out to formulate strategies to increase the income level of small-scale farmers. Researchers have suggested that there is a need to improve the quality of life of small-scale farmers, particularly in the emerging economies, and that to resolve this issue the sector's human capital should be restructured and agriculture commercialized, so as to make the agricultural system systematic.
Agricultural training centers should be equipped with modern equipment and laboratories so that skilled human capital can be produced for the agriculture sector. In this way, the grade, production, and quality of products in the agriculture sector can be improved, and product prices will be competitive as well. These efforts, however, are not straightforward, and certain challenges stand in the way. Local communities may resist change, and many people regard agriculture as a third-class job and avoid it; most of these are young people. Because of this perception, unemployment has increased in rural areas. Moreover, the many job opportunities in urban regions draw young people to migrate there for employment, and this influx of the young fills job opportunities in the cities. These circumstances help keep unemployment among the young down: the rate of unemployment in the urban region is not more than 10 percent yearly (Giannakis & Bruggeman, 2015; Wharton, 2017).
Dual economies are still the norm in developing countries: modern and traditional sectors exist side by side, and both make major contributions to economic growth. More capital is employed in the industrial sector, and skilled labor is required there, whereas the agriculture sector contains many unskilled people who depend on traditional agricultural techniques. Modern techniques are employed by farmers who work at a large scale, and they therefore enjoy greater returns as well.
Agricultural and Economic Development
In the essay "An Inquiry into the Nature and Causes of the Wealth of Nations", Adam Smith detailed the causes and determining factors of economic growth. He argued that the extent of the market limits the division of labor, which helps in understanding the concept of wealth creation. Growth in market size encourages entrepreneurs to innovate, and specialized labor is created through major capital investments. Adam Smith claimed that this can increase labor productivity, so that capital accumulation, economic development, and savings can all increase (Lucas & Fuller, 2017).
The problems of economic development have also been discussed by classical economists such as Stuart Mill, Ricardo, and Malthus. For instance, in his famous book "The Principles of Political Economy and Taxation", published in 1817, Ricardo claimed that the most dominant sector of a country is the agriculture sector. The people were classified as
landlords, capitalists, and labourers. It was also stated that land is a scarce factor and competition exists over its utilization (Lucas & Fuller, 2017): land is in demand for use in either agriculture or industry. Ricardo said in his book that technology changes with the passage of time; economic growth can occur rapidly because of changes in the technological level, and the stationary state can be avoided by technological change. Therefore, economic development can be enhanced by changes in technology. Ricardo added that agricultural development is based on labor. The wage rate paid to labor determines whether workers continue working in the agricultural sector, and the minimum level of wages determines the increase or decrease in the labor force. When owners receive a rate of return higher than the minimum, this results in accumulation, and investors are attracted to invest their money to obtain higher returns. The marginal product can decline as employment increases on a limited area of land; to resolve this issue, Ricardo suggested that technology can be used and capital should be accumulated, since the productivity of labor increases with this accumulation. Marx also discussed the concept of capital accumulation for economic growth, and this was later taken up by Domar, Kaldor, and Harrod, members of the neo-Keynesian and neo-classical schools. Neo-Keynesian and neo-classical economists thought that agricultural productivity could be increased by savings.
Therefore, it must be ensured that investment is made in both the industrial and agriculture sectors. Schumpeter suggested that certain internal agents promote development, resulting in new combinations of the factors of production; he named these agents entrepreneurs. Similarly, the process of development happens when the ideas of employers are supported by the talent of entrepreneurs, and the development process is supported by various factors and infrastructure, such as finance and physical provisions.
From the perspective of economic development, the agriculture sector plays a key role in the economy, particularly in the initial stages of development (Awokuse & Xie, 2015; Inam & Effiong, 2017; Vigliarolo, 2020).
A large surplus is produced by the agriculture sector, which is necessary for economic transformation. This role cannot be performed by the non-agriculture sector because of its small size. Therefore, these limitations should be overcome by the agriculture sector in the emerging economies; the non-agriculture sectors cannot progress until these challenges and obstacles are overcome.
Improvements in the non-agriculture sectors can result in higher wages for employees, enabling them to spend more on necessities such as food and clothing. As a result, the demand for food increases. The supply of food is inelastic, meaning it cannot adjust quickly to changes in price, so when the demand for food increases, the prices of food products rise. Rising food prices have a negative influence on society. This problem can be solved through imports, but that can be an expensive solution because of financial constraints. Economic transformation can instead be encouraged by changes in the agriculture sector (Sender, 2016). A crucial role is played by the agriculture sector in the initial stages of economic transformation, and this occurs in various ways. Income and residents' welfare may increase because of growth in the agriculture sector, which enables them to raise their demand for food products and other services produced by the non-agricultural sectors (Rehman, Chandio, & Jingdong, 2017).
Improvements in the agriculture sector encourage the development of agro-based industry, whose growth is linked to downstream sectors such as textiles, fuel, beverages, food, and machinery. Agro-based industry is crucial because it can offer inputs of production, including pesticides, agricultural machinery, and fertilizers (Wiggins, Sabates-Wheeler, & Yaro, 2018). Infrastructure in rural and urban regions expands with the development of agro-based industry, supported by the government.
Developments in Production Function Analysis
The production function is an important tool of economic analysis which has evolved over time. There are two schools of thought regarding the pioneers of the production function: one views Philip Wicksteed as the pioneer of the production function, while the other regards Johann von Thunen as the first pioneer. The production concept refers to skillfully arranging acquired knowledge; it does not act as a tool for representing the consequences of economic choices but rather as a tool for obtaining an entity which can inform economic decision-making.
Economic efficiency is the main area of concern which is often highlighted when analyzing the production function. Researchers (Chukwuji, Inoni, & Oyaide, 2006; Leibenstein, 1966) have suggested that economic efficiency is of two types, namely resource-allocative efficiency and technical efficiency. The term efficiency relates to engineering knowledge. Several empirical researchers assume that production functions do not involve any managerial, technical, or engineering efficiencies; under this assumption, several previous studies (Chukwuji et al., 2006) have particularly focused upon the allocative efficiency of resources, which represents an ideal combination of resource-allocation and technical efficiencies.
Generally, a physical relationship exists between inputs and output, such that, for example, the combination of one machine and one unit of labor will produce a given number of output units. The literature frequently uses financial values to indicate the input-output relationship, although a few studies have also used different physical units to measure it. This may cause certain problems when performing empirical analysis involving indivisible units. There is, however, another view, by Faber, Proops, and Baumgärtner (1998), that the production process yields a variety of outputs. A weighted price can therefore be used to account for differences among products; in this way, the wastage and error involved in the physical production process can be isolated.
Researchers have often assumed that a firm's technical efficiency issues can be resolved through the production function, but this is in fact a false proposition because a different measuring unit is used for each variable. Apart from this, the production function must not be taken as a business model, since it ignores several cost and management aspects. To avoid such problems, a non-parametric approach, Data Envelopment Analysis (DEA), was suggested by several researchers (Angelidis & Lyroudi, 2006; Banker, Charnes, & Cooper, 1984). According to Emrouznejad and Thanassoulis (2001), the DEA approach can handle multiple input-output analyses and does not impose a mathematical form on the production function.
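To make the idea concrete, the sketch below solves the input-oriented CCR envelopment program with a linear-programming solver; the two-input, one-output data are invented for illustration, and this is one common DEA formulation rather than the specific model of the studies cited above.

```python
# A minimal sketch of an input-oriented CCR DEA model solved as a linear
# program: minimize theta subject to sum_j lambda_j * x_j <= theta * x_o
# and sum_j lambda_j * y_j >= y_o, with lambda >= 0.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency score of DMU o. X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
    Decision variables are [theta, lambda_1, ..., lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # objective: minimize theta
    A_in = np.c_[-X[:, [o]], X]                 # input constraints
    A_out = np.c_[np.zeros((s, 1)), -Y]         # output constraints
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, o]], method="highs")
    return res.fun

X = np.array([[2.0, 3.0, 6.0], [3.0, 2.0, 7.0]])  # 2 inputs for 3 hypothetical farms
Y = np.array([[3.0, 4.0, 5.0]])                   # 1 output
for o in range(3):
    print(f"farm {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```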
Knut Wicksell was a pioneer in developing the algebraic hypothesis into physical agricultural production functions in the field of agricultural economics. He reported positive increasing returns on labor and capital when applied to infertile soil. Based on this hypothesis, he explained that the quality and quantity of inputs determine the size and growth of agricultural output. In Knut Wicksell's view, the input-output relationship can be demonstrated in mathematical form. Thus, for a certain period of time, if X1, X2, and X3 are the inputs and P denotes the total output, then the production function can be presented as follows:
P = f(X1, X2, X3) ……… (1)
Though Wicksell was the key person who formulated a basic production function, the first empirical estimation was performed by Cobb and Douglas (1928). The production function was later known as the Cobb-Douglas production function (CD). The origin of certain functions can be traced back to the work of Wicksell.
P = X1^a X2^b X3^c ……… (2)
According to Wicksell, the coefficients of Equation 2 above can sum to unity, giving constant returns to scale. In their study, Charles W. Cobb and Paul H. Douglas used a production function similar to that proposed by Wicksell. They used data on the U.S. manufacturing industries from 1899-1922, and their work was the first empirical work using time-series data. Generally, the form of the production function is as follows:
P = b L^k C^j ……… (3)
where P is output, L is labour, and C is capital input in the industry. The estimation that resulted from the production function model used by Charles W. Cobb and Paul H. Douglas was as follows:
P = 1.01 L^0.75 C^0.25 ……… (4)
From Equation 4 above, Cobb and Douglas (1928) indicated that the labour and capital coefficients sum to one, which implies constant returns to scale. This finding confirmed Wicksell's earlier hypothesis. If the sum of the coefficients is greater or less than one, then total product will be larger or smaller than the combination of inputs used; we can therefore identify whether firms enjoy increasing or decreasing marginal productivity. In the same study, Cobb and Douglas (1928) stressed the unitary degree of elasticity, that is, an elasticity of resources equal to one. They employed the function in which the coefficients j and k can take non-zero values. The Cobb-Douglas production function has remained popular to this day because it is the simplest production function. After the development of the production function highlighted by Cobb and Douglas (1928), the study of production functions became popular among numerous researchers (Fraser, 2002). Various forms of estimation can be carried out using production functions, and studies of the production function can draw on several types of data, such as cross-sectional, time-series, and panel data.
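As a concrete sketch of how such an estimation proceeds, the example below fits the Cobb-Douglas form of Equation 3 by ordinary least squares on logged, simulated data; the data-generating values are assumptions for illustration only, not estimates from any of the studies cited.

```python
# A minimal sketch of estimating a Cobb-Douglas production function
# P = b * L^k * C^j by log-linear OLS on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
L = rng.uniform(10, 100, n)                                   # labour input
C = rng.uniform(10, 100, n)                                   # capital input
P = 1.2 * L**0.7 * C**0.3 * np.exp(rng.normal(0, 0.05, n))    # "true" k=0.7, j=0.3

# ln P = ln b + k ln L + j ln C + u  ->  estimate by least squares
X = np.column_stack([np.ones(n), np.log(L), np.log(C)])
coef, *_ = np.linalg.lstsq(X, np.log(P), rcond=None)
ln_b, k, j = coef
print(f"b = {np.exp(ln_b):.3f}, k = {k:.3f}, j = {j:.3f}, "
      f"returns to scale = {k + j:.3f}")
```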
Single Output Production Functions
Knowledge of the production function has grown enormously since the 1770s. Over this period, its development has involved a number of prominent scholars, among them Turgot, Johann von Thunen, Philip Wicksteed, Malthus, and Cobb and Douglas (Hussain et al., 2019). Since then, the production function has been a crucial tool of empirical analysis across all schools of economic doctrine. Returning to the historical development of production functions, many scholars believe that Turgot was the first to have introduced production-function knowledge, around 1767. According to Schumpeter (1954), Turgot argued how dissimilarity in factor proportions affects the marginal productivity of production. Based upon Turgot's observations, the utility of consuming one product may fall if the supply of the product increases. An increase in the quantity of a production input may increase productivity up to a maximum point; beyond this point, increasing the units of input used may decrease the marginal productivity level towards zero, and eventually, if more input units are added, productivity may turn negative. Consequently, after a maximum point, additional input may be unproductive. In the decades after Turgot, production-function knowledge continued to evolve, and several scholars contributed to this development, such as Johann von Thunen and Philip Wicksteed (Mishra, 2007). The numerical concept of the production function was introduced by Malthus, who presented the logarithmic production function in 1798. The idea of the logarithmic production function is to capture the law of diminishing returns. To facilitate the description of his model, Malthus specified that population increases in a geometric ratio (1, 2, 4, 8, 16, 32, ...) while land increases in an arithmetic ratio (1, 2, 3, 4, 5, ...). Malthus then presumed that labour may experience diminishing returns when combined with land.
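A small numerical sketch of this diminishing-returns idea follows; the logarithmic form and parameter values are assumptions chosen purely for illustration, not Malthus's original specification.

```python
# Illustrative sketch of diminishing returns: with land fixed, each extra
# unit of labour adds less output. The hypothetical log production function
# below is chosen only to exhibit the pattern.
import numpy as np

land = 100.0
labour = np.arange(1, 11)
output = 50 * np.log(1 + labour * land / 100)   # assumed log production function
marginal_product = np.diff(output)              # extra output per extra worker

for l, mp in zip(labour[1:], marginal_product):
    print(f"labour = {l:2d}, marginal product = {mp:.2f}")  # strictly decreasing
```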
Following Malthus, David Ricardo introduced the idea of a quadratic production function in 1817. According to Ricardo, growth may stop when the diminishing returns of capital are combined with limited land; at this point, investment may drop, because economic growth has reached the stationary phase. After Malthus and Ricardo, Johann von Thunen introduced the exponential production function; in fact, he was the first person to use this function. Von Thunen's exponential production function can be written as follows (Mishra, 2007):
……… (5)
where G1, G2, and G3 are labour, capital, and fertiliser, and ai is a parameter; PR is von Thunen's production function (Blaug, 1985; Lloyd, 1969). According to Lloyd (1969), von Thunen was probably the first economist to have applied differential calculus to calculating the level of productivity, and perhaps the first to use calculus to solve problems of economic optimisation. Lloyd further added that von Thunen also used calculus to interpret the marginal productivity of the economic production function. He was the first to formulate an algebraic production function such as Equation 6:

WPR = h CPW^n …………. (6)

where WPR is output per worker and CPW is capital per worker. Writing von Thunen's production function in terms of total labour L and capital C, so that P = L x WPR = L h (C/L)^n, gives:

P = h L^(1-n) C^n …………. (7)

Based on the above equations, we can conclude that von Thunen's production function is a hidden Cobb-Douglas production function (Blaug, 1985; Lloyd, 1969). Based on Equation 7, von Thunen discovered that labour alone cannot be an effective production input. Von Thunen then transformed Equation 7 into Equation 8 (Mishra, 2007).
…………. (8)
Nevertheless, after a long review process, von Thunen corrected his early notation regarding labour: in his new formulation, he found that labour alone can produce product. However, modern economists have never formulated a production function using labour as the sole factor of production. In addition, in 1923, another scholar, Wicksell, introduced a production function similar to the Cobb-Douglas production function with exponents summing to unity. Based on this previous work, Samuelson (1979) regarded the Cobb-Douglas production function as merely a special case of other production functions. The Cobb-Douglas production function can be written as follows:
PR = A L^α CAP^β e^u ……… (9)

where PR is output, L is labour, CAP is capital, and u is a stochastic disturbance term; α and β are the elasticities of output with respect to the respective inputs of production. Meanwhile, the Marginal Rate of Technical Substitution (MROTS) can be written as follows:

MROTS = MP_L / MP_CAP = (α/β)(CAP/L) ……… (10)

Equations 11 and 12 define the total cost (COST) and the isocost line, respectively, in terms of the quantity of labor (L), the quantity of capital (CAP), the wage rate (w), and the rental price of capital (r):

COST = wL + r CAP ……... (11)

CAP = COST/r - (w/r)L ……... (12)

Equations 13 and 14 are alternative ways of expressing the necessary condition for the optimal combination of inputs. The first states that the optimum combination is found where the absolute value of the slope of an isoquant (MROTS) is equal to the absolute value of the slope of the isocost line. The second notes that the marginal rate of technical substitution is equal to the ratio of the marginal products of labor and capital and is therefore equal to the absolute value of the slope of the isocost line at the optimum; rewritten, it shows that the optimum combination of inputs is found where the marginal product of each input divided by its cost per unit is the same for all inputs.

MROTS = w/r ……… (13)

MP_L / w = MP_CAP / r ………. (14)

The elasticity of substitution of the Cobb-Douglas production function can be expressed as follows:

σ = d ln(CAP/L) / d ln(w/r) = 1 …………. (15)

If σ = 1, any change in L/CAP will be matched by a proportional change in w/r, and the relative income earned by capital and labour will stay constant. Thirty-three years after the Cobb-Douglas production function was introduced, Arrow, Chenery, Minhas, and Solow (1961) made some modifications to the function. However, the changes were only an extension of the Cobb-Douglas production function, not an alternative paradigm. One property of the Cobb-Douglas production function is that the elasticity of substitution between capital and labour is constrained to unity. The production function formulated by Arrow et al. (1961), in contrast, allows the elasticity of substitution between labour and capital to be flexible, with a value lying between zero and infinity. This function is known as the Constant Elasticity of Substitution (CES) function. The CES spans the Cobb-Douglas, Leontief, and linear production functions; those three production functions are therefore special cases of the CES. Nevertheless, its value remains fixed along and across isoquants and ignores the size of output or input in the production process. The CES function can be written as follows:
V_i = γ [δ CAP_i^(-ρ) + (1-δ) L_i^(-ρ)]^(-1/ρ) e^(U_i) ……… (16)
where V_i is value-added, CAP_i is capital, and L_i represents labour. The notations γ, δ, and ρ denote the efficiency, distribution, and substitution parameters, respectively. The random errors U_1, U_2, ..., U_n are assumed to be independent and normally distributed, and n represents the number of observations. Under perfect competition, the elasticity of substitution for the CES production function is σ = 1/(1 + ρ). Equation 16 can be transformed into log form as follows:

ln(V_i / L_i) = β_0 + σ ln(w_i) + U_i ……... (17)

where i = 1, 2, 3, ..., n, w_i is the wage for labour, and σ is the CES elasticity of substitution. If ρ approaches zero (σ = 1), we get the linear homogeneous Cobb-Douglas function; if ρ approaches infinity (σ approaches zero), we get the Leontief function; and if ρ approaches -1 (σ approaches infinity), we get the linear production function. Conversely, there are two problems related to the CES production function. The first problem is that the elasticity of substitution is constant along and across the isoquant. The second problem arises when the researcher uses more than two inputs: for example, a CES production function with three inputs may yield three values of elasticity. However, according to the impossibility theorems of Uzawa and McFadden, it is impossible to obtain such elasticity values if the number of inputs used is more than two (Mishra, 2007). The next production function is the Variable Elasticity of Substitution (VES):
……… (18)
The random error (U) is independent and normally distributed. Equation 18 is then transformed into log form as follows:

ln(V_i / L_i) = β_0 + β_1 ln(w_i) + β_2 ln(CAP_i / L_i) + U_i ……... (19)

where β_2 is the coefficient of the logarithm of the capital-labour ratio. If its value is zero, then the model reduces to the constant elasticity of substitution (CES) production function of Equation 16. The elasticity of substitution for the VES production function can be expressed as follows:

…. (20)

where the additional term entering Equation 20 is the ratio of total factor costs to the rental cost of capital. By the mid-1970s, the generalized Cobb-Douglas production function and the CES were almost complete. Both of these functions assume that the marginal rate of technical substitution (MRTS) of the factors of production is driven by changes in factor prices. In addition, both the Cobb-Douglas and CES production functions are free from technical progress; that is, any technological progress does not affect the labor and capital terms in the production function. In technical terms, this situation is called Hicks-neutral. Basically, there are three types of neutrality: Hicks, Harrod, and Solow. Nonetheless, changes in technology may cause changes in production possibilities. A technological change is Hicks-neutral if it does not affect the capital-labor ratio when factor prices are unchanged; it is Harrod-neutral if it does not affect the capital-output ratio when the price of capital is unchanged; and it is Solow-neutral if it does not affect the labor-output ratio when the wage is unchanged.
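The limiting cases of the CES noted above can be checked numerically; in the sketch below, output under the CES of Equation 16 approaches the Cobb-Douglas value as ρ approaches zero. The parameter values are illustrative assumptions.

```python
# Numerical sketch of CES limiting behaviour: as rho -> 0, sigma -> 1 and
# CES output converges to the Cobb-Douglas value gamma * K^delta * L^(1-delta).
import numpy as np

def ces(K, L, gamma=1.0, delta=0.4, rho=0.5):
    return gamma * (delta * K**-rho + (1 - delta) * L**-rho) ** (-1.0 / rho)

def cobb_douglas(K, L, gamma=1.0, delta=0.4):
    return gamma * K**delta * L**(1 - delta)

K, L = 8.0, 27.0
for rho in [1.0, 0.1, 0.01, 0.001]:
    sigma = 1.0 / (1.0 + rho)                  # elasticity of substitution
    print(f"rho = {rho:6.3f}  sigma = {sigma:.3f}  CES = {ces(K, L, rho=rho):.4f}")
print(f"Cobb-Douglas limit: {cobb_douglas(K, L):.4f}")
```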
Method
There are several methods available to test the long-run relationship between regressors, such as the residual-based model of Granger (1987) and the approach of Johansen and Juselius (1990). However, the present study employed the autoregressive distributed lag (ARDL) approach developed by Pesaran, Shin, and Smith (2001). This technique has become popular in recent years and is often used to analyse the long-run relationship between the regressors in an empirical model. It also allows dynamic interactions among the variables. There are a few reasons why this technique was chosen. First, the ARDL model provides power in testing the long-run relationship for different orders of integration, whereas other methods require all the explanatory variables to be integrated of the same order. Therefore, the ARDL method does not require pre-testing of the order of integration of the variables in the model. Hence, the ARDL approach to cointegration can be applied regardless of whether the underlying explanatory variables are purely I(0), purely I(1), or mutually cointegrated (Verma, 2007). However, for an accurate result, the response variable needs to be integrated of order one, I(1). According to Pesaran et al. (2001), when pre-testing is involved, a certain degree of uncertainty (I(0), I(1), or mutually cointegrated) with regard to the analysis of level relationships is created. Therefore, this situation may create problems for the researcher in selecting the appropriate method of analysis. Furthermore, numerous scholars claim that unit root tests lack power and have poor size properties, especially in small samples (Virmani, 2004).
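As an illustrative sketch of the mechanics, the snippet below estimates a simple ARDL(1,1) by OLS on simulated series and backs out the implied long-run coefficient; the variable names and data are placeholders, not the study's specification, and a full bounds-testing procedure (Pesaran et al., 2001) would add the comparison against the tabulated critical values.

```python
# A minimal ARDL(1,1) regression estimated by OLS on simulated data:
# y_t = c + a*y_{t-1} + b0*x_t + b1*x_{t-1} + u_t
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
x = pd.Series(np.cumsum(rng.normal(size=n)), name="x")   # an I(1) regressor
y = pd.Series(0.5 * x + rng.normal(size=n), name="y")    # cointegrated response

df = pd.DataFrame({"y": y, "x": x})
df["y_l1"] = df["y"].shift(1)   # lagged dependent variable
df["x_l1"] = df["x"].shift(1)   # lagged regressor
df = df.dropna()

X = sm.add_constant(df[["y_l1", "x", "x_l1"]])
res = sm.OLS(df["y"], X).fit()

# Implied long-run coefficient on x: (b0 + b1) / (1 - a)
b = res.params
long_run = (b["x"] + b["x_l1"]) / (1 - b["y_l1"])
print(res.summary().tables[1])
print("long-run coefficient on x:", round(long_run, 3))
```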
Results and Discussions
The results of the correlation test between the dependent variable and the independent variables proved very useful in the pre-estimation analysis, especially as regards potential relationships suggested by theory. Therefore, prior to the econometric analysis, the statistical correlations of the variables were examined, which helped in determining the statistical relationships between and among the variables (see Table 1). The optimum model selection was undertaken as depicted in Table 2 and Table 3; the selected model is ARDL(1,1,0,2,1,0,1,2,0,0,0,0). In the present study, it has been observed that old and young farmers, capital, land, paddy price, and fertilizers are the significant paddy production inputs and are capable of influencing the production volume both in the short and the long run. Other than being determinants of paddy production, these factors also play a significant role in the productivity growth of the paddy sub-sector. The integration of these production factors has shown that Thailand has a productivity growth rate of less than 5%, which is unfavorable for the overall growth of the paddy sub-sector. In the long run, such low productivity growth will lead to greater dependency upon rice imports. Persistently low paddy production levels may create several issues, such as insufficient food supply to satisfy people's demand. Thus, paddy productivity must be improved to enhance revenues, using better technology, research and development, high-quality seeds, and high capital investments. Furthermore, cultivation techniques that aid rice-breed germination, such as the System of Rice Intensification (SRI) and transplanting, should be introduced.
Meanwhile, the substitution between labor and capital is inelastic, with a value close to 1. This indicates that substitution between labor and capital is not complicated, which implies the willingness of Thai farmers to integrate new technology into their activities. The integration of technology and machinery into farming has gradually replaced labor, which is expected to help this sector switch towards labor-saving technologies. This research also observed that old and young farmers may act as perfect substitutes, implying that differences in farming experience have no significant impact on paddy yield, particularly because most farmers in Thailand use similar types of machinery and technology in their farming processes. Furthermore, since labor can now be replaced with machines, the distinction between old and young farmers is not a major area of concern in paddy cultivation.
Conclusion
The current research has found that land, capital, young farmers, old farmers, fertiliser, and paddy price are the important inputs in paddy production. All these inputs can influence the volume of production in either the short run or the long run. Apart from being determinants of paddy production, all these factors are also important in the productivity growth of the paddy sub-sector. Using all of these production factors, the study has found that the level of productivity growth for all four Thailand regions is lower than 5 per cent. This situation is not favorable to the growth of the paddy sub-sector as a whole: in the long term, if productivity growth stays low, it will create a dependency on rice imports, and if the level of paddy production remains low, this will create problems of inadequate food supply to meet the people's demand. Meanwhile, the substitution between capital and labour is inelastic, with a value near one. This shows that the substitution between capital and labor is not so difficult, which indirectly shows that farmers in the Thailand areas are willing to accept the inclusion of technology in farming activities. Gradually, the use of machinery and technology has replaced the role of labour in farming activities, which may help this sector move towards labour-saving technologies. Consistent with the above findings, the present study has found that young farmers and old farmers are perfect substitutes, indicating that differences in farming experience do not have a significant impact on paddy yield. This is because young and old farmers in the Thailand regions basically use a homogeneous level of technology and machinery. Whether farmers are young or old is not a major concern in paddy cultivation in the Thailand areas because machines can replace labour in many ways. | 2020-03-26T10:26:19.847Z | 2020-03-30T00:00:00.000 | {
"year": 2020,
"sha1": "31647673319f49b50a1d4225b8a46831a430de22",
"oa_license": "CCBY",
"oa_url": "https://jssidoi.org/jesi/article/download/504",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "748d706fe2ea529cea74688154caf340c6318034",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
235283768 | pes2o/s2orc | v3-fos-license | A High-Availability and Integrity Layer for Cloud Storage, Cloud Computing Security: From Single to Multi-Clouds
The use of cloud computing has grown rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Ensuring the security of cloud computing is a central issue in the cloud computing environment, as customers often store sensitive information with cloud storage providers, yet these providers may be untrusted. Dealing with "single cloud" providers is predicted to become less popular with customers due to the risks of service availability failure and the possibility of malicious insiders in the single cloud. A movement towards "multi-clouds", or in other words "interclouds", has emerged recently. HAIL (High-Availability and Integrity Layer) is a distributed cryptographic system that allows a set of servers to prove to a client that a stored file is intact and retrievable. HAIL strengthens, formally unifies, and streamlines distinct approaches from the cryptographic and distributed-systems communities. Proofs in HAIL are efficiently computable by servers and highly compact, typically tens or hundreds of bytes, irrespective of file size. This paper surveys recent research related to single- and multi-cloud security and addresses possible solutions. It finds that research into the use of multi-cloud providers to maintain security has received less attention from the research community than has the use of single clouds. This work aims to promote the use of multi-clouds due to their ability to reduce the security risks that affect cloud computing users.
Introduction
Multicloud is the management of disparate infrastructure domains as though they were a single cohesive set of resources, regardless of where those resources reside, enabling customers to consume those resources from anywhere and in any environment. In this regard, even though multicloud is concerned with the underlay transport, it is built principally around higher-layer functionality. Generally, multi-cloud is more an operational environment than a description of any particular architecture. There are various security issues that need to be addressed from the point of view of both cloud users and cloud service providers. The most important issues that need to be addressed from the cloud service provider's perspective are data privacy and vendor lock-in. Cloud computing has also provided a new platform for data storage as a service. The explosive growth of digital data demands large storage devices and extensive data computation; hence, the cloud computing paradigm needs to provide secure data storage and access. In cloud computing, the user's data is shared with a third party for storage and computation, so the integrity of the data in this untrusted environment can be questioned. Protecting private information, such as credit card details or patients' health records, from intruders or hackers is of prime importance. One solution that would improve the security of the data is migrating from single cloud storage to multi-cloud storage. Research related to both single cloud storage and multi-cloud storage is therefore a significant topic. There is also a need to address legal issues, such as the generation of a master service-level agreement and portability for Storage as a Service. Web application security must restrict access by users, since once data is hacked it is freely copied by the intruder. Cloud security is aimed at identifying flaws and examining the related subset; once a flaw is identified, it is refined into a problem statement for the specialists. As these weaknesses are exposed, a set of protocols is expected to be provided for securing users' and customers' information. At the beginning of cloud computing, the technology was available only to the corporate sector. Nevertheless, as the technology has matured, the number of potential users has grown exponentially [3]. The set of security features that were embedded to provide the required protection for information on the cloud may no longer be feasible: the amount of time spent on processing the data and the resources allocated create huge overhead costs that are payable by large corporations but are unacceptable for most users. Hence, the traditional way of provisioning must be changed so that the masses can benefit from cloud services. This can be achieved by moving the data from a local data centre to cloud servers. This shift increases the concern for security, as the data is held by a third party and the existing set of security protocols used locally is no longer applicable. Users storing data on the cloud are now dependent on the security provided by the service provider [4].
Related work
HAIL may be viewed loosely as a new, service-oriented version of RAID (Redundant Arrays of Inexpensive Disks). While RAID manages file redundancy dynamically over hard drives, HAIL manages such redundancy across cloud storage providers. Recent multi-hour failures in S3 illustrate the need to protect against basic service failures in cloud environments. Considering the rich targets for attack that cloud storage providers will present, HAIL is designed to withstand Byzantine adversaries (RAID is primarily designed for crash recovery). Data dispersal: distributed information dispersal algorithms (IDAs) that tolerate Byzantine servers have been proposed in both synchronous and asynchronous networks. In these algorithms, file integrity is enforced within the pool of servers itself, and several protocols protect against faulty clients that send inconsistent shares to different servers. In contrast, HAIL places the task of file-integrity checking in the hands of the client, or some other trusted external service, and avoids communication among servers. Unlike previous work, which verifies integrity at the level of individual file blocks, HAIL verifies integrity at the granularity of a full file. This difference motivates the use of PORs in HAIL rather than block-level integrity checks. Universal hash functions: our IP-ECC primitive draws together several strands of research that have developed independently [5]. At the core of this research are universal hash functions (UHFs); in the distributed-systems literature, common terms for variants of UHFs are algebraic signatures or homomorphic fingerprinting. UHFs can be used to construct message-authentication codes (a performance evaluation of various such schemes exists; a sketch follows below). In particular, a natural combination of UHFs with pseudorandom functions (PRFs) yields MACs; these MACs can be aggregated over multiple data blocks and consequently support compact proofs over large file samples. PORs and PDPs: Juels and Kaliski (JK) propose a POR protocol and give formal security definitions. The basic JK protocol supports only a bounded number of challenges, whose responses are precomputed and appended to the encoded file. Shacham and Waters (SW) use a homomorphic MAC construction that enables an unbounded number of queries, at the expense of larger storage overhead. Their MAC construction follows the UHF + PRF paradigm; however, they build a UHF based on a random linear function, rather than on a more efficient, standard error-correcting code. In concurrent and independent work, others give general frameworks for POR protocols that generalize both the JK and SW protocols. Both papers propose the use of an error-correcting code in computing server responses to client challenges, with the goal of guaranteeing file extraction through the challenge-response interface. The focus of the former is largely theoretical, giving extraction guarantees for adversaries answering correctly to an arbitrarily small fraction of challenges. In contrast, Bowers et al. consider POR protocols of practical interest (for which adversaries with high corruption rates are detected quickly) and show different parameter trade-offs when designing POR protocols. A closely related construction called proof of data possession (PDP) has also been proposed [6]. A PDP detects a large fraction of file corruption but does not guarantee file retrievability.
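To make the UHF + PRF paradigm concrete, here is a minimal, illustrative Python sketch. It is not HAIL's actual construction; the field size, block format, and key handling are all simplifying assumptions. Each block gets a compact tag, and a single aggregated response can be checked against many tags at once because the universal hash is linear in the blocks.

import hmac, hashlib, secrets

P = 2**127 - 1  # prime modulus for the universal hash (illustrative choice)

def prf(key, i):
    # Pseudorandom mask per block index, built from HMAC-SHA256.
    return int.from_bytes(hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest(), "big") % P

def to_coeffs(block):
    return [int.from_bytes(block[j:j + 8], "big") for j in range(0, len(block), 8)]

def uhf(r, coeffs):
    # Universal hash: evaluate the block's coefficients as a polynomial
    # at the secret point r (a keyed "algebraic signature").
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * r + c) % P
    return acc

def tag(r, k, i, block):
    return (uhf(r, to_coeffs(block)) + prf(k, i)) % P

# Client keys and per-block tags.
r, k = secrets.randbelow(P), secrets.token_bytes(32)
blocks = [secrets.token_bytes(32) for _ in range(4)]
tags = [tag(r, k, i, b) for i, b in enumerate(blocks)]

# Challenge: the server aggregates the challenged blocks coefficient-wise
# (it needs no keys); the response stays a handful of field elements no
# matter how many blocks were challenged.
chal = [secrets.randbelow(P) for _ in blocks]
agg = [0] * (32 // 8)
for c, b in zip(chal, blocks):
    for j, m in enumerate(to_coeffs(b)):
        agg[j] = (agg[j] + c * m) % P

# Verification: because the UHF is linear in the blocks, the aggregated
# tag (with PRF masks stripped) must match the UHF of the aggregated block.
lhs = uhf(r, agg)
rhs = sum(c * (t - prf(k, i)) for i, (c, t) in enumerate(zip(chal, tags))) % P
print(lhs == rhs)  # True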
Subsequent work shows how file updates may be performed in the PDP model, and others propose the extension of PDPs to multiple servers; that proposal achieves computational cost reduction through PDP invocations over multiple replicas of a single file, rather than a share-based approach. Others describe a PDP scheme based on full-file processing, or consider an asymmetric-key variant, although that scheme only works for encrypted files and auditors must maintain long-term state. Extending earlier memory-checking schemes, some authors describe a theoretical model that may be seen as a generalization of PORs. Distributed protocols for dynamic file-integrity checking propose a distributed scheme in which blocks of a file F are dispersed across n servers using an (n, m)-erasure code (i.e., any m out of the n fragments are sufficient to recover the file); a toy version of this dispersal idea is sketched below. Servers spot-check the integrity of one another's fragments using message-authentication codes (MACs). Another scheme ensures file integrity through distribution over multiple servers, using error-correcting codes and block-level file-integrity checks. It uses keyed algebraic encoding and stream-cipher encryption to detect file corruptions. The keyed encoding function is equivalent to a Reed-Solomon code in which codewords are generated through keyed selection of symbol positions; its corruption-detection mechanism is in this view a message-authentication code (MAC) construction. We adopt several ideas of simultaneous MACing and error correction in our HAIL constructions; however, we define the construction rigorously and formally analyze its security properties. Proactive cryptography: our adversarial model is motivated by the literature on proactive cryptography, which has yielded protocols resilient to mobile adversaries for secret sharing as well as for signature schemes. Proactive recovery has been proposed for the BFT system by Castro and Liskov. Their system builds a replicated state machine that tolerates one-third of faulty replicas within a window of vulnerability, but any number of faults over the lifetime of the system. In previous proactive systems, key compromise is a silent event; consequently, these systems must redistribute shares automatically to provide proactive security. Corruption of a stored file, however, is not a silent event: it results in a change in server state that a verifier can detect. Hence, HAIL can rely on remediation that is reactive. It need not automatically refresh file shares at every interval, but only upon detecting a fault [7].
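Here is a toy version of the dispersal idea under a strong simplifying assumption: a single XOR parity fragment, i.e. an (m+1, m) erasure code tolerating one lost provider, rather than the general Reed-Solomon codes used in the schemes above. Provider-side spot-checking is reduced to a nonce-fresh hash challenge, and a real system would also store the original file length instead of relying on zero padding.

from functools import reduce
import hashlib, secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(data, m):
    # Pad, split into m equal fragments, and append one XOR parity
    # fragment; any m of the resulting m+1 shares recover the file.
    frag_len = -(-len(data) // m)
    data = data.ljust(m * frag_len, b"\0")
    frags = [data[i * frag_len:(i + 1) * frag_len] for i in range(m)]
    return frags + [reduce(xor_bytes, frags)]

def rebuild(shares, missing):
    # The XOR of all surviving shares reconstructs the missing one.
    alive = [s for i, s in enumerate(shares) if i != missing]
    return reduce(xor_bytes, alive)

def challenge(share, nonce):
    # Spot-check: a provider proves it still holds the share; the nonce
    # prevents replaying an old answer.
    return hashlib.sha256(nonce + share).digest()

shares = disperse(b"confidential archive record", m=3)
backup = shares[1]
shares[1] = None                       # provider 1 fails
shares[1] = rebuild(shares, missing=1)
print(shares[1] == backup)             # True: share recovered from the rest

nonce = secrets.token_bytes(16)
print(challenge(shares[0], nonce).hex()[:16])  # provider 0's spot-check response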
Emerging Issues
Cloud computing and Internet of Things (IoT) technology have become the buzzwords of today's digital world, generating digital data measured in thousands of terabytes daily. This data could be stored securely by building a private cloud within the enterprise, but private clouds have limitations owing to restricted scalability and elasticity as a result of their limited capacity. Such organizations may be reluctant to migrate to public clouds, which would overcome all of these limitations, because of the business-continuity risks of vendor lock-in. Hybrid clouds potentially combine the advantages of private and public (external) clouds. Vendor lock-in can be avoided by effectively exploiting the features of numerous external clouds and storing the data across multiple clouds selected for storage. Traditionally, most business enterprises have their own Information Technology (IT) department to support day-to-day business transactions and processes. For small and medium enterprises, owning the IT infrastructure is an expensive proposition, as they must invest heavily not only in IT hardware and software but also in IT staff, maintenance, and daily operational expenses. With the rise of the cloud, most organizations are now moving towards outsourcing their IT requirements to reap the cost-saving rewards. Over time, many organizations have chosen cloud services [8], enabling customers to pick from an array of service providers at a competitive price. In the current decade, research has focused more on work related to the security of data stored on the cloud. Although many frameworks proposed by researchers deal with these issues, limited work has addressed secure data storage on a Multi-Cloud platform that uses file storage. File storage is considered one of the enterprise solutions for storage needs, and it is essential for enterprises to ensure data security before storing data on public clouds. In today's IT environment, a variety of storage needs arise from pricing, performance, uptime, and availability requirements. In file storage, space is accessed using a file system. Many researchers have worked on security models for object storage, which is more suitable for the storage and retrieval of individual data items; above all, object storage is not suitable for partial retrieval and updating of a data object. Most of the frameworks proposed by various researchers for storage in a Multi-Cloud environment, such as MCDB, HAIL, RACS, Cloud-RAID, DepSky-A, DepSky-CA, McCloud, and Triones, have concentrated on confidentiality, integrity, or availability enhancement, on addressing the vendor lock-in issue, or on optimal storage. All of these approaches have addressed only object storage technology.
Multi-Cloud usage is among the most recent areas of interest for researchers in cloud technology. However, selecting cloud providers to assemble a Multi-Cloud presents challenges, as the growing popularity of cloud computing has produced many alternative Cloud Service Providers (CSPs) with a wide range of cost and Quality of Service (QoS) offerings. Cloud provisioning features such as multi-tenancy, virtualization, and resource sharing raise certain difficulties in billing estimation during the design and deployment phase of an application. This is one of the principal reasons why consumers avoid migrating to the cloud or Multi-Cloud. Nevertheless, there are several other reasons for needing a Multi-Cloud framework, particularly in the Storage as a Service model, such as vendor lock-in, availability, confidentiality, and security. Federated Cloud and Multi-Cloud are two kinds of delivery models involving multiple clouds; they differ with respect to the agreement between the cloud providers involved. In the case of Federated Clouds there is a mutual agreement among the cloud providers, whereas a Multi-Cloud needs no such agreement. Since no agreement must be concluded before forming a Multi-Cloud, more choices of cloud provider are available, which increases the complexity of maintaining individual Service Level Agreements (SLAs); master SLA generation would ease the complexity of tracking SLAs. Several Multi-Cloud solutions have been proposed, yet most of them have not examined the effect of an individual CSP's Service Level Agreement on the Multi-Cloud solution [8]. Moreover, few approaches have proposed SLA generation for multi-cloud solutions, and most of those have not discussed its effect on the enforcement of SLA attributes. In a Storage as a Service Multi-Cloud scenario, it is necessary to introduce SLAs in two layers: one to filter the SLAs of the individual cloud providers that form part of the Storage as a Service Multi-Cloud, and another to manage the master SLA of the customer. A Multi-Cloud implementation using an erasure-coding technique distributes user data across a diverse cloud infrastructure with differing QoS characteristics. From the user's point of view, it can be considered a Composite Infrastructure Service with a set of functional and non-functional requirements for storing user data. As a consequence of the service diversity among cloud providers, selecting cloud providers based on user requirements, expert ratings, and the past performance of service providers, with an appropriate service set for building the Multi-Cloud, is a challenging task. Besides, the QoS parameters mentioned in a user requirement can conflict, carry differing levels of importance among cloud providers, or be computed using different methods. Because of this service diversity, selecting cloud providers based on all three parameters mentioned above is a difficult task in forming Multi-Cloud environments (Makris [3]). However, all of these solutions are proposed for single-cloud service providers. Likewise, the confidentiality of the outsourced data must be enhanced. The task of hiding the information can be achieved by using an encryption algorithm, but as all encryption algorithms require the parties to exchange keys, this creates its own particular concern of the key falling into a third party's hands.
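As a sketch of this "encrypt before outsourcing" step, the following Python fragment uses AES-GCM from the cryptography package as one plausible primitive; key management is deliberately oversimplified here.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, held in a KMS/secret store
aead = AESGCM(key)

def protect(plaintext, file_id):
    nonce = os.urandom(12)
    # Binding the file identifier as associated data stops a malicious
    # provider from silently swapping ciphertexts between files.
    return nonce + aead.encrypt(nonce, plaintext, file_id)

def unprotect(blob, file_id):
    return aead.decrypt(blob[:12], blob[12:], file_id)

blob = protect(b"health record: patient 42", b"file-0001")
print(unprotect(blob, b"file-0001"))  # b'health record: patient 42'
# The ciphertext `blob` (not the plaintext) is what gets dispersed
# across the clouds; only key holders can read it.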
The issue related to the distribution of keys must therefore also be addressed. In view of the above, the challenge is to provide all the security features in a cost-effective way. Consequently, the entire cloud community depends on a set of algorithms that can process the data and secure it without consuming many resources. As frameworks use numerous algorithms to secure their data, the user's first task [11] is to analyse the set of algorithms in use; unsatisfactory performance of those algorithms discourages organizations from moving their data onto cloud storage. Several factors motivated this investigation [9]. Some of the most significant motivating factors are: A. Exponential growth of organizational data. Owing to the growth of cloud computing and IoT technology, the next decade will generate several quintillion bytes of data every day; a recent survey carried out by the Wall Street Journal cites a figure of 64 per cent in this regard. B. Trust deficit with a single cloud provider. As confidential data is moved to the cloud by enterprises or individuals, the outsourced data must be protected from unauthorized exposure.
C. Data in storage exposed to security threats for longer durations. As most archived data must be stored for long periods, the security of the data may be questioned when it is outsourced to a single cloud service provider. Likewise, the reliability of the data cannot be assured in the event of the failure of this single cloud service provider; hence there is a need to design a Multi-Cloud framework. D. Heterogeneous nature of cloud service providers. An agreement must be concluded between the cloud customer and the service provider. As cloud service providers [12] multiply, competition has increased, and consumers need methods to select the providers that suit their requirements; each provider may have a unique way of calculating uptime, storage cost, outbound data cost, or throughput (see the scoring sketch after this list).
E. Security threats to data in flight. Owing to the exponential growth in data and the variety of devices, there is a need to design a faster, lightweight encryption algorithm that uses the fewest resources during execution.
F. Improving privacy, public verifiability, and authorized access to data in the Multi-Cloud. While data is being accessed it may be exposed to security attacks, or it may be lost due to hardware failures; hence, the data must be verified and accessed only by authorized users.
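The provider heterogeneity noted in factor D can be handled with a simple normalized weighted-scoring step when assembling the Multi-Cloud; the providers, attribute values, and weights below are invented for illustration.

providers = {
    "cloud_a": {"cost_gb": 0.020, "uptime": 0.9995, "throughput": 90},
    "cloud_b": {"cost_gb": 0.026, "uptime": 0.9999, "throughput": 120},
    "cloud_c": {"cost_gb": 0.015, "uptime": 0.9990, "throughput": 60},
}
weights = {"cost_gb": 0.4, "uptime": 0.4, "throughput": 0.2}
benefit = {"cost_gb": False, "uptime": True, "throughput": True}  # higher is better?

def normalise(attr):
    vals = [p[attr] for p in providers.values()]
    lo, hi = min(vals), max(vals)
    span = (hi - lo) or 1.0
    # Benefit attributes score high when large; cost attributes when small.
    return {name: (p[attr] - lo) / span if benefit[attr] else (hi - p[attr]) / span
            for name, p in providers.items()}

norm = {attr: normalise(attr) for attr in weights}
scores = {name: sum(weights[a] * norm[a][name] for a in weights) for name in providers}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # ranked; take the top k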
Importance of MultiCloud
To understand the desire for multi-cloud, it is important first to discuss why organizations adopt cloud technology in the first place. The desire to move from legacy infrastructure to cloud, whether public or private, is usually driven by either cost or agility. For organizations that find maintaining legacy infrastructure prohibitively expensive, the idea of fulfilling technology needs through a provider designed for that purpose is attractive, since costs are believed to be lower, allowing the business to focus on the core competencies that drive real value. However, concentrating primarily on cost overlooks the main benefit of the cloud: enterprises are not merely outsourcing infrastructure, they are harnessing the agility of cloud operations. If enterprises can deliver software and services more quickly, they put themselves in a better position to manage the pace of change around them, and it is this macro trend of change that poses the genuine threat to the enterprise business. While most organizations start with modest cloud ambitions, for some, even those of moderate size, this effort will eventually lead to multicloud. Whether it is the pursuit of a dual-vendor strategy, the need to support differentiated capabilities, or the recognition that old and new [13] technologies must peacefully coexist (at least temporarily), most enterprises will end up supporting a mix of multiple private and public clouds. To realize the full promise of cloud, enterprises must consider their ultimate goal even as they adopt smaller cloud initiatives today. The section below details why present cloud deployments will ultimately evolve into multi-cloud; the reasons for the widespread use of multicloud are discussed in turn. A. Economics. Choosing among multiple cloud providers creates economic leverage. Further, cloud tourism is a growing phenomenon in which enterprises experimenting with cloud suffer from largely uncoordinated deployments, as well as significant costs incurred by persistent applications, leading them to explore the balance between private and public infrastructure. B. Capabilities. As enterprises develop different requirements, they are likely to discover differences in cloud offerings. For instance, Amazon AWS competes on the breadth and flexibility of its service offerings, Microsoft Azure on enterprise applications, Google Cloud on AI, and Oracle on ERP software. C. Availability. Enterprises diversify to guard against cloud service failures. While this diversification happens across clouds, it also inevitably happens within a single provider footprint, as enterprises use multiple cloud accounts, regions, and availability zones. D. Data privacy. The need to track where data originated, especially for global enterprises, will favor distinct local cloud providers so that workloads can be collocated with the data they govern. E. Proximity. Where performance matters, organizations may choose to run a workload in multiple clouds. For example, local clouds will offer shorter round-trip times for traffic, while the rise of multi-access edge computing (MEC) may lead to cloud instances running on-premises or in a distributed telco cloud.
F. Transition. The decision to adopt multi-cloud may be driven by a need to complete the transition from private to public cloud, in which case enterprises must bridge the two architectures for an indeterminate period of time.
The Challenges of MultiCloud
Enterprises moving to multi-cloud share a common set of challenges that inhibit adoption. This section concentrates on these obstacles, helping architects recognize potential problem areas and understand the constraints under which they must work to plot their way forward when planning a multi-cloud future. Multidomain connectivity: while it is true that multi-cloud is primarily a solution to an operational problem, there is still a basic need to seamlessly interconnect and secure islands of resources. Since multi-cloud is also about controlling the end-to-end infrastructure, this connectivity must, at a minimum, span the data center, the public cloud, and campus and branch gateways, with full campus and branch integration a longer-term objective. Spanning such a wide set of network domains, both on-premises and in the cloud, requires a range of physical and virtual form factors and capabilities. For instance, managing traffic over a high-capacity WAN typically requires custom silicon, while data center switches leverage commodity merchant silicon and branch boxes use x86 CPUs. A common fabric-management platform will also ease network virtualization with seamless connectivity across domains. The virtualized overlay network must span different types of servers as well as virtual and physical networking and security devices, so that workloads and users are not exposed to differences between domains. Regardless of the form factor, these devices must constantly communicate with northbound software layers such as orchestration, visibility, and security, placing architectural requirements on the underlying connectivity elements, particularly around programmability and telemetry. Multivendor orchestration: since a range of form factors and capabilities is required, multicloud solutions will be multivendor, posing a challenge for enterprises looking to unify infrastructure. Providing a common orchestration layer that sits on a heterogeneous underlay has historically proven difficult; multivendor element management systems (EMS), for example, have a woeful history. Put simply, today's predominant operational model, with its dependence on pinpoint control and manual device-by-device management, will not survive the transition to multicloud. In the multicloud, application releases and the locations where application workloads run can change from moment to moment, demanding a network that adapts just as quickly. As such, multicloud is a significant departure from today's operational model, where the interval of change is more often measured in months. Architects must be prepared to design around the new operational practices of the applications supporting the digital business. End-to-end visibility: if multi-cloud is predicated on treating a mix of infrastructure as a single entity, then visibility cannot be restricted to individual domains. For many existing solutions, centralizing visibility poses a significant technological hurdle. Further complicating the shift is the fact that multi-cloud will be highly automated. Since the essential premise of automation is to see something, do something, this puts a premium on unifying monitoring and visibility and on expanding the surface area of what can be observed and acted on in a consistent manner.
In multicloud, the objective should be to reduce the need for users to discover problems and to relieve operations teams of the need to perform repetitive, tedious tasks across domain boundaries to determine the root cause when an issue occurs. When environments are siloed, it is difficult to correlate events across domain boundaries, which significantly reduces the reach of any operational controls. If telemetry differs across distinct clouds, it must be normalized and interpreted before it can be used for multicloud purposes (a sketch of such normalization follows below). Real-time analysis with intent-based monitoring and alerts for flagging issues, along with data-driven capacity planning to understand where resources should be spun up or spun down, must be a central requirement of any multi-cloud design. Pervasive security: while networking has historically treated security as a perimeter problem, today it has clearly extended beyond the perimeter. To manage attacks that occur inside the network, security must look at traffic both within and between different network subsets. It is no longer enough to place firewall filters and ACLs between network segments; we must be able to isolate traffic flows within those segments, a practice commonly known as micro-segmentation. For multicloud to work in a robust manner, consistent protection schemes must be applied across the entire infrastructure, creating a more secure posture for traffic within the campus, across branches, among clouds, and inside the data center. Policy and control must exist at both the tenant and the application level, and they must be managed in a unified manner so that the operational burden of distributing and updating security policy does not render the solution operationally unviable. This requires a single point of management and a single point of end-to-end validation to ensure policy consistency. Complexity: perhaps the greatest challenge in operating a multi-cloud is complexity. In the current environment, complexity is already so prevalent and crippling that networks are extremely fragile and intolerant of change; without a good foundational design, networks operating over multiple clouds will add to this complexity, compounding the problem. Unfortunately, complexity is unavoidable as a function of the number of devices, users, applications, tools, and so on, a state of the given ecosystem that cannot be completely eliminated. Good design and automation, however, can offload much of that complexity from everyday operational tasks. To achieve this, all parts of an environment must be designed with simplicity in mind. One way to do this is to remove one-off pieces and unusual functionality and to automate workflows. Predictable topologies and simple policies, along with automation to ensure proactive management and ease troubleshooting, are essential. Such an approach results in robust, simplified operations across the entire multi-cloud environment.
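The normalization step mentioned above can be illustrated with a short Python sketch; the provider-side field names are invented stand-ins for real per-cloud telemetry formats.

def normalise_event(provider, raw):
    # Map each cloud's native telemetry record into one common schema.
    if provider == "cloud_a":
        return {"ts": raw["timestamp"], "resource": raw["instance_id"],
                "metric": raw["metric_name"], "value": raw["val"]}
    if provider == "cloud_b":
        return {"ts": raw["eventTime"], "resource": raw["resourceId"],
                "metric": raw["measure"], "value": raw["reading"]}
    raise ValueError(f"unknown provider: {provider}")

def breaches_intent(event, intents):
    # Intent-based check: alert only when a declared objective is violated.
    limit = intents.get(event["metric"])
    return limit is not None and event["value"] > limit

intents = {"p99_latency_ms": 250}
e = normalise_event("cloud_b", {"eventTime": 1700000000, "resourceId": "vm-7",
                                "measure": "p99_latency_ms", "reading": 310})
print(breaches_intent(e, intents))  # True -> raise an alert / trigger automation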
END TO END AND TOP TO BOTTOM
When discussing multi-cloud, architects must be careful not to narrow the design scope and risk fueling the challenges detailed above. Since both users and applications can be anywhere, a true multi-cloud design must incorporate a complete end-to-end perspective to accommodate full communication. Users, for example, can access applications from a campus, a remote branch, a home office, or a public space, while applications may be deployed in on-premises data centers, in multiple public clouds, or increasingly moved to edge compute as well as serverless environments where they may live for only a brief period. Since applications must be available to all users, regardless of location, the multicloud architecture must reach all places in the network. Architects should also consider more than just connectivity between users and applications. Simply allowing packets to flow is not sufficient; multicloud also requires networking capabilities to extend top to bottom in order to secure everything, monitor performance, and coordinate policy. Managing a multi-cloud environment in a centralized way requires end-to-end coordination to enforce policies and enable automation, as well as end-to-end visibility to understand where resources, users, and applications reside. End-to-end security is also required to protect users, applications, and data. Figure 2 depicts a full end-to-end and top-to-bottom multi-cloud design.
MULTICLOUD ARCHITECTURAL OVERVIEW
Taking the scope and expected challenges of multi-cloud design into consideration, the essential building blocks for multi-cloud can be broken into three layers (see Figure 3). Foundational resources: the basic compute, storage, network, and security elements that form the foundation for any workload infrastructure. Workload management: the workload constructs, such as virtual machines (VMs), containers, and serverless designs, as well as broader workload life-cycle management frameworks like OpenStack, Kubernetes, OpenShift, and the various public clouds. Service consumption: ultimately, users consume services, typically through applications. This layer decouples infrastructure and services by transparently abstracting the foundational and workload-management layers into a set of services for each application.
Foundational resources: from a networking point of view, the foundational resources are primarily responsible for transport. Tenants, users, and applications get logical overlay connectivity over a shared underlay infrastructure. This layer also includes important management constructs, such as SDN controllers (including overlay managers), to manage everything. Since the foundational network must span the data center, the cloud, and anywhere applications and users live, it incorporates a mix of physical and virtual devices deployed both on-premises and in the public cloud. The need for a common orchestration layer demands that the underlay devices have standards-based, programmable interfaces; a mix of closed, proprietary systems would simply lead to greater complexity. Workload management: orchestration provides the network abstraction needed to meet the dynamic needs of applications, often without requiring reconfiguration of the underlying network. As such, network orchestration must operate in concert with the automated processes for launching and tearing down new workload instances. There are three main workload-management techniques used by enterprises moving to multi-cloud today, plus one emerging technology. Most enterprises are likely using a mix of bare-metal servers (sometimes as part of a platform-as-a-service offering), VMs (typically managed by VMware or OpenStack products), and containers (typically a Docker runtime managed by Kubernetes and OpenShift). These technologies are available both on-premises and in public cloud offerings. Enterprises can also use the emerging serverless approach, where parts of applications can be executed on demand in various cloud servers rather than running as a monolith or in a few tiers. The challenge is that, even in modestly sized environments, the application landscape likely uses a mix of these technologies. As an increasing number of virtualization and cloud computing technologies and services are introduced, networks and network security must change the way they support workloads. Service consumption: the ultimate measure of success for any multicloud solution is whether the underlying infrastructure is transparent to the user. The goal of multicloud is to allow workloads to be deployed anywhere based on business and user needs, such as cost. The user should not be able to tell whether a workload is served out of a private or a public cloud. For this to be possible, the network must ultimately integrate into the application layer, both in terms of connectivity and security and in terms of how new applications and services are deployed and consumed. This top layer of the multicloud design decouples infrastructure and services by transparently abstracting the lower layers into the set of services required for each of the applications. By decoupling services from the underlying infrastructure, the design establishes secure multitenancy, isolating services from one another. The isolation mechanism only permits services to communicate according to the operator's declared intent, simultaneously enforcing both connectivity and security policies (a minimal sketch of such an intent check follows below).
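A minimal sketch of intent-driven isolation at the service-consumption layer; the service names are invented, and a real system would carry richer match criteria than service names alone.

# Services may talk only where the operator has declared an intent,
# regardless of which cloud either endpoint runs in.
ALLOWED = {("web", "api"), ("api", "db")}   # operator-declared intents

def permitted(src_service, dst_service):
    # Default-deny: anything not covered by an intent is dropped.
    return (src_service, dst_service) in ALLOWED

print(permitted("web", "api"))  # True
print(permitted("web", "db"))   # False: no declared intent, traffic denied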
Conclusion
Application development is driving today's digital business. The acceleration of the application development and update process, coupled with the flexibility of where workloads can run, introduces significant complexity, particularly operational complexity, into the network. Challenges arise when organizations begin migrating their application workloads from cloud to multicloud, as well as to more virtualized technologies. Not only must the network support these new operational scenarios, it must also securely support communications with the existing applications and data. With application environments and requirements evolving rapidly, enterprises require new approaches to network design, security, and operations. A multicloud capable of connecting and securing applications end to end across multiple clouds, as if they were one, lets organizations optimize resources as a single, cohesive infrastructure with consistent operations throughout. This means operators can holistically manage the network for workloads running on VMs, containers, or bare-metal servers, on-premises and in the public cloud, while managing the overlay along with the underlay. They can provision, execute workflows, and monitor everything end to end based on intent-driven direction relevant to their role. This shift to multicloud is more than technological; it requires enterprises to evolve their architectures, processes, and people. | 2021-06-03T01:38:31.011Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "3544b106594c624c717069587d5e74dd9d56e5cb",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1921/1/012072",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3544b106594c624c717069587d5e74dd9d56e5cb",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
30146540 | pes2o/s2orc | v3-fos-license | Factors Positively Influencing Health Are Associated with a Lower Risk of Development of Metabolic Syndrome in Korean Men: The 2007–2009 Korean National Health and Nutrition Examination Survey
Background The prevalence of metabolic syndrome (MetS) has risen rapidly worldwide, including in South Korea. Factors related to lifestyle are closely associated with the development of MetS. The aim of this study was to investigate the association between MetS and a number of factors positively influencing health, namely non-smoking, low-risk drinking, sufficient sleep, regular exercise, and the habit of reading food labels, among Korean men. Methods This cross-sectional study included 3,869 men from the 2007–2009 Korean National Health and Nutrition Examination Survey. Information on five factors positively influencing their health was obtained using a self-reported questionnaire. We categorized subjects into four groups, depending on the number of positive factors reported (group I, 0–1 factor; group II, 2 factors; group III, 3 factors; group IV, 4–5 factors). Results Men who reported a greater number of positive health factors had better laboratory and anthropometric values than men who reported fewer positive health factors. The prevalence of MetS was 29.1, 27.2, 20.7, and 14.6% in groups I to IV, respectively. Compared to group I, odds ratios (95% confidence intervals) for MetS were 0.96 (0.78–1.19) in group II, 0.67 (0.52–0.87) in group III, and 0.52 (0.35–0.76) in group IV, after adjusting for confounding factors. Odds ratios for abdominal obesity, glucose intolerance, and hypertriglyceridemia were statistically significant. Conclusion A greater number of positive lifestyle factors influencing health were associated with a lower risk of developing MetS, in a nationally representative sample of Korean men.
INTRODUCTION
Metabolic syndrome (MetS) refers to the combination of obesity-related disorders of abdominal obesity, elevated blood pressure, glucose intolerance, and atherogenic dyslipidemia. It is well known that individuals with MetS are at a higher risk of developing type 2 diabetes or cardiovascular disease. The prevalence of MetS in adults ranges from 20% to 30%, depending on age, ethnicity, and how MetS is defined, and is rising rapidly worldwide, including in South Korea. 1) This growing prevalence of MetS poses a significant threat to public health.
Factors related to lifestyle, such as tobacco smoking, 2) heavy alcohol consumption, 3) lack of physical activity, 4) and insufficient sleep 5) are closely associated with the development and progression of MetS, although the underlying pathophysiology has not yet been fully elucidated. Moreover, several previous studies have reported that the practice of reading food labels is associated with improved dietary intake. 6,7) In 2010, the Korean Ministry of Health and Welfare reported that men aged 70 years or less had a seven-fold higher risk of developing MetS than women of the same age range. Furthermore, men are less likely than women to engage in activities associated with improving their health in a positive manner, such as non-smoking, low-risk alcohol drinking, and regular exercise. 8) Our aim in this study was therefore to explore the association between MetS and a number of factors that influence health positively, namely non-smoking, low-risk alcohol drinking, sufficient sleep, regular exercise, and the habit of reading food labels, in a weighted, representative sample of Korean men. In addition, we investigated the relationship of each component of MetS to these positive health factors.
Study population
This cross-sectional study was based on data obtained from the 2007-2009 Korean National Health and Nutrition Examination Survey. Households were selected as sampling units through a stratified, multistage, probability sampling design that was based on information such as geographic region, sex, and age group, obtained using household registries. Sampling weights, which indicate the probability of being sampled, were allocated to every subject. Thus, results based on sampling weights are representative of the entire Korean population.
During the survey period, all subjects were notified that they were randomly selected, and agreed voluntarily to participate in this nationally representative survey. Subjects were asked to complete four parts (a health interview survey, a health behavior survey, a health examination survey, and a nutrition survey) of a questionnaire. Subjects had the right to decline participation in this survey based on the National Health Enhancement Act, and submitted informed consent documents prior to participating in the study. In addition, subjects also approved the use of their blood samples for other academic purposes. A variety of information regarding the subjects was collected during the survey, including their medical history, results of physical examinations, health-related behavior, and anthropometric and biochemical data. Trained medical staff conducted physical examinations following standard methods.
Subjects answered questions regarding their lifestyle, including questions about cigarette smoking, alcohol consumption, sleep duration, physical activities, and the use of food labels. Subjects were divided into two categories: non-smokers and smokers. Drinking patterns of the subjects were assessed based on the Alcohol Use Disorders Identification Test (AUDIT) questionnaire, which includes 10 items for screening subjects and is commonly used to identify heavy drinkers. 8) Sleep duration was assessed by the following self-reported question: "On average, how many hours and minutes do you sleep per night?" To evaluate levels of physical activity, a short form taken from the International Physical Activity Questionnaire was adopted and translated into Korean. The survey contains questions as to whether the subjects read food labels when buying food products.
Based on the answers, we classified subjects as 'food label users' or not.
We excluded female subjects, individuals younger than 20 years of age, those without complete anthropometric data and lifestyle behavior data, and those who did not fast overnight prior to blood sampling.
After these exclusions, 3,869 men were included in the final analysis.
The institutional review board of the Korea Centers for Disease Control and Prevention approved this study.
Anthropometric Measurements and Laboratory Data
Subjects' body weight and height were measured in light indoor clothing without shoes, to the nearest 0.1 kg and 0.1 cm, by trained medical staff. Body mass index was calculated as the ratio of weight (kg) to height squared (m2). The medical staff measured waist circumference at the narrowest region between the iliac crest and the 12th rib using a non-elastic tapeline. Blood pressure was measured on the subject's right arm using a mercury sphygmomanometer (Baumanometer; Baum, Copiague, NY, USA). Systolic and diastolic blood pressures were measured twice at 5-minute intervals and the average of the two measurements was recorded. After overnight fasting, blood samples were collected from the subjects through an antecubital vein puncture. Levels of glucose, triglycerides, high-density lipoprotein (HDL) cholesterol, and total cholesterol were measured using an ADVIA 1650 auto-analyzer (Siemens Medical Solutions Diagnostics, Erlangen, Germany) and a Hitachi Automatic Analyzer 7600 (Hitachi Co., Tokyo, Japan). Dietary data were obtained through the nutrition survey, and daily energy intake was calculated using the Can-Pro ver. 2.0 software (Korean Nutrition Society, Busan, Korea), which was developed by the Korean Nutrition Society.
Definition of Metabolic Syndrome and Positive Health Factors
According to the clinical practice guideline for the prevention and treatment of MetS in Korea, 9) MetS is diagnosed when a subject meets three or more of the criteria defined therein. In this study, we defined the following five factors influencing health positively: non-smokers, defined as individuals who had smoked less than 100 cigarettes in their lifetime; low-risk drinkers, defined as individuals with an AUDIT score between 0-14; adequate sleepers, defined as individuals who slept for 6-8 hours a day on average; regular exercisers, defined as individuals who engaged in moderate to vigorous-intensity physical activity three or more days per week; and food label users, defined as individuals who usually read food labels when buying food products.
Finally, we categorized the subjects into four groups according to the number of positive practices that they engaged in on a daily basis: group I, 0-1 positive practice; group II, 2 positive practices; group III, 3 positive practices; group IV, 4-5 positive practices.
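A minimal Python sketch of this grouping rule, with an invented subject record; the thresholds follow the definitions in the preceding paragraphs.

def positive_factors(s):
    # Count how many of the five positive health factors a subject meets.
    return sum([
        s["lifetime_cigarettes"] < 100,     # non-smoker
        0 <= s["audit_score"] <= 14,        # low-risk drinker
        6 <= s["sleep_hours"] <= 8,         # adequate sleeper
        s["exercise_days_per_week"] >= 3,   # regular exerciser
        s["reads_food_labels"],             # food label user
    ])

def group(s):
    n = positive_factors(s)
    return {0: "I", 1: "I", 2: "II", 3: "III"}.get(n, "IV")  # 4-5 -> IV

subject = {"lifetime_cigarettes": 20, "audit_score": 9, "sleep_hours": 7,
           "exercise_days_per_week": 1, "reads_food_labels": True}
print(positive_factors(subject), group(subject))  # 4 IV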
Statistical analysis
RESULTS
General characteristics of the study population (unweighted number: 3,869 men) and sub-group characteristics, based on the number of positive health factors reported, were examined.
The Effect of Cigarette Smoking on Metabolic Syndrome
Cigarette smoking is the most important preventable cause of death and illness. Prohibiting cigarette smoking or motivating its cessation promotes public health. A number of studies have revealed that cigarette smoking is related to a higher risk of developing MetS in a positive, dose-dependent manner. 10) While cigarette smoking, including second-hand smoking, impairs insulin sensitivity and elevates triglyceride levels and blood pressure, 11) cessation of smoking improves insulin sensitivity and lipoprotein profiles, despite a slight gain in weight. 12) The effects of cigarette smoking on glucose and lipid metabolism may be attributable to the stimulation of cortisol circulation and the release of growth hormone, which have insulin-antagonistic effects. 13) Even though cessation of cigarette smoking might cause weight gain, 14) the beneficial effects of cessation outweigh its adverse effects. (Table note: values are presented as odds ratios (95% confidence intervals); Model 1 was unadjusted, Model 2 was adjusted for age, and Model 3 was adjusted for age, education level, daily energy intake, and household income.)
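A sketch of how the adjusted odds ratios (Model 3 in the table note above) could be estimated; the data frame here is synthetic, and the sampling weights that the actual analysis applied are omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)  # synthetic stand-in for the survey data
n = 500
df = pd.DataFrame({
    "mets": rng.integers(0, 2, n),
    "group": rng.choice(["I", "II", "III", "IV"], n),
    "age": rng.normal(59.5, 10, n),
    "education": rng.integers(1, 5, n),
    "energy_intake": rng.normal(2200, 400, n),
    "income": rng.integers(1, 5, n),
})

model = smf.logit(
    "mets ~ C(group, Treatment(reference='I')) + age + education"
    " + energy_intake + income", data=df).fit()
print(np.exp(model.params))      # odds ratios, e.g. group IV vs. group I
print(np.exp(model.conf_int()))  # 95% confidence intervals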
The Effect of Alcohol Consumption on Metabolic Syndrome
Alcohol consumption is very popular in Korea; 81.6% of men and 52.4% of women in Korea have lifestyles that involve alcohol consumption. The amount of alcohol consumed daily per capita is 30.1 g for men and 6.6 g for women. Numerous epidemiological studies have reported a J- or U-shaped association between alcohol drinking and cardiovascular morbidity/mortality. Heavy alcohol intake can cause metabolic dysfunction and is associated with a higher risk of cardiovascular events, while light to moderate alcohol intake can lower cardiovascular events and elevate HDL cholesterol levels. 15,16) However, conflicting results have been reported regarding the relationship between alcohol consumption and MetS. 17) In addition to the total amount of alcohol intake, unhealthy drinking patterns are considered an important risk factor for MetS in men. 18) We categorized subjects as high-risk drinkers or others according to the AUDIT score. Because the AUDIT questionnaire consists of 10 screening questions including frequency of drinking, quantity of drinks consumed, and adverse events after drinking, among others, it not only assesses the amount of alcohol consumed, but also identifies heavy drinkers. 19) Drinking alcohol stimulates the appetite through reduction of glycemic levels, and causes obesity-related disorders including dyslipidemia and diabetes mellitus. 20) Chrysohoou et al. 21) reported that chronic alcohol consumption is associated with glycemic levels, triglyceride levels, and arterial blood pressure in a J-shaped manner. Our results are therefore consistent with those reported by previous studies. High-risk drinking patterns may contribute to a higher prevalence of MetS.
The Effect of Regular Exercise on Metabolic Syndrome
It is well-known that increased physical activity maintains and promotes health and reduces mortality, regardless of a reduction in body weight. 22) Several studies have confirmed that being physically active is associated with a lower risk of developing MetS, in a dose-responsive manner. 23) Interventions to increase physical activity in individuals with MetS reduced the prevalence of MetS and improved its diagnostic components. 24,25) A meta-analysis of seven randomized-controlled trials by Pattyn et al. 25) demonstrated that waist circumference and blood pressure were significantly reduced in healthy adults with MetS after endurance exercise, while HDL cholesterol levels were increased. Aerobic exercise training is a useful tool to manage MetS, as well as to decrease abdominal obesity, glucose intolerance, blood pressure, and dyslipidemia. 24)
The Effect of Sleep Duration on Metabolic Syndrome
Several epidemiological studies have suggested that the duration of sleep is associated with the risk of MetS.
The Effect of Reading Food Labels on Metabolic Syndrome
Disclosure of nutritional information on food products is expected to decrease calorie consumption and fat intake, and to have a positive effect on public health; 29) however, the effectiveness of food labeling is still unclear. 30,34-36) In this study, we suggest that lifestyle factors should be considered together, rather than in isolation, when researching MetS. | 2018-04-03T01:34:43.486Z | 2017-05-01T00:00:00.000 | {
"year": 2017,
"sha1": "c34fd80fd4c654e0f190a800cb17eeb2301cfae5",
"oa_license": "CCBYNC",
"oa_url": "http://www.kjfm.or.kr/upload/pdf/kjfm-38-148.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c34fd80fd4c654e0f190a800cb17eeb2301cfae5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119280894 | pes2o/s2orc | v3-fos-license | Refined energy inequality with application to well-posedness for the fourth order nonlinear Schrodinger type equation on torus
We consider the time local and global well-posedness for the fourth order nonlinear Schrodinger type equation (4NLS) on the torus. The nonlinear term of (4NLS) contains derivatives of the unknown function, and this prevents us from applying the classical energy method. To overcome this difficulty, we introduce a modified energy and derive an a priori estimate for the solution to (4NLS).
In this paper we consider the time local well-posedness for (1.1) in the Sobolev spaces $H^m(\mathbb{T})$. Our notion of well-posedness includes the existence and uniqueness of the solution and the continuity of the data-to-solution map. We also consider the persistence property of the solution, that is, the solution describes a continuous curve in $H^m(\mathbb{T})$ whenever $\varphi \in H^m(\mathbb{T})$. Our motivation to consider the time local well-posedness for (1.1) is that we are interested in the stability of the standing wave solution $\psi(t,x) = e^{i\omega t}\varphi_\omega(x)$ to (1.1). When (1.1) is completely integrable (see the latter half of this section for the details), (1.1) has a sech-type standing wave solution. The orbital stability in $H^m(\mathbb{R})$ of the sech-type standing wave solution is proved in [20]. On the other hand, we easily see that (1.1) has an exact periodic standing wave solution of the form $\psi(t,x) = \kappa e^{i\tau x + i\omega t}$ for some real constants $\kappa$, $\tau$ and $\omega$. It is interesting that the sech-type standing wave and the periodic standing wave correspond to the tornado-like curve and the helicoid curve in the motion of a vortex filament, see Kida [16].
As the first step towards the orbital stability of the sech-type and the periodic standing waves, we need to prove the global well-posedness for (1.1) in the Sobolev spaces on the real line $\mathbb{R}$ and on the torus $\mathbb{T}$, respectively. Concerning the local well-posedness of (1.1) on the real line $\mathbb{R}$, Segata [23,24,25] and Huo-Jia [11,12] proved that the initial value problem for (1.1) is locally well-posed in the Sobolev space $H^s(\mathbb{R})$ with $s > 1/2$ by using the Fourier restriction method introduced by Bourgain [3] and Kenig-Ponce-Vega [14,15]. As far as we know, there is no result on the well-posedness of (1.1) under the periodic boundary condition.
In this paper we focus on the well-posedness of (1.1) on the torus. There is a large literature on the well-posedness for dispersive equations on the torus; see for instance [6,13,19,21] for linear dispersive equations and [1,3,4,8,10,22,26,27] for nonlinear dispersive equations. We summarize the well-posedness results for the derivative nonlinear Schrödinger equation with the periodic boundary condition. Tsutsumi-Fukuda [26,27] proved the local and global well-posedness for the Schrödinger equation with certain nonlinearities on the torus by using the classical energy method. Grünrock-Herr [8] and Herr [10] obtained sharp well-posedness results for some derivative nonlinear Schrödinger equations on the torus by using the Fourier restriction method. The well-posedness of the Schrödinger equation for more general derivative nonlinearities on the n-dimensional torus was given by Chihara [4]. We notice that the classical energy method does not work in his setting. In [4] he overcame this problem by using pseudo-differential operators with non-smooth coefficients on the torus.
As we shall see below, dispersive equations on the torus do not enjoy properties as fine as those in the real line case. Therefore the proof of the well-posedness on the torus becomes considerably harder than in the real line case. To state our results more precisely, we introduce several notations. Given a function $\psi$ on $\mathbb{T}$, we define the Fourier coefficient of $\psi$ by $\hat{\psi}(n) = \frac{1}{2\pi}\int_{\mathbb{T}} e^{-inx}\psi(x)\,dx$. Let $m$ be a non-negative integer. $H^m(\mathbb{T})$ denotes the set of all tempered distributions on $\mathbb{T}$ satisfying $\|\psi\|_{H^m} = \big(\sum_{n\in\mathbb{Z}} \langle n\rangle^{2m}|\hat{\psi}(n)|^2\big)^{1/2} < \infty$, where $\langle n\rangle = \sqrt{1+n^2}$. The main result in this paper is Theorem 1.1, the local well-posedness of (1.1) in $H^m(\mathbb{T})$. The difficulty in the proof of the time local well-posedness of (1.1) arises from the so-called "loss of derivatives". More precisely, the standard energy method gives only an estimate of the form (1.3). Since the first and second terms on the right-hand side of (1.3) contain the $(m+1)$-st derivatives of $\psi$, we cannot control those factors in terms of the $H^m$ norm of $\psi$. Therefore this estimate does not give an a priori estimate for the solution.
For the real line case, the unitary group {e^{it(∂²_x + ν∂⁴_x)}}_{t∈R} generated by the linear operator i∂²_x + iν∂⁴_x gains extra smoothness in the space variable; see Kenig-Ponce-Vega [13]. Thanks to this smoothing property of {e^{it(∂²_x + ν∂⁴_x)}}_{t∈R}, the authors of [23,11,24,12] could overcome the loss of derivatives and guarantee the well-posedness of (1.1) on R. However, in the periodic case the corresponding unitary group does not have such fine properties (see e.g. [6]), and it is unlikely that the contraction mapping principle alone guarantees the well-posedness of (1.1) on T. For this reason we abandon the use of this property of the unitary group {e^{it(∂²_x + ν∂⁴_x)}}_{t∈R} and resolve the issue by a different approach. Let us return to the estimate (1.3). If we contrive to eliminate the worst terms, we can obtain an a priori estimate for the solution. In this paper, taking a hint from Kwon [17], which concerns the well-posedness for the fifth-order KdV equation on R, we introduce a "modified" energy E_m(ψ): the H^m energy augmented by lower-order correction terms, where C_m is a sufficiently large constant depending only on m so that E_m(ψ) is positive.
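Schematically, the modified energy has the following shape (a sketch only: the specific correction terms and the power on the last term are our illustrative assumptions, in the spirit of Kwon [17]):

E_m(\psi) \;=\; \|\partial_x^m \psi\|_{L^2}^2
\;+\; \sum_j c_j \int_{\mathbb{T}} G_j(\psi,\bar{\psi},\partial_x\psi,\partial_x\bar{\psi})\,|\partial_x^{m-1}\psi|^2\,dx
\;+\; C_m\,\|\psi\|_{L^2}^2,

where the coefficients c_j are chosen so that the time derivative of the correction terms cancels the derivative-loss integrals in (1.3), and C_m is taken large enough, via the Gagliardo-Nirenberg inequality, that E_m(ψ) stays positive.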
Thanks to the correction terms we can eliminate the worst factors in (1.3) and control the H^m norm of the solution ψ to (1.1) in terms of the H^m norm of the initial data φ. This is the crucial point in the proof of Theorem 1.1. It is known that (1.1) is completely integrable if and only if λ_1 = −1/2, λ_2 = −3ν/8, λ_3 = −3ν/2, λ_4 = −ν, λ_5 = −ν/2 and λ_6 = −2ν. In this case (1.1) has infinitely many conserved quantities; see Langer and Perline [18]. In general, the conserved quantities for (1.1) are expressed as I_m(ψ), whose leading term is ‖∂_x^m ψ‖²_{L²}, where the Q_m are polynomials in (ψ, ψ̄, . . . , ∂_x^{m−1}ψ, ∂_x^{m−1}ψ̄) and the remainder is controlled with constants α_m > 0 and 0 < β_m < 2. Therefore, combining Theorem 1.1, the conservation laws I_m(ψ)(t) = I_m(ψ)(0) and Young's inequality, we obtain the global existence theorem for (1.1) in H^m(T); a schematic version of this absorption argument is sketched after this paragraph. Finally, we point out that by combining our proof with estimates for fractional derivatives we may well be able to extend Theorem 1.1 to the case where m is not an integer. In this paper we do not touch on this issue.
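Under our reading of the bound with exponent 0 < β_m < 2 stated above, the global-in-time bound runs roughly as follows:

\|\partial_x^m\psi(t)\|_{L^2}^2
\;\le\; I_m(\psi(t)) + \alpha_m\,\|\partial_x^m\psi(t)\|_{L^2}^{\beta_m}
\;=\; I_m(\psi(0)) + \alpha_m\,\|\partial_x^m\psi(t)\|_{L^2}^{\beta_m}
\;\le\; I_m(\psi(0)) + \tfrac{1}{2}\,\|\partial_x^m\psi(t)\|_{L^2}^{2} + C(\alpha_m,\beta_m),

where the last step is Young's inequality (this is precisely where β_m < 2 is used, with α_m depending on lower-order norms that are handled by induction on m); absorbing the middle term gives a time-uniform H^m bound, and iterating Theorem 1.1 yields global existence.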
The plan of this paper is as follows. Section 2 is devoted to the parabolic regularization of (1.1). In Section 3, we introduce the modified energy and give an a priori estimate for the solution to (1.1); we then prove the existence of a solution to (1.1). In Section 4 we give the proofs of the uniqueness and the persistence property of the solution to (1.1), and of the continuous dependence of the solution to (1.1) on the initial data.
Parabolic Regularization
In this section, we consider the parabolic regularization of (1.1) in H^m(T). We first state the Gagliardo-Nirenberg inequality for periodic functions.
Proof. We shall prove (2.1) by using the Banach fixed point theorem. Let {W_ε(t)}_{t≥0} be the contraction semigroup generated by the ε-regularized linear operator. Then the initial value problem (2.1) can be rewritten as an integral equation, and we shall show that the associated map Φ is a contraction on X^r_T for a suitable choice of T.
We easily see the basic bounds for Φ, and by Plancherel's identity we can check that Φ(ψ_ε) ∈ C([0, T); H^m(T)). Therefore, choosing T_ε > 0 sufficiently small so that C(T_ε + ε^{−1/2} T_ε^{1/2})(1 + r²)r² < 1, Φ maps X^r_{T_ε} into itself. In a similar way, for ψ¹_ε, ψ²_ε ∈ X^r_T, we obtain the corresponding difference estimate. Consequently, Φ is a contraction on X^r_T. The Banach fixed point theorem then implies the unique existence of a solution to (2.1) in X^r_T, which completes the proof of Lemma 2.2.
Modified Energy
In this section, using the modified energy, we give an a priori estimate for the solution to (2.1) obtained in Lemma 2.2.
Let m ≥ 1 be an integer. We introduce the modified energy E_m(ψ), where C_m is a sufficiently large constant depending only on m so that E_m(ψ) is positive. This is possible for the following reason: the Gagliardo-Nirenberg inequality (Lemma 2.1) bounds the correction terms in L²_x with some positive constant D_m depending only on ν, λ_3, λ_4, λ_5, λ_6 and m; hence E_m(ψ) is positive once C_m is large enough. Lemma 3.1. Let ψ_ε ∈ C([0, T_ε); H^m(T)) be a solution to (2.1). Then there exist positive constants C and T = T(‖φ‖_{H^m}) such that the a priori estimate below holds. Proof. We first evaluate [E_m(ψ)](t). Applying the m-th derivative to both sides of (2.1), taking the inner product of the resulting equation with ∂_x^m ψ, and adding the complex conjugate of the product, we obtain the energy identity. Using the Leibniz rule, we obtain an expansion in which P_1 is a linear combination of cubic terms ∂_x^{j_1}ψ_ε ∂_x^{j_2}ψ_ε ∂_x^{j_3}ψ_ε (or their conjugates) with j_1 + j_2 + j_3 = m + 2 and max(j_1, j_2, j_3) ≤ m, and quintic terms ∂_x^{j_1}ψ_ε ⋯ ∂_x^{j_5}ψ_ε with j_1 + j_2 + j_3 + j_4 + j_5 = m. Hence the Hölder and Gagliardo-Nirenberg (Lemma 2.1) inequalities imply the required bound on these terms; in the last inequality we used interpolation estimates from Lemma 2.1.
An integration by parts yields the identity for the leading contribution I_1; moreover, I_2, I_4 and I_5 can be expressed in terms of I_1 by further integrations by parts. Another integration by parts produces remainder terms R_5 and R_6 obeying the stated bounds, and, integrating by parts once more and substituting the resulting identities into (3.7), we obtain (3.8). By an argument similar to (3.8), we obtain (3.9), where R_8 satisfies an analogous bound. Finally, we obtain (3.10). Collecting (3.7), (3.8), (3.9) and (3.10), we arrive at (3.11). Since the sum of the first and second terms on the right hand side of (3.11) is bounded by the dissipative term ε‖∂_x^{m+2}ψ_ε‖²_{L²} up to a harmless constant, and since the constant C is independent of ε ∈ (0, 1], the above inequality yields a differential inequality for E_m(ψ_ε). Combining this inequality with ‖φ_ε‖_{H^m} ≤ ‖φ‖_{H^m} for any ε ∈ (0, 1] and with (3.1), we obtain the a priori bound in H^m_x; then for any 0 < t < min{T_ε, T} the desired estimate holds. If T_ε < T, we can apply Lemma 2.2 to extend the solution in the same class to the interval [0, T). Therefore we obtain the desired result. Proof. Let φ ∈ H^m(T) and let {φ_ε}_ε ⊂ H^∞(T) be a Bona-Smith approximation of φ. Then by Lemma 2.2 there exists a unique solution ψ_ε ∈ C([0, T_ε); H^m(T)) to (2.1). Lemma 3.1 yields that there exists T = T(‖φ‖_{H^m_x}) > 0, independent of ε, such that {ψ_ε}_ε is uniformly bounded in L^∞(0, T; H^m(T)) with respect to ε ∈ (0, 1]. By a standard limiting argument, a subsequence of ψ_ε converges weak-* in L^∞(0, T; H^m(T)) to a solution ψ of (1.1) with ψ ∈ L^∞(0, T; H^m(T)). We omit the details.
Proof of Theorem 1.1
In the preceding sections, we proved the existence of the solution to (1.1). In this section, we complete the proof of Theorem 1.1 by showing the following three assertions: (i) uniqueness of the solution; (ii) persistence property of the solution; (iii) continuous dependence of the solution upon the initial data. 4.1. Uniqueness. Let ψ_1 and ψ_2 be two solutions to (1.1) with the same initial data satisfying sup_{t∈[0,T)} ‖ψ_j(t)‖_{H^m_x} < ∞, j = 1, 2. We shall show that ψ_1 ≡ ψ_2 for t ∈ [0, T). To prove this, it suffices to show that ψ = ψ_2 − ψ_1 satisfies ‖ψ(t)‖_{H¹_x} ≡ 0, because this identity together with ψ(0) ≡ 0 implies ψ ≡ 0. The reason we prove ‖ψ(t)‖_{H¹_x} ≡ 0 instead of deriving ‖ψ(t)‖_{L²_x} ≡ 0 is that the corresponding modified energy for L² involves anti-derivatives of ψ. | 2012-02-15T06:09:47.000Z | 2012-02-15T00:00:00.000 | {
"year": 2012,
"sha1": "db2be99322d6f9399616c94280cf1dd3fb91b73d",
"oa_license": "elsevier-specific: oa user license",
"oa_url": "https://doi.org/10.1016/j.jde.2012.02.016",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "db2be99322d6f9399616c94280cf1dd3fb91b73d",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
269150297 | pes2o/s2orc | v3-fos-license | The Prevalence of Shoulder Pain and Awareness of Frozen Shoulder Among the General Population in Taif City, Saudi Arabia
Background The global prevalence of shoulder pain varies widely across countries. Additionally, shoulder pain and frozen shoulder can significantly affect patients' quality of life due to high levels of pain and disability. Objective This study aimed to investigate the prevalence of shoulder pain and its risk factors. It also aimed to assess the level of knowledge regarding frozen shoulder and its related factors in Taif City, Saudi Arabia. Methods A cross-sectional observational study was conducted in Taif City in December 2023 using a validated questionnaire comprising socio-demographic characteristics, the prevalence of shoulder pain, and awareness of frozen shoulder. Results A total of 378 participants enrolled in the study; 54.8% were male, 62.7% were graduates, and jobs were equally distributed between office work (24.9%) and field work (24.9%). Most participants were smokers (75.9%) and did not engage in body-building activities (79.6%). Around 26.5% of them had diabetes. The prevalence of shoulder pain was 32.8%. Age 35 to 44 years (p<0.001), a higher salary of 6000 to 10000 SAR (p<0.001), retirement (p<0.001), engaging in body-building activities (p=0.035), having diabetes (p<0.001), and having other comorbidities (p<0.001) were significantly associated with shoulder pain. Increased knowledge about frozen shoulder was correlated with age 25-34 years (p=0.026), smoking (p=0.002), engaging in bodybuilding (p<0.001), having diabetes (p=0.010), and having other medical conditions (p=0.010). Conclusion The study has shown that shoulder pain is prevalent among Taif City's population. Nevertheless, a low level of knowledge was observed. Therefore, enhancing national educational programs is needed to increase public awareness of frozen shoulder.
Introduction
Shoulder pain is a common problem worldwide, prevalent among approximately 16% of the general population [1]. Frozen shoulder, also known as adhesive capsulitis, is a condition that causes pain and stiffness in the shoulder joint. It is characterized by severe pain and inability to move the shoulder. Frozen shoulder develops in three stages: freezing, frozen, and thawing. The condition usually improves within 12 to 18 months, but severe or persistent symptoms may require treatment [2].
The global prevalence and incidence of shoulder pain vary widely across countries. It is reported to be the third most common musculoskeletal symptom presenting for health care, and it makes up an estimated 4% of annual consultations by adults in UK primary care [3]. The prevalence of frozen shoulder is estimated to be between 2% and 5% of the general population globally [4]. Most patients are between 40 and 60 years old at diagnosis. However, some evidence suggests that frozen shoulder can occur later in life. Risk factors for developing frozen shoulder include diabetes, thyroid problems, hormone changes, shoulder injury, shoulder surgery, open heart surgery, and cervical disk disease of the neck [5].
In Saudi Arabia, the condition has been prominent in several areas. A study conducted in the Western region of Saudi Arabia found that the prevalence of frozen shoulder among diabetic patients was 31.6% [6]. Another retrospective study, conducted in the Qassim region, found that the prevalence rate of frozen shoulder was 13.2% [7].
Shoulder pain and frozen shoulder can have a significant impact on patients' quality of life, as affected patients may experience high rates of pain and disability compared to the general population [8]. Complications that may arise include residual shoulder pain and stiffness, humeral fracture, rupture of the biceps and subscapularis tendons, and labral tears [9]. The long-term effects of frozen shoulder vary depending on the individual and the severity of the condition, and may affect all facets of life, including work, sleep, personal hygiene, interpersonal relationships, and independence [10].
There is not enough recent or updated data about the condition's prevalence among the general population in Saudi Arabia and Taif City. Therefore, this study aimed to assess the prevalence of shoulder pain and its associated factors. Furthermore, it aimed to assess the level of knowledge about frozen shoulder and its correlates among the general population in Taif City, Saudi Arabia.
Study design and duration
A cross-sectional observational study was performed among the general population in Taif City, Saudi Arabia. The data was collected in December 2023.
Study population
The study included members of the general population aged 16 years or older living in Taif City who were willing to participate in the study.
Sampling technique
A simple random sampling technique was used in this study to select the participants.
Sample size calculation
The Raosoft sample size calculator (2004, Raosoft, Inc., Seattle) was used online to calculate the sample size.
Based on a confidence level of 95%, a margin of error of 5%, and a maximum uncertainty of 50% for positive responses, a minimum of 377 participants should be included in this study.
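For reference, a minimal Python sketch of the sample-size arithmetic used by calculators such as Raosoft; the function name and the example population figure are our own assumptions (the authors do not report the population size they entered), and without a finite-population correction the same inputs give roughly 385:

import math

def sample_size(z=1.96, p=0.5, e=0.05, population=None):
    # z: z-score for the confidence level (1.96 for 95%)
    # p: expected response distribution (0.5 = maximum uncertainty)
    # e: margin of error (0.05 = 5%)
    # population: finite population size; None treats it as infinite
    x = z * z * p * (1 - p)
    if population is None:
        n = x / (e * e)
    else:
        # finite population correction
        n = (population * x) / ((population - 1) * e * e + x)
    return math.ceil(n)

print(sample_size())                  # 385 for an effectively infinite population
print(sample_size(population=20000))  # 377, matching the reported minimum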
Data collection method
Data was collected through a structured, self-administered electronic questionnaire distributed to adults in Taif City as a link to a Google Form via social media platforms (e.g., Twitter, Instagram, LinkedIn, WhatsApp).
Data collection tool
The data collection tool was a structured questionnaire designed based on a previous study entitled "The Prevalence of Shoulder Pain and Awareness of Frozen Shoulder Among the General Population in Assir Region" [6]. The questionnaire was validated in the Assir Region in Saudi Arabia, and it included three sections: the first section comprised participants' characteristics, the second section focused on the habits and diseases of the participants, and the third section addressed knowledge about frozen shoulder. The demographics included age, gender, educational status, working style, and income. Habits and diseases included questions about smoking status, diabetes and comorbidities, and exercise activity. Knowledge about frozen shoulder included questions related to age groups, disease signs and symptoms, gender, risk factors, anxiety, and depression.
Pilot study
The questionnaire was administered to 10 respondents to ensure that it was easily understood and to estimate the time needed to fill it out.Data collected in the pilot study were not included in the statistical analysis.
Data entry and analysis
The data was extracted and revised in an Excel sheet. Statistical analysis was conducted using SPSS (IBM Corp. Released 2019. IBM SPSS Statistics for Windows, Version 26.0. Armonk, NY: IBM Corp). Categorical variables were expressed as numbers and percentages, while continuous variables were checked for normality. The Chi-square test was used to compare different variables with shoulder pain and awareness of frozen shoulder. Statistical significance was established by considering p-values below 0.05.
Ethical considerations
Approval was given by the Research and Ethics Committee of Armed Forces Hospitals, with the application number 2023-822.
TABLE 2: Habits and diseases of the participants (N=378)
As illustrated in Figure 1, the prevalence of shoulder pain in Taif City was (124, 32.8%).
FIGURE 1: Prevalence of shoulder pain in Taif City
Table 3 illustrates the participants' knowledge about frozen shoulder. Most of them did not know about frozen shoulder (321, 84.9%). More than half of the participants reported that individuals aged between 40 and 60 are the most affected by frozen shoulder (208, 55%); also, (217, 57.4%) reported all the signs and symptoms of a frozen shoulder. The most reported risk factors for a frozen shoulder were long-term immobilization (182, 48.1%), followed by diabetes mellitus and shoulder trauma (155, 41%). Finally, anxiety (167, 44.2%) and depression (134, 35.4%) were reported to increase the risk of a frozen shoulder. Additionally, regarding individuals' habits and comorbidities (Table 5), practicing bodybuilding (odds ratio (OR)=1.42, p=0.035), having diabetes (OR=1.88, p<0.001), and having comorbidities (OR=2.27, p<0.001) showed a significant correlation with shoulder pain. However, smoking status showed no significant correlation with shoulder pain.
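As an aside on how the odds ratios in Table 5 are derived, the Python sketch below computes an OR from a 2x2 exposure table; the cell counts are invented for illustration (the paper reports the ORs but not the underlying counts), so the output only approximates the reported OR of 1.88 for diabetes:

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    # OR = (a*d) / (b*c) for the 2x2 table [[a, b], [c, d]]
    return (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)

# Hypothetical split of 378 participants: 100 with diabetes, 124 with shoulder pain.
print(round(odds_ratio(44, 56, 80, 198), 2))  # 1.94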
TABLE 6: Correlated factors associated with knowledge about frozen shoulder
p-value <0.05 is considered statistically significant.
Discussion
This study aimed to assess the prevalence of shoulder pain and awareness of frozen shoulder among the general population in Taif City, Saudi Arabia. The prevalence of shoulder pain in Taif City was (124, 32.8%). Shoulder pain significantly correlated with age, income, and working style of the participants, as well as with practicing bodybuilding, diabetes, and comorbidities. Additionally, age, smoking status, bodybuilding, diabetes, and comorbidities showed a significant correlation with knowledge about frozen shoulder.
According to our study, the prevalence of shoulder pain in Taif City, Saudi Arabia, was (124, 32.8%), similar to Japan's (30%) [11]. However, the prevalence in Taif City was higher than in the urban areas of Bauru [12] (24%) and the urban regions of India (2%), as well as in rural areas of India (7.4%) [13]. Conversely, the prevalence in Taif City was lower than in the Netherlands [14] (48%) and China [15] (48.7%). These variations in prevalence rates may be because there is no uniform definition for the clinical condition and anatomical area of the cervical and shoulder regions across different studies [12].
Our research indicated that shoulder pain is commonly experienced by individuals aged 35-44 years, those 45 years or older, and those who are retired (above 60 years of age). These findings are similar to studies conducted in Bauru [13] and the Netherlands [16], which also reported that shoulder pain was linked with people aged 60 years or older. However, in Japan, shoulder pain was observed more frequently in young adults [16]. The increased prevalence of shoulder pain in older adults can be attributed to degenerative changes in muscles, tendons, ligaments, and joints that naturally occur with aging, chronic overload in elderly workers, and prolonged exposure to occupational risk factors [16].
Unfortunately, (321, 84.9%) of individuals in Taif did not know about frozen shoulder, whereas in a previous study in Aseer in Saudi Arabia, 69.31% had sufficient knowledge about frozen shoulder [6]. Regarding the symptoms of frozen shoulder, the Aseer region study [6] reported that about half believed in multiple symptoms, in line with our study, where most participants reported all symptoms (shoulder pain, decreasing shoulder range of motion (ROM), stiffness, and swelling). Additionally, the most reported risk factors in our study for a frozen shoulder were long-term immobilization, diabetes mellitus, thyroid diseases, and shoulder trauma, which is in line with the published literature [17,18]. In our study, a low percentage agreed that anxiety and depression increase the risk of a frozen shoulder, in contrast to the Aseer study, which demonstrated that a much higher percentage of individuals thought that depression and anxiety could increase the chance of shoulder diseases [6].
In our study, age was a significant factor affecting awareness about frozen shoulder in Taif City, with participants aged 25 to 34 years having higher knowledge. However, the Aseer region study [6] reported that gender was a significant factor in awareness about frozen shoulder. This may reflect cultural differences between regions of Saudi Arabia. Additionally, participants who smoked, practiced bodybuilding, had diabetes, or had other comorbidities had higher knowledge about frozen shoulder than others. This may reflect the greater likelihood of experiencing frozen shoulder among individuals with such factors.
Limitations
These results were obtained only from Taif City and cannot be generalized to the entire Saudi population.More extensive studies may help better understand the factors influencing the prevalence of shoulder pain and awareness about frozen shoulders in Saudi Arabia.
Conclusions
Our findings have shown widespread shoulder pain among the general population in Taif City. Certain habits and comorbidities had an impact on the prevalence of shoulder pain. Additionally, most of Taif's population had low awareness of frozen shoulder. Therefore, it is essential to provide campaigns to increase awareness about frozen shoulder, its symptoms, and the habits and diseases affecting shoulder pain.
TABLE 3: Respondents' knowledge about frozen shoulder
Factors correlated with shoulder pain were assessed in Table 4. Shoulder pain had a significant correlation with age (p<0.001), income (p<0.001), and working style (p<0.001) of the participants. A higher prevalence of shoulder pain was observed in participants aged 35 to 44 years, in those with income from 6000 to 10000 SAR, and in retired participants. However, the gender and education level of the participants showed no significant correlation with shoulder pain.
TABLE 4: Correlation between shoulder pain and respondents' characteristics
N: number, SAR: Saudi Riyal. p-value <0.05 is considered statistically significant.
TABLE 5: Association between shoulder pain and the habits and diseases of the participants
Concerning factors associated with increased awareness of frozen shoulder (Table 6), no significant correlation was seen between the individuals' characteristics and knowledge except for age (p=0.026). Participants aged 25 to 34 years (13, 23.2%) had the highest knowledge about frozen shoulder.
N: number, p-value <0.05 is considered statistically significant. | 2024-04-16T15:04:59.961Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "9d8417c1f169df71d9b54760997a2f218ab5f55f",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/243002/20240414-8668-11ftrdy.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0e18b2e186d4c59e5bb2429d3ff4d03ff3eaad6c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
4543530 | pes2o/s2orc | v3-fos-license | Does Epstein-Barr virus infection have an influence on the development of laryngeal carcinoma? Detection of EBV by Real-Time Polymerase Chain Reaction in tumour tissues of patients with laryngeal carcinoma
Epstein-Barr virus (EBV) is a well-known carcinogenic virus, and the association of EBV with some tumours suggests that there may also be an association between laryngeal carcinoma and EBV. Objective The aim of this study is to determine the role of EBV in the aetiology of laryngeal carcinoma. Method Prospective investigation of EBV by real-time polymerase chain reaction in tumour tissues of 25 patients with laryngeal carcinoma and 17 patients with benign laryngeal lesions, and investigation of the relationship between the presence of viral DNA and patients' smoking habits, alcohol consumption, and the localization and differentiation of the tumour. Results There was no significant difference between the control group and the patient group in terms of EBV polymerase chain reaction positivity (p > 0.05). We also could not find a statistically significant relationship between EBV positivity and the differentiation of the tumour, localization of the tumour, or smoking and alcohol consumption habits (p > 0.05). Conclusion Our results suggest that, although EBV is present in some squamous cell laryngeal carcinomas, its presence has no effect on the pathogenesis of laryngeal carcinomas.
INTRODUCTION
Head and neck malignancies account for 4% of all types of cancers, and laryngeal carcinoma accounts for 25% to 40% of head and neck malignancies 1 . The role of many factors, especially tobacco use and alcohol consumption, has been clearly shown in the development of laryngeal carcinoma. It is also known that certain viruses have oncogenic potential, and the relationship between laryngeal carcinoma and viruses has been a popular subject of research for many years [2][3][4] .
Epstein-Barr virus (EBV) is present in all populations, infecting more than 95% of human beings within the first decade of life 5 . The host range of the Lymphocryptovirus genus, which also includes EBV, is generally restricted to primate B lymphocytes, which are also the site of latent virus infection in vivo. Infection of primate B-lymphocytes with lymphocryptoviruses typically results in a latent infection characterized by persistence of the viral genome with expression of latent gene products that contribute to the transformation process and cell proliferation 6 . The close relationship between EBV infection and nasopharyngeal carcinoma has been widely accepted 7 . Carcinomas that share the histological features of undifferentiated nasopharyngeal carcinomas have been identified in other sites of the body, including the thymus, larynx, tonsils, salivary glands, lungs, skin, uterine cervix, bladder, and stomach 2,[7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23] . Recently, studies have reported polymerase chain reaction detection of EBV in a significant percentage of breast and hepatocellular carcinomas 5,7,8,23 . Some studies have also suggested a possible role of EBV in the development of squamous cell laryngeal carcinoma 8 .
Objective
We investigated the DNA of EBV with a sensitive and specific molecular method, real-time polymerase chain reaction (RT-PCR), in tumour tissues of patients with laryngeal carcinoma to determine the role of EBV in the aetiology of laryngeal carcinoma. We also analyzed the relationship between the presence of viral DNA and patients' smoking habits, alcohol consumption, localization of the tumour (glottic, supraglottic or subglottic) and differentiation of the tumour (well, moderate, poor).
METHOD
Samples taken from fresh tumour tissues of 25 randomly selected patients who attended the Department of Otolaryngology-Head and Neck Surgery between November 2007 and November 2008 with complaints of hoarseness, dyspnea, cough or sore throat, and who were diagnosed with laryngeal carcinoma based on pathology results following laryngectomy or biopsy, were included in the study.
The control group was composed of fresh tissue samples taken from patients operated on for benign laryngeal lesions such as laryngeal polyps, nodules, cysts or granulomas. Biopsies taken from patients with a diagnosis of laryngeal cancer that subsequently proved to be benign lesions after pathologic examination were also included in the control group. A total of 17 samples obtained from patients with benign lesions constituted the control group. Biopsies taken from premalignant lesions such as leukoplakia or dysplasia were not included in the study. The study was done with the approval of the ethics committee of the institution (approval nº 09-230). Every patient was informed about the study preoperatively and signed consent was obtained.
All samples were taken in the operating room from fresh tissue biopsies just before formalin fixation of the tissue, in a sterile manner to avoid contamination risk.
Patients underwent a thorough head and neck examination including indirect laryngoscopic evaluation, and were assessed in terms of localization of the tumour, smoking and alcohol consumption habits, duration of smoking and alcohol consumption, and histopathological type of the tumour. Data on smoking and alcohol consumption habits were collected preoperatively with the help of a specific questionnaire that included questions about the duration and amount of consumption. Necessary imaging studies were done in patients suspected of having cancer before direct laryngoscopic biopsy.
The presence of EBV DNA in tissue samples was investigated by quantitative PCR using the RT-PCR technique.
Obtaining DNA
EBV DNA was extracted from the samples using the QIAamp DNA Mini Kit (Qiagen, Germany) in accordance with the kit's user manual.
Amplification of DNA
DNA obtained from tissue samples was amplified on a Rotor-Gene 6000 (Corbett Research, Australia) instrument using the Qiagen Artus EBV RT PCR kit (Catalog Number 4501263) (Lot Number 130162115) (Sensitivity 3.8 copies/µl). In every run, one negative control was used to avoid contamination risk.
Evaluation of Data
Data analyses were performed using the Rotor-Gene software, version 1.7.75. The EBV quantitation kit includes two fluorescent dyes, JOE and FAM. JOE serves as the internal control, while FAM indicates EBV DNA positivity. JOE is detected in the yellow channel at a wavelength of 530-555 and FAM in the green channel at a wavelength of 470-510.
Results were interpreted as follows: 1. If the FAM channel is positive, EBV DNA is positive in the sample; if positivity is very high, the JOE channel may be negative. 2. If the FAM channel is negative and the JOE channel is positive, EBV DNA is negative in the sample. If the JOE channel is also negative, the reaction was considered to have been inhibited by an inhibitor, and the analysis was repeated.
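The channel logic above is simple enough to express as a small decision routine; this Python sketch is ours (the Rotor-Gene software performs this internally) and the function name is hypothetical:

def interpret_ebv_pcr(fam_positive, joe_positive):
    # FAM reports EBV DNA; JOE is the internal control.
    if fam_positive:
        # Very strong positives can out-compete the internal control,
        # so a negative JOE channel is acceptable here.
        return "EBV DNA positive"
    if joe_positive:
        return "EBV DNA negative"
    # Neither channel amplified: the reaction was likely inhibited.
    return "invalid - repeat the analysis"

print(interpret_ebv_pcr(fam_positive=False, joe_positive=True))  # EBV DNA negative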
Statistical analysis
The Chi-square test was used to compare the EBV PCR positivity of the study and control groups and to determine the association between EBV PCR positivity and patients' smoking habits, alcohol consumption, localization of the tumour (glottic, supraglottic, subglottic) and differentiation of tumour tissue (well, moderately, poorly differentiated). All statistical calculations were performed using commercially available software (SPSS version 15.0 for Windows; SPSS Inc, Chicago, Illinois), and p < 0.05 was considered significant.
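As an illustration of the group comparison, the 2x2 table below uses the positivity counts reported in the Results (10/25 cancer patients and 8/17 controls EBV PCR positive); note that scipy applies Yates' continuity correction by default for 2x2 tables, so the exact statistic may differ slightly from the SPSS output, but the conclusion (p > 0.05) is the same:

from scipy.stats import chi2_contingency

#            EBV+  EBV-
table = [[10, 15],   # laryngeal carcinoma group (n = 25)
         [ 8,  9]]   # benign lesion control group (n = 17)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}")  # p ~ 0.9, i.e. no significant difference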
RESULTS
Samples taken from 25 male patients (aged between 42 and 67 years, with a mean of 54.6) who attended the Otorhinolaryngology clinic and were diagnosed with laryngeal carcinoma after direct laryngoscopic biopsy were included in the study. Samples were taken during partial or total laryngectomy in 13 patients and during laryngoscopic biopsy in 12 patients. The control group was composed of tissue samples taken from 17 patients (mean age 48.8; 13 males (76.5%) and four females (23.5%)) who had attended the clinic with hoarseness and were found to have a benign laryngeal lesion (e.g. laryngeal polyp, granuloma, cyst) after laryngoscopic examination and biopsy. After head and neck examinations and direct laryngoscopy, patients with laryngeal carcinoma were divided into three groups, glottic, supraglottic and subglottic, based on the localization of the tumour. The tumour was glottic in 64% (16/25) of the patients, while supraglottic laryngeal carcinoma was detected in 36% (9/25). No subglottic lesion was detected in our study group.
Tissue samples taken from the patients were analyzed by quantitative PCR using the RT-PCR technique for detection of EBV DNA. The internal control was positive in all patients.
EBV PCR positivity was found in 40% (10/25) of the laryngeal carcinoma patients. EBV DNA was ≤10³ copies/ml in three patients, 10³-10⁵ copies/ml in six patients and above 10⁵ copies/ml in one patient. It was positive in 66.7% (6/9) of the supraglottic tumours and in 25% (4/16) of the glottic tumours. In the control group, 52.9% (9/17) of the patients were EBV PCR negative, while the remaining 47.1% (8/17) were EBV PCR positive. In five of these patients, EBV DNA was ≤10³ copies/ml, and in the remaining three, EBV DNA was 10³-10⁵ copies/ml. There was no significant difference between the control group and the patient group in terms of EBV PCR positivity, and no direct correlation was found between EBV and the pathogenesis of laryngeal squamous cell carcinoma (SCC) (p > 0.05) (Table 1). Also, there was no significant relationship between EBV DNA positivity and localization of the tumour (p > 0.05). Pathologic investigation of the samples revealed that 44% (11/25) of the patients had well differentiated SCC, 32% (8/25) had moderately differentiated SCC, 20% (5/25) had poorly differentiated SCC and 4% (1/25) had basaloid type SCC. EBV DNA positivity was found in 27.3% (3/11) of the patients with well differentiated SCC, in 37.5% (3/8) of the patients with moderately differentiated SCC, in 60% (3/5) of the patients with poorly differentiated SCC, and also in the only patient with basaloid type SCC (Table 1). We could not find a statistically significant relationship between EBV positivity and differentiation of the tumour (p > 0.05).
Patients with cancer and controls were interviewed to determine their smoking and alcohol consumption habits. In the laryngeal carcinoma group, 96% (24/25) of the patients were smokers, with only one non-smoker. The smoking period ranged from 15 to 43 years (mean 30 ± 4.5 years) with 20 to 40 cigarettes daily (mean 24.5 ± 8.25 cigarettes). Sixty percent (15/25) of the patients drank alcohol on a regular basis (at least twice a week) for a duration of 10 to 30 years (mean duration of alcohol consumption 22.5 years). In the control group, 76.5% (13/17) of the patients were smokers. The smoking period ranged from 14 to 39 years (mean 27.3 ± 3.8 years) with 20 to 40 cigarettes daily (mean 23.5 ± 7.5 cigarettes). Of these patients, 17.6% (3/17) were found to consume alcohol regularly, with a mean duration of alcohol consumption of 28.3 years (20 to 35 years). EBV DNA positivity was 37.5% (9/24) in tissue samples of smoking cancer patients. In the control group, EBV DNA positivity was 46.2% (6/13) among smokers. In the cancer group, samples taken from patients who consumed alcohol revealed an EBV DNA positivity of 40% (6/15). EBV DNA was not found in samples taken from control patients who consumed alcohol. There was no statistically significant association between EBV DNA positivity and smoking or alcohol consumption habits (p > 0.05) (Table 2).

DISCUSSION

EBV is present in all populations, infecting more than 95% of human beings within the first decades of life 7 . In developing countries, certain cultural practices often lead to EBV exposure in early childhood, and primary EBV infection in young children is typically associated with an unremarkable acute syndrome. In more developed countries, however, infection is often delayed, and acute primary EBV infection occurring in adolescence or adulthood can result in a self-limiting lymphoproliferative disorder known as infectious mononucleosis (IM) 6 .

The strongest environmental factor in the pathogenesis of laryngeal carcinoma is smoking. Gastroesophageal reflux, radiation, consumption of fruits and vegetables rich in carotenoids, and exposure to wood dust, heavy metals and coal dust are among the suspected etiological factors 26 . Additionally, it has been thought that viral factors could play a role in the etiology of laryngeal carcinoma. Some studies, with differing results, have been conducted on the role of viral factors, mainly investigating the effects of Human Papillomavirus (HPV) and EBV in the etiology of laryngeal carcinoma 27 . Although various HPV types have been shown in samples taken from laryngeal carcinoma patients in some studies, HPV has not been considered to have a strong carcinogenic effect in the development of laryngeal carcinomas because these observations were exceptional 28,29 . Recently, there have been some reports presenting an association of EBV with laryngeal carcinoma 8 , and a number of reports refuting these data 19,30 .

Gök et al. 20 investigated the presence of EBV DNA in formalin-fixed, paraffin-embedded tissue samples from 22 patients with squamous cell carcinoma of the larynx and from 17 patients with vocal cord nodules by PCR. Polymerase chain reaction showed EBV DNA in 11 patients (50%) with laryngeal carcinoma and in seven patients (41.2%) with vocal cord nodules. They could not find any significant difference between groups in terms of EBV DNA positivity and duration of smoking, the number of cigarettes consumed daily, localization of the disease or tumour stage, which is consistent with our results.

Goldenberg et al. 21 also could not find any significant relationship between EBV and tumour development in a study performed on three hundred patients with head and neck cancer, including larynx, hypopharynx, oropharynx and oral cavity tumours. They also could not find any correlation between EBV positivity and tobacco exposure, alcohol consumption or tumour grade. They detected low quantities of EBV in a minority of head and neck cancers and attributed this to the presence of the EBV genome in rare lymphoid or epithelial cells adjacent to the primary head and neck cancer.

In the study of de Oliveira et al. 2 , EBV was studied with molecular biological techniques in paraffinized tumour tissues taken from 110 patients with squamous cell laryngeal carcinoma, and EBV was detected in none of the patients. Similarly, Atula et al. 19 suggested that EBV was not associated with laryngeal carcinoma after analyzing EBV DNA in 79 frozen biopsy samples of head and neck cancer patients with Southern blot hybridization and PCR.

In their study, Vlachtsis et al. 22 demonstrated EBV DNA positivity in 39 (43.3%) of 90 laryngeal SCC patients, while both HPV and EBV positivity was found in 19 (21.1%) of them. It is impossible to determine the effect of EBV on laryngeal carcinoma from their results because they did not have a control group, but their EBV DNA positivity rate in laryngeal SCC was similar to what we found.

Kiaris et al. 8 have also studied the incidence of EBV in SCC of the larynx. They analyzed EBV DNA presence by sensitive PCR and used RFLP (restriction fragment length polymorphism) for further confirmation of the specificity of the PCR-amplification reaction. EBV DNA was positive in 9 of the 27 tumour tissues, while only four (15%) specimens from adjacent normal tissue exhibited evidence of EBV infection. Three samples were EBV positive for both normal and tumour tissue. The researchers found a relatively high incidence of EBV in the tumour tissue (33%) of patients with laryngeal cancer, as compared to the low incidence (15%) of the virus genome detected in the adjacent normal tissue, which indicates a probable role of EBV in the development of the disease. However, they found no association between EBV positivity and the stage of the disease or histological differentiation.
In our study, we could not find a significant difference between the control group and the patient group in terms of EBV PCR positivity or EBV viral load. Similarly, there was no direct association between EBV and the pathogenesis of laryngeal squamous cell carcinoma. Most previously performed studies support our results, but a few studies demonstrate an association between EBV and laryngeal SCC. We believe that such contradictory results are due to small sample sizes and the variety in sensitivity and specificity of the methods used for detection of EBV.
We also could not find a significant association between EBV positivity and the localization or differentiation of the tumour (p > 0.05). Furthermore, since smoking and alcohol consumption are well-established risk factors for the development of laryngeal carcinoma, we investigated the association between EBV positivity and these factors, and we could not demonstrate any relationship between smoking and alcohol consumption habits and either EBV positivity or viral load (p > 0.05). These results suggest that EBV does not play a synergistic role with irritative factors such as tobacco smoke and alcohol in the development of laryngeal carcinomas.
The main advantage of our study was the use of fresh tissue samples for the determination of EBV DNA. Most previous studies investigating the presence of EBV were performed on samples taken from formalin-fixed, paraffin-embedded tissues, and formalin is a known inhibitor of PCR 31 . Hence, some false negative results could have been obtained in some of the previous studies.
In this study, surgical specimens of patients with benign laryngeal lesions were accepted as controls, since taking tissue samples from healthy volunteers was not reasonable. Interestingly, EBV DNA was positive in 47.1% of patients in this group, a ratio higher than in the study group. It can therefore be suggested that EBV is a very common virus that can stay latent in mucosal cells of the upper airway in a considerable proportion of the population.
CONCLUSION
The recently established association of EBV, especially with undifferentiated nasopharyngeal carcinoma, has led to the consideration that there may also be an association between laryngeal carcinoma and EBV. However, our results, in concordance with the results of the majority of previous studies, suggest that EBV is a very common virus that can be found in the mucosal cells of the upper airway in a considerable proportion of the population and that, although EBV is present in cancer tissues of some squamous cell laryngeal carcinomas, its presence has no effect on the pathogenesis of laryngeal carcinomas. Further multicentric studies with large sample sizes are needed to clearly demonstrate the relationship between EBV and squamous cell laryngeal carcinoma. In this way, should an association be found, positive steps could be taken in the prevention and management of laryngeal carcinomas. | 2018-04-02T08:45:06.455Z | 2013-05-01T00:00:00.000 | {
"year": 2013,
"sha1": "642beff3ea16dfc1a79c5224ea794a9ed95808a4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5935/1808-8694.20130075",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "680d438e8116dd7c21974931981de29d844a8755",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236371671 | pes2o/s2orc | v3-fos-license | Progression of a Muscular Dystrophy Due to a Genetic Defect in Membrane Synthesis is Driven by Large Changes in Neutral Lipid Metabolism
CHKB encodes one of two mammalian choline kinase enzymes that catalyze the first step in the synthesis of the major membrane phospholipid, phosphatidylcholine (PC). In humans, inactivation of the CHKB gene causes a recessive form of a rostral-to-caudal congenital muscular dystrophy. Using Chkb knockout mice, we reveal that at no stage of the disease is PC level significantly altered. Instead, at early stages of the disease the levels of the mitochondria-specific lipids acylcarnitine (AcCa) and cardiolipin (CL) increase 15-fold and 10-fold, respectively. Importantly, these changes are only observed in affected muscle and contribute to the decrease in skeletal muscle functional output in these mice. As the disease progresses, AcCa and CL levels normalize and there is a 12-fold increase in the neutral storage lipid triacylglycerol and a 3-fold increase in its upstream lipid diacylglycerol. Our findings indicate that the major change in lipid metabolism upon loss of function of Chkb is not a change in PC level, but instead an initial inability to utilize fatty acids for energy, resulting in shunting of fatty acids into triacylglycerol.
Introduction
Phosphatidylcholine (PC) is the major phospholipid present in mammalian cells, comprising approximately 50% of phospholipid mass. Choline kinase catalyzes the phosphorylation of choline to phosphocholine and is the first enzymatic step in the synthesis of PC 1 . There are two genes that encode human choline kinase enzymes, CHKA and CHKB. Monomeric choline kinase proteins combine to form homo- or heterodimeric active forms 2 . CHKA and CHKB proteins share similar structures and enzyme activity but display some distinct molecular structural domains and differential tissue expression patterns. Knock-out of the murine Chka gene leads to embryonic lethality 3 .
Chkb-deficient (Chkb-/-) mice are viable but noticeably smaller than their wild type counterparts, and show severe bowing of the ulna and radius at birth. By 2-3 months of age Chkb-/- mice lose hindlimb motor control, while the forelimbs are spared 4,5 .
Inactivation of the Chkb gene in mice would be predicted to decrease PC levels; however, reports indicate no, or a very modest, decrease in PC level in Chkb-/- mice, and this decrease is similar in both forelimb and hindlimb muscle 6,7 . The very small decrease in PC mass, and the fact that there is no rostral-to-caudal change in PC, suggest a poor correlation between the anticipated biochemical defects and the observed rostral-to-caudal phenotype of this muscular dystrophy 5 . It is unclear how a defect in a gene required for the synthesis of the major phospholipid in mammalian cells causes a muscular dystrophy, especially in light of the fact that global inactivation of the CHKB/Chkb gene (human or mouse) does not affect the level of the product of its biochemical pathway, PC.
Muscular dystrophies have been mapped to at least 30 different causal genes 15 . The most common types of muscular dystrophy result from mutations in genes coding for members of protein complexes that act as linkers between the cytoskeleton of the muscle cell and the extracellular matrix, providing mechanical support to the plasma membrane during myofiber contraction 16,17 . Muscular dystrophies result in fibrofatty replacement of muscle tissue, progressive muscle weakness, functional disability and often early death 18-20 .
Skeletal muscle accounts for 20-30% of whole body basal metabolic rate 21 . Fatty acid oxidation is the major source of ATP for skeletal muscle during the resting state 22 .
Fatty acids can be synthesized de novo by cells or can be obtained extracellularly, with the bulk of lipids delivered to cells through the circulation via serum albumin or lipoprotein/lipoprotein receptors. For fatty acids to be metabolized they are first activated by esterification to fatty acyl-CoA. Subsequently, they have divergent fates depending on the metabolic status of the cells (Fig. 1). The three major fates of fatty acids are 1. conversion to fatty acyl carnitine for subsequent mitochondrial β-oxidation to provide energy, 2. the synthesis of neutral lipid species for storage as triacylglycerol (TG) rich cytoplasmic lipid droplets, or 3. metabolism into phospholipids, such as PC, to maintain membrane integrity. Fatty acids can also directly bind peroxisome proliferatoractivated receptors (PPARs), key players in the regulation of lipid metabolism by altering the expression of genes required for the conversion of fatty acids to fatty acyl-CoA for phospholipid and TG synthesis, and for fatty acid activation to acylcarnitine (AcCa) for entry into mitochondria and subsequent fatty acid β-oxidation 23 .
In the present study, we use mouse and cell models to investigate the temporal changes in lipid metabolism in the absence of the Chkb gene. Results demonstrate that PC level remains essentially unchanged. Instead, this genetic defect in PC synthesis drives large fluctuations in mitochondrial lipid metabolism with an inability to use fatty acids for mitochondrial β-oxidation resulting in a temporal shunting of fatty acids into TG and their storage as lipid droplets. These changes were specific to affected muscle.
This study provides insight into the surprising biochemical phenotype whereby a genetic block in a lipid metabolic pathway does not directly affect the product of its pathway, and instead alters tangential pathways in a manner that explains the rostral-to-caudal gradient of a genetic disease.
Choline kinase deficient mice display hallmark muscular dystrophy phenotypes
To address the extent to which mice lacking Chkb function display gross muscular dystrophy phenotypes, we tested muscle function in Chkb+/+, Chkb+/- and Chkb-/- mice from 6 weeks to 20 weeks of age using a grip strength assay and a total-distance-run-to-exhaustion test. Body weight was also recorded each week at similar times over the duration of the phenotyping experiments. Body weights of the Chkb+/+ and Chkb+/- mice showed no difference between groups (Fig. 2a). The Chkb-/- mice weighed significantly less than their wild type counterparts at all time points. The average body weight of Chkb-/- mice was 33% to 42% less than that of Chkb+/+ mice at week 6 and week 20, respectively.
Forelimb grip strength measurements were performed at three different timepoints and normalized to body weight. The Chkb-/- mice had significantly lower normalized forelimb strength (less than half) than wild type mice at all three timepoints (weeks 6, 12 and 18) (Fig. 2b). Another measure of neuromuscular function is resistance to treadmill running, evaluated as the total distance that each mouse is able to run until exhaustion. The test was performed in all groups at three timepoints (weeks 7, 13 and 19). The total distance covered by the wild type mice before exhaustion was similar at all three time points (Fig. 2c). There was no significant difference between the Chkb+/+ and Chkb+/- groups; these mice maintained the ability to cover the same total distance before exhaustion (week 7 vs. week 19; non-significant). At week 7, the Chkb-/- mice showed a baseline total distance run that was 50% that of the wild type or Chkb+/- mice. Moreover, the Chkb-/- mice showed a decline in running performance from week 7 to week 19, with an almost complete inability to run observed by week 19. Gross measurements of neuromuscular strength in whole mice demonstrate that mice heterozygous for the Chkb gene display phenotypes similar to wild type mice. Notably, mice lacking both copies of the Chkb gene display significant overt neuromuscular deficits.
The level of circulating creatine kinase (CK), a biomarker of sarcolemmal injury, was determined in Chkb+/+, Chkb+/-, and Chkb-/- mice. No significant change in the serum level of CK was observed in Chkb+/- heterozygous mice compared to wild type. CK activity was 2.5-fold higher in Chkb-/- null mice than in wild type mice (Fig. 2d).
To determine whether the decreased neuromuscular performance observed in the Chkb-/- mice was due to a direct effect on muscle itself, the maximal specific force generated by freshly isolated extensor digitorum longus (EDL) muscle from the hindlimb of Chkb+/+, Chkb+/-, and Chkb-/- mice at week 20 was determined. EDL muscle fatigue was measured with 60 isometric contractions of 300 ms each, once every 5 s, at 250 Hz.
There was no significant difference between wild type and heterozygous Chkb mice in specific force generation or in the decrease in specific force during fatigue (Fig. 2e, f). Chkb-/- mice displayed a specific EDL force that was 10% of that of Chkb+/+ or Chkb+/- mice. In addition, Chkb-/- mice were, at the first stimulation, already at the maximally fatigued levels observed in Chkb+/+ or Chkb+/- mice after 60 muscle stimulations. Hindlimb muscle from Chkb-/- mice produces less force, and is much more easily fatigued, than that of wild type or Chkb heterozygous mice.
Similar to humans 8,10 , mice with one functional copy of the Chkb gene do not display any obvious overt muscle dysfunction, whereas mice homozygous null for the Chkb gene display hallmark muscular dystrophy phenotypes.
Chka protein expression is inversely correlated with the rostro-caudal gradient of severity in Chkb-mediated muscular dystrophy
Consistent with the rostral-to-caudal nature of Chkb-associated muscular dystrophy, transmission electron micrographs of 115-day-old Chkb-/- mice show extensive injury in hindlimb (quadriceps and gastrocnemius) but not forelimb (triceps) muscle (Supplementary Fig. 1a-c). Chkb encodes choline kinase b, the first enzymatic step in the synthesis of PC, the most abundant phospholipid present in eukaryotic membranes. A second choline kinase, Chka, is present in mouse (and human) tissues. We investigated whether the lack of dystrophic phenotypes in Chkb+/- mice, and the rostro-caudal gradient of muscular dystrophy in Chkb-/- muscle, can be explained by compensatory changes in Chkb or Chka protein levels using western blot. In Chkb+/- mice, there was a ~50% decrease in Chkb protein in both the forelimb and hindlimb muscles compared to wild type (Fig. 3a, b). There was no change in Chka protein level in hindlimb muscle of Chkb+/- mice compared to wild type, and a small but statistically insignificant increase in Chka level in forelimb muscle.
In Chkb-/- mouse forelimb and hindlimb muscle, Chkb protein expression was undetectable, consistent with the allele not producing Chkb protein. In forelimb muscle from Chkb-/- mice there was a compensatory upregulation of Chka protein expression to almost 3-fold that observed in wild type mice. In contrast, in hindlimb muscle from Chkb-/- mice, Chka protein expression was decreased to less than 10% of that observed in wild type mice. The compensatory level of Chka protein expression thus inversely correlates with the rostro-caudal gradient of severity in Chkb-associated muscular dystrophy.
Loss of Chkb activity exerts a major effect on neutral lipid abundance
PC synthesis is integrated with the synthesis of other major phospholipid classes, as well as AcCa, fatty acids and the neutral lipids diacylglycerol and triacylglycerol (Fig.1).
Lipidomics was used to determine whether complete loss of Chkb function, and the associated upregulation of Chka in the forelimb but not hindlimb muscle of Chkb-/- mice, differentially altered lipid metabolism. The levels of the major glycerophospholipids, neutral lipids and acylcarnitine in hindlimb and forelimb muscle isolated from 12-day-old and 30-day-old Chkb+/+ and Chkb-/- mice were quantified.
In the forelimb and hindlimb muscle of both 12-day-old and 30-day-old Chkb-/- mice, the level of PC was the same as in wild type mice (Fig. 4a-d). In 12-day-old Chkb-/- mice the largest change observed was a 15-fold increase in AcCa level in hindlimb muscle, and to a lesser degree (~2-fold increase) in forelimb, compared to their wild type littermates. The second largest change in 12-day-old mice was a 10-fold increase in the level of cardiolipin (CL) in hindlimb muscle that was not present in forelimb muscle of Chkb-/- mice. Phosphatidylethanolamine (PE) and phosphatidylinositol (PI) levels were also slightly increased (~1.5-fold) in both forelimb and hindlimb muscles of 12-day-old Chkb-/- mice. The large changes in lipid levels in hindlimb, versus forelimb, muscle of Chkb-/- mice are consistent with the rostral-to-caudal nature of the muscular dystrophy observed in these mice.
Considering the progressive nature of the disease, we tracked the changes in the lipid profile in the hindlimb of 30-day-old Chkb-/- mice, when muscle injury is more pronounced. In sharp contrast to 12-day-old mice, AcCa and CL levels were no longer increased and were at the same level as in wild type mice. Instead, there was a 12-fold increase in the neutral storage lipid TG and a 3-fold increase in its precursor DG in the hindlimb samples of Chkb-/- mice (Fig. 4e, f). PE and PS levels were 2-3-fold higher in the hindlimb samples from 30-day-old Chkb-/- mice compared to wild type littermates.
There is thus a temporal shift, only in affected muscle of Chkb-/- mice, from a 12- to 15-fold increase in CL and AcCa to a similar increase in TG.
As AcCa levels are many-fold higher than in wild type mice at the early stage of Chkb-/- muscular dystrophy, this implies that the affected muscles are defective in using fatty acids for the production of cellular energy by mitochondrial β-oxidation. As Chkb-/- muscular dystrophy progresses, the affected muscles appear to adapt to this inability to consume fatty acids by transitioning toward energy storage, indicated by the large increase in TG.
Increased intramyocellular lipid droplet accumulation and enlarged mitochondria in hindlimb muscles from Chkb-/- mice
To understand early ultrastructural pathological changes, and to further explore the observed lipid alterations, we examined muscle by transmission electron microscopy; cytoplasmic lipid droplets were frequently observed in close association with mitochondria (Fig. 5a and Supplementary Fig. 2c). In 115-day-old Chkb-/- mice, cytoplasmic lipid droplets increased substantially in size (Fig. 5a).
We also evaluated TG accumulation in muscle using confocal microscopy by staining hindlimb muscle sections of 30-day-old Chkb-/- mice with BODIPY 493/503 (Fig. 5b). A Concanavalin A dye conjugate (CF™ 633) and DAPI were used to stain membranes (red) and nuclei (blue), respectively. Consistent with our TEM and lipidomics results, BODIPY-stained lipid droplets were noticeably more frequent and larger in Chkb-/- hindlimb muscles compared to wild type littermates. The same pattern of lipid droplet staining was observed using Nile red staining (Supplementary Fig. 2a, b).
The lipidomics results point to large changes in mitochondria-specific lipids at the early stages of Chkb-associated muscular dystrophy. To further explore the nature of these changes, we investigated the temporal development of morphological changes in mitochondria in hindlimb muscle of Chkb-/- mice using standard TEM stereological methods 25 . The results show that at 12 days of age, the size of mitochondria was increased.
Chkb deficiency results in increased lipid droplet accumulation in differentiated myocytes in culture
To address whether the observed increase in TG in Chkb-/- mice was due to muscle-specific events or to larger physiological changes that then impact muscle physiology, we assessed TG levels in primary cultured muscle cells following myoblast differentiation.
We first determined if Chkb deficiency alters differentiation in primary myoblasts.
Primary muscle cell cultures were examined for their transition from a single-cell proliferative state to differentiated multinucleated myotubes. During differentiation, mononuclear myoblasts fuse to form myocytes (myotubes), which are large multinucleated cells. We isolated skeletal myoblasts from Chkb+/+ and Chkb-/- mice and induced differentiation by switching to low growth factor serum. Representative light micrographs of cultures of dissociated myogenic cells from skeletal muscle of Chkb+/+ and Chkb-/- mice at 0, 3 and 5 days after switching to differentiation media show a similar degree of myotube formation (Fig. 6a). Chkb deficiency resulted in a compensatory upregulation of Chka gene expression, as well as a significant increase in markers of myocyte injury, namely Icam1 and Tgfb1 26 (Fig. 6b). To determine the extent of myotube differentiation, we calculated the fusion index, the fraction of nuclei located within multinucleated myotubes, by immunofluorescence staining (a minimal calculation is sketched below). There was no difference between the two genotypes.
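A minimal Python sketch of the fusion-index calculation, assuming the standard definition given above; the counts are illustrative, not the study's data:

def fusion_index(nuclei_in_myotubes, total_nuclei):
    # Percentage of nuclei residing inside multinucleated myotubes.
    return 100.0 * nuclei_in_myotubes / total_nuclei

print(fusion_index(nuclei_in_myotubes=420, total_nuclei=600))  # 70.0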
Pparg is primarily expressed in adipose tissue and regulates insulin sensitivity and glucose metabolism 27 . Using reverse transcription (RT) qPCR, we determined that the expression of Ppara and Pparb/d were 4-fold and 6-fold lower, while Pparg was 2-fold higher, in the hindlimb muscle of 30 day old mice Chkb -/mice compared to wild type Ppara and Ppar b/d are the major transcriptional reporters that regulate expression of fatty acid metabolizing genes. The many-fold decrease in the expression of these Ppars that was specific to affected muscle, along with their coreceptors and downstream target genes corroborate the lipdomics data that suggest that the major change in lipid metabolism in Chkb mediated muscular dystrophy is an inability to metabolize fatty acids via mitochondrial β-oxidation resulting in shunting of excess fatty acid into TG rich lipid droplets.
Discussion
Lipid metabolism is highly integrated. Fluctuating levels of lipid metabolites can not only alter shunting of lipids between tangential pathways, but lipids can also directly bind to transcription factors and alter gene expression of lipid metabolic genes. This study highlights these metabolic factors by determining that inactivation of a gene for PC synthesis does not alter PC level. Indeed, the changes in the level of PC do not appear to contribute to the disease phenotype. This study proposes (1) that a change in PC level is not the major metabolic driver behind this disease despite the fact that the genetic defect lies within the major metabolic pathway for the synthesis of PC, (2) a mechanistic model for defective muscle lipid metabolism in Chkb -/mice in which the balance between storage and usage of fatty acids is disrupted, and (3) a mechanism for the rostral-to-caudal gradient for Chkb mediated muscular dystrophy.
Importantly, we report that at an early stage of Chkb mediated muscular dystrophy, there is a 12-to 15-fold increase in the levels of the mitochondrial specific lipids CL and AcCa. Importantly, these changes were observed only in affected muscle of Chkb -/mice. As the disease progresses, AcCa and CL levels return to wild type, and a 12-fold increase in the storage lipid TG occurs. The increase in the mitochondrial specific phospholipid CL is quite telling as far as disease progression. Our TEM of mitochondria in affected muscle during the early stage of Chkb mediated muscular dystrophy revealed a similar number of mitochondria with intact cristae in compared to wild type, however, there was a substantive increase in large mitochondria in affected muscle of Chkb -/mice. We propose that the large increase in CL in affected muscle of Chkb -/mice in the early stage of the disease are mainly driven by the increase in mitochondrial size. As the mice aged the level of CL decreased and had returned to that of wild type by 30 days. At 30 days, mitochondrial size was still increased, however, the number of mitochondria, as well as their cristate (where the bulk of CL resides) were substantively decreased, providing a reasonable explanation for CL mass being reduced to wild type level as the Chkb -/mice aged. Previous observations of mitochondria in Chkb -/mice have only been determined in mice with advanced disease 6,30 , where similar changes in mitochondrial morphological features and numbers were observed. Indeed, one would predict that as Chkb -/mice aged there would be a further decrease in CL mass as mitochondrial numbers further decrease.
Beyond the large increase in CL mass, the other major change in lipid level at the early stage of Chkb mediated muscular dystrophy was a 15-fold increase in AcCa level in affected muscle. This implies that there is either a decreased ability to transport of AcCa into mitochondria for subsequent fatty acid β-oxidation, and/or incomplete βoxidation resulting in a backup of substrate within this pathway. In support of this idea the expression of many of the enzymes required for fatty acid transport into mitochondria and subsequent fatty acid b-oxidation were decreased many fold in affected muscle of Chkb -/mice. The increase in AcCa level at the early stage of Chkb mediated muscular dystrophy, and the decreased expression of genes required for its synthesis and use, is consistent with an inability to import AcCa into mitochondria for fatty acid b-oxidation.
As Chkb -/mice aged, AcCa and CL and levels in hindlimb muscle returned to wild type and by 30 days a dramatic 12-fold increase in TG level was observed. The increase in TG level is consistent with impaired AcCa uptake into mitochondria resulting in a shunting of fatty acids from energy source to energy storage 31 . This observation is consistent with other reports showing that inhibition of PC biosynthesis in mouse liver, and cell culture, significantly increased TG level 32,33 . One interesting additional observation from our study was that ~ 80% of the photographed lipid droplets from Chkb -/hindlimb muscles were closely associated with mitochondria ( Fig. 5a and Supplementary Fig. 2c)
Acknowledgments
We acknowledge funding support from the Canadian Institutes for Health Research (to CRM) and the Atlantic Innovation Fund (to CRM and EH). We thank Gregory Cox for sharing Chkb mice.
DECLARATION OF INTERESTS
The authors declare no competing interests.
In vivo grip strength and fatigability measurements.
Forelimb grip strength was measured using a grip strength meter (Columbus Instruments, Columbus, OH, USA) at 3 time points (6, 12, 18 weeks old) as previously described 40 . All mice were acclimated for a period of five consecutive days before testing. For each time point, Force measurements were collected in the morning hours over a 5-day period, with maximum values for each day over this period averaged to obtain absolute GSM values (Kgf) or normalized to BW (recorded on the first day of testing) for normalized GSM values (Kgf/kg). For the treadmill exhaustion assay, mice are subjected to an enforced running paradigm that tests the resistance level of fatigue in mice. The exhaustion test was performed at 3 time points (7, 13, 19 weeks old) in each group. Groups of mice were made to run on a horizontal treadmill for 5 min at 5 m/min, followed by an increase in the speed of 1m/min each minute. The total distance run by each mouse until exhaustion was measured. Exhaustion was defined as the inability of the mouse to continue running on the treadmill for 30 seconds, despite repeated gentle stimulation.
Primary myoblast isolation, culture and differentiation.
We followed a protocol outlined in Shahini et al. 41 for isolation of myoblast by enabling the outgrowth of these cells from muscle tissue fragments of Chkb +/+ and Chkb -/mice.
Briefly, the mice were euthanized via CO2, were sprayed with 70% ethanol and transferred to a sterile hood. The forelimb and hindlimb muscles were removed, finely minced into small pieces and transferred to a 50 ml conical tube. 1 ml enzymatic solution of PBS containing collagenase type II (500 U/mL), collagenase D (1.5 U/mL), dispase II (2.5 U/mL), and CaCl2 (2.5 mM) was added to the tube. The muscle mixture was placed in a water bath at 37°C for 60 minutes with agitation every 5 minutes. The suspension was centrifuged for 10 minutes at 300 g. Following centrifugation, the supernatant was removed and discarded, and the pellet was resuspended in to allow attachment of the tissues to the surface and subsequent outgrowth and migration of cells. The myogenic cell population was further purified with one round of pre-plating on collagen coated dishes to isolate fibroblasts from myoblasts. To induce differentiation into multinucleated myotubes, the cells were seeded at 10000 cells/cm 2 on plastic coverslip chambers coated with Matrigel and the medium was replaced by differentiation medium containing DMEM with high glucose and 5% HS.
Ex vivo force measurement.
At the end of the in vivo phase (Week 19), mice were deeply anesthetized with ketamine and xylazine (80 and 10 mg/kg). The extensor digitorum longus (EDL) muscle of the right hindlimb was removed for comparison of Ex vivo force contractions between groups as previously described 42,43 . Briefly, the EDL muscle was securely tied with braided surgical silk at both tendon insertions to the lever arm of a servomotor/force transducer (model 305B) (Aurora Scientific, Aurora, Ontario, Canada) and the proximal tendon was fixed to a stationary post in a bath containing buffered Ringer solution (composition in mM: 137 NaCl, 24 NaHCO3, 11 glucose, 5 KCl, 2 CaCl2, 1 MgSO4, 1 NaH2PO4 and 0.025 turbocurarine chloride) maintained at 25˚C and bubbled with 95% O2 -5% CO2 to stabilize pH at 7.4. At optimal muscle length, the maximal force developed was measured during trains of stimulation (300 milliseconds, ms) with increasing frequencies up to 250 Hz or until the highest plateau was achieved. The force generated to obtain the highest plateau was used to determine specific force (maximal force normalized to cross-sectional area of the muscle). Finally, the muscle was subjected to a fatigue protocol consisting of 60 isometric contractions for 300 ms each, once every 5 seconds. The frequency at which the EDL muscles were stimulated is 250 Hz. The force was recorded every 10th contraction during the repetitive contractions and again at 5 and 10 min afterward to measure recovery. Samples were assigned to controls and test groups. CT values were normalized based on a/an Manual Selection of reference genes. The data analysis web portal calculates fold change/regulation using delta delta CT method, in which delta CT is calculated between gene of interest (GOI) and an average of reference genes (HKG), followed by delta-delta CT calculations (delta CT (Test Group)-delta CT (Control Group)). Fold Change is then calculated using 2^ (-delta delta CT) formula. The data analysis web portal to plot scatter clustergram, and heat map.
Lipid extraction
We performed lipid extractions using the modified Bligh and Dyer extraction for LC-MS analysis of lipids protocol 44 . All reagents were of LC-MS grade. Briefly, the muscle tissue (~10mg) was homogenized with a steel bead in 1 ml of cold 0.1 N HCl:methanol 70:30%) was injected onto the column. The following system gradient was used for separating the lipid classes and molecular species: 30% solvent B for 3 min; then solvent B increased to 50% over 6 min, then to 70% B in 6 min, then kept at 99% B for 20 min, and finally the column was re-equilibrated to starting conditions (30% solvent A) for 5 min prior to each new injection.
High resolution tandem mass spectrometry and lipidomics.
Lipid analyses were carried out using a Q-Exactive Orbitrap mass spectrometer controlled by X-Calibur software 4.0 (ThermoScientific, MO, USA) with an acquisition HPLC system. The following parameters were used for the Q-Exactive mass spectrometer -sheath gas: 40, auxiliary gas: 5, ion spray voltage: 3. First, the individual data files were searched for product ion MS/MS spectra of lipid precursor ions. MS/MS fragment ions were predicted for all precursor adduct ions measured within ±5 ppm. The product ions that matched the predicted fragment ions within a ±5 ppm mass tolerance was used to calculate a match-score, and those candidates providing the highest quality match were determined. Next, the search results from the individual positive or negative ion files from each sample group were aligned within a retention time window (±0.2 min) and the data were merged for each annotated lipid.
Data cleanup and statistical analysis of lipids.
Lipid concentrations extracted from the LipidSearch software were further analyzed with an in-house script using the R programming language. The data was filtered to exclude any peak concentration estimates with a signal to noise ratio (SNR parameter) of less than 2.0 or a peak quality score (PQ parameter) of less than 0.8. If this exclusion resulted in the removal of two observation within a biological triplicate, the remaining observation was also excluded. The individual concentrations were then gathered together by lipid identity (summing together the concentration of multiple mass spectrometry adducts where these adducts originated from the same molecular source and averaging together biological replicates) and grouped within the broader categories of AcCa, TG, DG, PC, PE, PG, CL, PI, PS. The result was nine groups containing multiple lipid concentrations corresponding to specific lipid identities, which were then compared between wild type and KO samples using a (paired, non-parametric) Wilcoxon signed-rank test at an overall significance level of 5% (using the Bonferroni correction to account for the large number of tests performed). As the Bonferroni correction is fairly conservative, significant differences are reported at both precorrection (*) and post-correction (***) significance levels. Microscope at 80kV. Images were captured using a Hamamatsu ORCA-HR digital camera. Three mice per genotype for each timepoint were evaluated. The mitochondrial content was determined from the images at 10,000× magnification using Image J software and calculated as mitochondria count/field by blinded investigators. Point counting was used to estimate mitochondrial volume density and mitochondrial cristae density based on standard stereological methods 46,47 . Only mitochondria profiles of acceptable quality defined as clear visibility and no or few missing spots of the inner membrane were included. Using ImageJ software, a point grid was digitally layered over the micrographic images at 20,000x or 40,000x magnification for mitochondrial volume density and cristae density calculations respectively. Grid sizes of 85 nm x 85 nm and 165 nm x 165 nm were used to estimate mitochondria volume and cristae surface area, respectively. Mitochondria volume density was calculated by dividing the points assigned to mitochondria to the total number of points counted inside the muscle. The mitochondrial cristae surface area per mitochondrial volume (mitochondrial cristae density) was estimated by the formula: mitochondrial cristae density = (4/π) BA, where BA is the boundary length density estimated by counting intersections on test lines multiplied by π/2. In brief, we counted the intersections I(imi) between the inner mitochondrial membrane trace and the test lines and measured the total length of the test line within the mitochondria profile to calculate mitochondrial cristae density =2. I(imi)/L(mi).
Western blot analysis (WB) and quantification.
The muscle tissue (~100mg) was homogenized with a steel bead in 1 ml of cold RIPA
Quantification and statistical analysis.
All experiments were repeated 3 or more times. Data are presented as mean ± SEM or mean ± SD, as appropriate. For comparison of two groups the two-tailed Student's t-test was used unless otherwise specified. Comparison of more than two groups was done by one-way ANOVA followed by the Tukey's Multiple Comparison test. P values <0.05 were considered significant.
Data availability
All data that support the findings of this study are available from the corresponding authors upon request. | 2021-07-27T00:06:21.322Z | 2021-05-19T00:00:00.000 | {
"year": 2021,
"sha1": "a269024c0a870a2b1607dec72ba071c692b129bd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41467-022-29270-z.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "d8d18569a6ee95bff45a0e873d29707cd24e9ed2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
14505061 | pes2o/s2orc | v3-fos-license | Glaucoma Alters the Circadian Timing System
Glaucoma is a widespread ocular disease and major cause of blindness characterized by progressive, irreversible damage of the optic nerve. Although the degenerative loss of retinal ganglion cells (RGC) and visual deficits associated with glaucoma have been extensively studied, we hypothesize that glaucoma will also lead to alteration of the circadian timing system. Circadian and non-visual responses to light are mediated by a specialized subset of melanopsin expressing RGCs that provide photic input to mammalian endogenous clock in the suprachiasmatic nucleus (SCN). In order to explore the molecular, anatomical and functional consequences of glaucoma we used a rodent model of chronic ocular hypertension, a primary causal factor of the pathology. Quantitative analysis of retinal projections using sensitive anterograde tracing demonstrates a significant reduction (∼50–70%) of RGC axon terminals in all visual and non-visual structures and notably in the SCN. The capacity of glaucomatous rats to entrain to light was challenged by exposure to successive shifts of the light dark (LD) cycle associated with step-wise decreases in light intensity. Although glaucomatous rats are able to entrain their locomotor activity to the LD cycle at all light levels, they require more time to re-adjust to a shifted LD cycle and show significantly greater variability in activity onsets in comparison with normal rats. Quantitative PCR reveals the novel finding that melanopsin as well as rod and cone opsin mRNAs are significantly reduced in glaucomatous retinas. Our findings demonstrate that glaucoma impacts on all these aspects of the circadian timing system. In light of these results, the classical view of glaucoma as pathology unique to the visual system should be extended to include anatomical and functional alterations of the circadian timing system.
Introduction
The hallmark of glaucoma is the degenerative loss of retinal ganglion cells (RGCs) and their optic nerve fibers. In the absence of adequate treatment glaucoma inevitably leads to blindness and is expected to affect more than 60 million people worldwide by 2010 [1]. Although glaucoma results from multiple factors and likely comprises a family of diseases, raised chronic intraocular pressure (IOP) represents a significant risk factor [2][3][4][5][6][7]. The primary symptom of glaucoma is an initial reduction of the peripheral visual field with the degree of loss evolving proportional to RGC loss [8][9]. Previous studies in both humans and in monkey models of glaucoma initially reported that larger soma sized RGCs were primarily susceptible to injury or damage [10][11][12][13]. However, a recent study found that all classes of RGCs follow a similar time course of degeneration beginning from the early stages of the disease [14].
In the mammalian retina, the light sensitive RGCs that express the photopigment melanopsin [15][16][17][18] comprise a specific subset (,1%) of large sized ganglion cells that regulate the photic synchronisation of circadian rhythms and more generally, a spectrum of ''non-visual responses'' to light including the acute suppression of melatonin, pupillary constriction, alertness and masking [19][20][21][22][23]. These RGCs mainly project to ''non imageforming'' structures including the suprachiasmatic nucleus (SCN), intergeniculate leaflet (IGL), olivary pretectal nucleus (OPN), lateral hypothalamus and preoptic regions [16,[24][25]. Minor input has also been demonstrated to the dorsal lateral geniculate nucleus (dLGN [18]) and superior colliculus (SC [24]). Functional invalidation of melanopsin photopigment in Opn 4 2/2 mice leads to attenuated circadian [16,20,21,26], pupillary [27] and electrophysiological responses to light [28] while the absence of functional rods, cones and melanopsin results in a total inability to respond to light [21,26]. Ocular pathologies and blindness in humans are also associated with circadian and sleep disorders that depend on the degree of conserved light perception [29][30][31]. Presently, however, there is no clear support for the commonly held hypothesis of the impact of glaucoma on circadian and/or sleep disturbances and the degenerative loss of melanopsin RGCs in this pathology is controversial. One study [14] provided evidence that melanopsin RGC degeneration is proportional to that of the entire ganglion cell population, while a second report [32] claimed selective sparing of melanopsin neurons.
To explore the hypothesis that glaucoma leads to alterations in both the visual and non-visual systems we used a rat model of glaucoma with laser-induced chronically elevated IOP [33] in a strategy that combines three novel functional and behavioral approaches. We address several questions that have not previously been probed in studies of glaucoma. Based on findings that RGC somas degenerate in glaucoma, to what extent are their axonal fiber projections to different brain target structures affected? If the photic input pathway to the SCN is altered in glaucoma, does this impact on the capacity for light entrainment by the circadian timing system? How do degenerative changes of glaucoma alter photopigment expression in the inner and outer retina?
Animals
Male Wistar rats (n = 30) were housed individually in propylene cages, under a 12:12 LD cycle, with food and water ad libitum. Eye surgery for inducing experimental glaucoma was performed at 3-6 months of age. Anatomical and behavioral investigations were undertaken on animals between the ages of 6-12 months. Experiments were conducted in accordance with current national (Décret No. 87-848), international (EU guidelines) and institutional regulations for animal husbandry and surgical procedures.
Laser Technique for raised intra-ocular pressure Argon laser treatment (blue-green argon laser; Coherent, Palo Alto, CA) of the episcleral veins (responsible for aqueous outflow tissue) was used to induce chronic elevation of IOP according to previous methods [33]. Rats were anesthetized with a mixture of ketamine (50 mg/kg), acepromazine (1 mg/kg), and xylazine (25 mg/kg). Laser treatment was performed unilaterally (n = 12, for the anatomical study) or bilaterally (n = 5, for the behavioral study) in three stages on the episcleral veins within 0.5 to 0.8 mm from the limbus and on the veins in the limbus. The first two surgeries occurred one week apart and the third was performed 2 months after the second one. The amount of energy used was 1W for 0.2 seconds, delivering a total of 230 to 550 spots (50-100 mm spot size) with the laser treatments. IOP was monitored using a tonometer (Tono-Pen; Mentor, Norwell, MA). Figure 1 illustrates the progression of IOP in the 12 rats with monocular laser treatments measured from the operated and unoperated eyes. In this model, retinal ganglion cell (RGC) loss was previously demonstrated to be roughly 33% after 3 weeks of elevated IOP.
Anterograde tracing of retinal projections, CTb immunocytochemistry
Injection of Cholera toxin fluorochrome conjugates for qualitative study. Rats with monocular laser surgery (n = 4 of the 12 rats described in laser treatment) were anesthetized with a mixture of ketamine (50 mg/kg) and xylazine (25 mg/kg). Eyes were additionally anesthetized locally using topical application of oxybuprocaïne chlorhydrate. Two rats received a 0.5 mg intraocular injection (6 ml) of Cholera toxin subunit b (CTb) Alexa Fluor 594 conjugate (red fluorescence, #C-22842, Molecular Probes, CA) in the operated right eye and a 0.5 mg intraocular injection (6 ml) of CTb Alexa Fluor 488 conjugate (green fluorescence, #C-22841, Molecular Probes, CA) in the unoperated (control) left eye. The 2 other rats received the same two intraocular injections of CTb fluorochromes but with the reverse Alexa Fluor 488 conjugate in the operated right eye and Alexa Fluor 594 conjugate in the control left eye.
Injection of Cholera toxin for quantitative analysis. Rats with monocular laser surgery (right eye, n = 8 of the 12 rats described in laser treatment) and controls with no laser surgery (n = 8) were anesthetized with a mixture of ketamine (50 mg/kg) and xylazine (25 mg/kg). Eyes were additionally anesthetized locally using topical application of oxybuprocaïne chlorhydrate. Rats with monocular induced glaucoma received a 0.5 mg injection (6 ml) of CTb (#130A, List Biological Laboratories Inc) in the operated eye. Control rats with no laser surgery received the same injection in their right eye.
Treatment of Brain Sections and CTb
Immunocytochemistry. For both the CTb fluorescence and diaminobenzidine (DAB) studies, 48 hr after the injection, all animals were deeply anesthetized with a lethal intraperitoneal injection of sodium pentobarbital (150 mg/kg) and perfused intracardially with warm (37uC) heparinized saline followed by 300 ml of Zamboni's fixative at 4uC. Brains were removed, and post-fixed overnight in a mixture containing the same fixative with 30% sucrose for cryoprotection at 4uC. Serial coronal sections were made at 50 mm on a freezing microtome and all brain sections were collected. Sections from the animals injected with the CTb anterograde tracer coupled to a fluorochrome were directly mounted on slides, dehydrated and coverslipped. All sections from all animals injected with CTb were processed at the same time to obtain identical levels of tissue staining for data analysis. Endogenous peroxidase was first suppressed using a solution of 50% ethanol in saline with 0.03% H 2 O 2 . Free-floating sections were rinsed briefly in PBS (0.01 M, pH 7.2) containing 0.3% Triton and blocked with 1% bovine serum albumin. Sections were incubated in the anti-CTb primary antibody (dilution 1:3,000) for 3 days at 4uC. Immunoreactivity was visualized using a Vectastain ABC Elite kit (PK-6100, Vector Laboratories, Burlingame, CA), followed by incubation in 0.2% 3,39-DAB with 0.5% ammonium nickel sulfate and 0.003% H 2 O 2 in Tris buffer (0.05 M, pH 7.6). Sections were then mounted on slides, dehydrated and coverslipped.
Quantification of retinal projections
Retinal projections to the brain were quantified on sections processed for DAB immunocytochemistry, based on a published methodology [34] that has been applied in several previous studies of brain organization [35][36][37]. Briefly, quantitative levels of the optically dense DAB immunolabel product were measured using computer-assisted image analysis (Biocom, Les Ulis, France). Optical density of label was measured bilaterally in each structure from digitized images. Quantification of label is determined from the total integral optical density of labeling. Integral density takes into account both the surface area and pixels density. The integral optical density was obtained by first subtracting the background density value. Optical density was measured from all the sections of each structure that receives RGC fiber projections except for the superior colliculus (1 of 2 sections).
Analysis of Retinal Opsins
At the end of the behavioral study the two retinas were pooled from each of the animals (n = 5 binocular induced glaucoma rats and n = 5 control rats, age 14-16 months). Retinas were collected from the animals between ZT8-ZT9, after the animals had been re-entrained to a 100 lux LD cycle. Total RNA was extracted using GenEluteTM Mammalian Total RNA Miniprep Kit (Sigma) according to the manufacturer's instructions and subsequently subjected to DNase digestion. Total RNAs was reverse transcribed using random primers and MMLV Reverse Transcriptase (Invitrogen). Real-time PCR was then performed on a Light-Cycler TM system (Roche Diagnostics) using the light Cycler-DNA Master SYBR Green I mix. The efficiency and the specificity of the amplification were controlled by generating standard curves and carrying out melting curves and agarose gels of the amplicons respectively. Relative transcript levels of each gene were calculated using the second derivative maximum values from the linear regression of cycle number versus log concentration of the amplified gene. Amplification of the non cyclic control gene 36B4 was used for normalization. Each reaction was performed in duplicate. Primer sequences were the following:
Entrainment of circadian locomotor activity rhythms
For monitoring locomotor activity, rats operated bilaterally to induce experimental glaucoma were used. A total of 10 rats (n = 5 with binocular raised IOP, n = 5 controls) were housed individually in cages equipped with passive infrared motion captors placed over the cages and a computerized data acquisition system (CAMS, Circadian Activity Monitoring System, INSERM, France). Rats were initially maintained for 26 days under a 12:12 LD cycle with broad-band white light (100 lux). Animals subsequently underwent a 6 hr phase delay of the LD cycle, associated with successive decreases of light intensity (from 100 to 10 lux and subsequently from 10 to 1 lux).
Activity records were analyzed with the Clocklab software package (Actimetrics, Evanston, IL). The time of locomotor activity onsets was determined using the Clocklab onset fit algorithm. Animals were considered to be entrained when the fit of the least squares regression line of the activity onsets was stable in relation to lights-off for at least 7 days of in LD. The phase angle, defined as the time difference between the onset of the activity rhythm and time of lights-off was determined for each animal (see [38]).
Statistical Analysis
Data were analyzed using the SigmaStat software (Systat Software Inc., Point Richmond, CA). The Shapiro-Wilk W test was used to test for normality. For comparisons between two groups, parametric t-test was used when data were normally distributed and non-parametric Mann-Whitney test when normality was not achieved. Data is expressed as mean6S.E.M.
Consequences of chronic elevated IOP on retinal projections
All operated eyes of animals treated with laser surgery displayed a consistent chronic increase in IOP (Fig 1). To assess the consequences of chronic hypertension on injury to RGC axonal projections to the brain, intraocular injection of two different fluorescent anterograde tracers were made in the same individual: a green fluorescent tracer in the laser-operated glaucomatous eye and a red fluorescent tracer in the opposite intact eye. Using this coupled, double fluorescence technique, the patterns of retinal projections from the normal and glaucomatous eye can be examined on the same brain section by simply changing the fluorescence excitation filters. Comparison of the retinal fiber innervations (Fig. 2) shows a marked reduction in the density and topographical coverage of the retinal fibers emanating from the glaucomatous eye to all structures examined (SCN, dLGN, SC; see below for IGL ventral lateral geniculate nucleus: vLGN, pretectum: PRT). The loss of retinal innervation from the glaucomatous eye is particularly evident for the SCN where few retinal fibers are present. The dLGN is also affected, with an absence of label in the ventrolateral portion of the contralateral nucleus and a noticeable reduction of the ipsilateral component of the projection. The SC shows a complete loss of fibers in the lateral region. This lateral part of the SC and the ventrolateral part of the dLGN correspond to the temporal and dorsal parts of the visual field [39][40]. Although there was variability between individuals in the topographical pattern and degree of reduction of RGC fiber innervation, all 4 cases examined demonstrated a uniform decrease in innervation of the SCN coupled with a patchy distribution or empty regions in the dLGN and SC.
Quantification of retinal projections to visual and nonvisual structures
While the double fluorescent tracing method allows a direct qualitative assessment of the topographical distributions of RGC fibers from glaucomatous and control eyes, the use of different fluorescent filters precludes a precise quantitative comparison. For this reason we used optical density analysis of digitalized images from brain sections that were processed for DAB immunocytochemistry [34,36]. All control rats (8/8) Chronically elevated IOP alters these topographical distributions and causes a reduction of the RGC projections in brain structures, but these features vary depending on the individual case. Some laser operated animals showed a uniform reduction of retinal projections in all structures, although the fiber distribution appeared almost normal within the nuclei (Fig. 3B). Other animals displayed a patchy reduction of the density of retinal projections in specific sub-regions of the different brain structures (Fig 3C). In this case, the pattern of the projections onto the contralateral SC suggested that the retinal quadrants most affected were located ventral and lateral to the optic disc. Finally, other individuals with laser-induced glaucoma had severe reductions of retinal projections to all structures, although some sparse patches were observed in the contralateral SC (Fig. 3D). Based on the fiber pattern distribution in the contralateral SC, two of these rats conserved sparse projections from the dorso-temporal region of the retina and the other rat from dispersed retinal regions.
The quantitative analysis of the mean densities of retinal projections to different brain structures for operated and control groups and the relative percent reduction (compared to the average values for the controls) are illustrated in Figure 4A-B. Two operated animals omitted from the analysis due to an unacceptable number of missing sections showed qualitative are taken from the same sections but using different excitation filters. For each of the structures illustrated, the brain hemisphere ipsilateral to the injected eye is to the left and contralateral to the injection is to the right. Thus, the ipsilateral dLGN and SC (red: control) correspond to the same hemisphere as the contralateral dLGN and SC (green: operated). Note that projections to the SCN and SC (montage of 3 photomicrographs) are markedly reduced and that the topographical distribution of the projection in the dLGN is also considerably altered. Part of the material shown is modified from a previous review [69]. doi:10.1371/journal.pone.0003931.g002 alterations similar to the other individuals. Despite the high degree of variability in the glaucomatous rats, the reduction in retinal fiber density in different structures ranged from 49.7612.6 % (vLGN) to 71.769.4% (SCN) and was significant for each structure (Mann Whitney; p,0.05) and for the total density summed for all the structures (60.166.9%, Mann-Whitney, p,0.05; Fig 4B).
Light entrainment of locomotor activity is altered in glaucoma
We then assayed the ability of rats with binocular induced glaucoma to entrain their daily locomotor activity to successive 6 hr delays of LD cycles each coupled with 1 log unit decreases in light levels (100, 10 and 1 lux). Measures of IOP in both glaucomatous eyes showed increases comparable to those recorded for the monocularly operated animals, ranging from 16.161.14 mm Hg before surgery to 31.061.90 mm Hg following the surgical procedures. Locomotor activity rhythms (double plotted actograms) are shown for a control and operated rat with binocular experimental glaucoma (Fig. 5A). Both control and operated rats were capable of entrainment to the LD cycles at all light levels. However, the animals with binocularly induced experimental glaucoma displayed a behavioral pattern that was characterized by a delay to entrain to the new LD cycle and a high degree of variability for activity onsets in relation to the beginning of the dark phase. This is illustrated more clearly in the group analysis in Figure 5B where the phase angles of individual activity onsets are shown. The results indicate that the glaucomatous rats require on the average more time to entrain to the new LD cycle but this was only significant when the light level was lowered to 1 lux (control = 5.860.86 days, operated = 9.861.50 days, t-test, p,0.05; Fig. 5C). The histograms in Figure 5C also show that in comparison to controls, rats with binocular elevated IOP are unable to precisely synchronize their activity to the LD cycle at all light levels, expressed as a significantly greater variability in activity onsets with respect to the beginning of the dark phase (100 lux: control = 0.1760.02 hrs; operated = 0.3660.08 hrs, t-test, p,0.05; 10 lux: control = 0.2360.03 hrs; operated = 0.466 0.10 hrs, t-test, p,0.05; 1 lux: control 0.3560.02 hrs; operated = 0.6460.06 hrs t-test, p,0.005).
At the end of the experiment, rats were released in constant darkness to assess whether the animals were entrained to the previous LD cycle at 1 lux light level or if there was a masking component (Fig. 5A-B). Both the control and operated rats were entrained prior to release in constant darkness since the onset of free-running activity extrapolated back to the first day in darkness started to derive from the previous onset point. There was no difference between the two groups in their endogenous free-running periods (control 24.0960.09 hrs; operated = 24.1760.71 hrs; Mann-Whitney, p = 0.421) or in the variability of activity onsets (t-test: p = 0.533). This suggests that the onset variability of operated animals in LD is related to an alteration of light input in the glaucomatous rats rather than of circadian clock function itself.
Alteration of retinal opsin mRNAs and ganglion cell markers in glaucoma
Finally, real time quantitative PCR was used to evaluate the expression of different retinal opsins as well as the expression of a specific ganglion cell marker (Thy1) and pituitary adenylate cyclase peptide (PACAP) a neurotransmitter co-expressed in melanopsin RGCs (Fig. 6). As expected, mRNA of melanopsin and Thy1 are significantly decreased in eyes with raised IOP (ttest, p,0.01). In contrast, PACAP mRNA expression remains unchanged (t-test, p = 0.831). However, an unexpected finding was that all opsin mRNAs from the outer retina (SW, MW opsins and
Discussion
Previous studies of animal models of experimental glaucoma secondary to chronically elevated IOP have demonstrated significant RGC loss ranging from 30% to 90% depending on the method employed, time course and experimental model [14,33,[41][42][43]. The assay of RGC degeneration in the retina is a gold standard for assessing the consequences of experimental glaucoma, but in the present study we focused on the loss of central RGC fiber projections in order to gain a comprehensive view of the quantitative and qualitative alterations in the target structures that process visual and non-visual information in the brain. To our knowledge, only two (contradictory) studies have examined the consequences of ocular hypertension on the alteration of RGC fiber projections. One HRP-tracing study in the rat SC [44] claimed a complete lack of anterograde transport, while the second study found 20-90% losses of retinal projections in the magno-and parvocellular layers of the monkey dLGN (silver grain counts; [45]).
Anterograde tracing in the hypertensive rat model indicates that experimental glaucoma induces an overall reduction of the retinal projections to the brain (,60%) while the same analysis following severe degeneration of outer retinal photoreceptors reveals no effects [37]. The reduction is particularly significant for the SCN, the main target of an almost exclusive innervation from melanopsin RGCs [26] that shows a mean reduction of ,71%. The dLGN; IGL and PRT show mean reductions ranging from ,60-65% while the vLGN and SC show a reduction of roughly 50%. This suggests that melanopsin RGCs are equally susceptible to degeneration as other RGCs in raised IOP, consistent with the 65-80% reduction of melanopsin RGCs in the study by Jakobs et al. [14]. These two results contrast with a third report describing a resistance of the melanopsin cell population to IOP induced injury [32]. This discrepancy may arise from the fact that the latter study used a small sample size, short survival periods (,14 days) and different methods to quantify the reduction in melanopsin RGCs and in the total RGC population.
A consistent feature of the reduction of RGC projections, despite the reliable increase of IOP induced by the laser surgery, was the variability observed between individual animals in both the extent and the topographical pattern of loss of RGC fibers. The variation of RGC, optic nerve fiber and visual field loss following increased IOP is a characteristic feature in both experimental and human glaucoma [4]. For example, in mouse models of hereditary increased IOP, 28% of the mice show little or no indication of Figure 6. Quantification of mRNA expression of retinal opsins, Thy1 and PACAP. Short wavelength cone (SW), mid-wavelength cone (MW) opsins, rhodopsin and melanopsin mRNA expression are all significantly reduced in experimental glaucoma. The specific RGC marker Thy1 was also significantly reduced whereas PACAP expression is unchanged (t-test, * p,0.05; ** p,0.005.) Both of the retinas from each of the binocularly operated (n = 5) and control rats (n = 5) were used. doi:10.1371/journal.pone.0003931.g006 Figure 5. Representative double plot actograms and phase angle plots for a control rat and a rat with experimentally induced binocular glaucoma. Animals are first entrained under a 12:12 light:dark (LD) cycle at 100 lux light (actograms shown in A). After 26 days, the LD cycle was shifted 6 hrs (delay) and the light level was decreased to 10 lux. After 35 days, the light LD cycle was again shifted by 6 hrs and the light level decreased to 1 lux (45 days). Animals were then released into constant darkness to assess whether the animals were entrained to the previous LD cycle. The three successive 12L:12D light cycles (from 100-10-1 lux) are shown above the actograms and the days corresponding to the lux levels of the light phase are indicated on the right. The black bars indicate constant darkness. Although both groups of animals entrained to each of the shifted LD cycles, glaucomatous rats show a greater variability in locomotor activity onsets with respect to the beginning of the dark phase. Some glaucomatous rats also show variability in activity offsets at lights on and components of activity drift during the dark phase. This is illustrated in B for the phase angle plots of the activity onsets of individual control rats (n = 5, left) and rats with binocular glaucoma (n = 5, right). (C) Quantification of the number of days necessary to entrain to a new light-dark cycle (left) and quantification of the activity onset variability with respect to the beginning of the dark phase (right) for both groups. Onset variability was calculated from the last 15 days of each LD cycle, when all animals displayed stable entrainment. (t-test, * p,0.05; ** p, = 0.005). doi:10.1371/journal.pone.0003931.g005 glaucomatous damage while 66% showed severe damage, including differences between the left and right eyes in a single individual [14]. Furthermore, the topographical pattern of degeneration across the retinal surface can show considerable variation [14,46]. The nature of the mechanistic link between high intraocular pressure and loss of retinal ganglion cells is still not fully understood and although raised IOP is clearly an important risk factor, patients with ocular hypertension do not invariably progress to clinical glaucoma and RGC degeneration, even over long-term periods [2,4,[47][48][49].
The reduction in mRNA of melanopsin and of the ganglion cell marker Thy1 also confirms the overall decreases of the RGC population and of melanopsin RGCs. In contrast, PACAP, which is co-expressed in melanopsin RGCs [50] remains unchanged. However, it is unclear whether PACAP is also expressed in other retinal neurons and a decrease in melanopsin mRNA without a concomitant decrease in PACAP is observed in rats treated with N-methyl-N-nitrosourea (MNU), a pharmacological agent causing photoreceptor apoptosis [51].
An unexpected but significant finding is that mRNA opsin expression from outer retinal photoreceptors (MW cones, SW cones, rods) were all found to be under expressed. Although the question of whether other retinal cell types are altered in glaucoma is still a matter of debate, Jakobs et al. [14] using well-characterized cell markers for specific amacrine and bipolar cells coupled with morphological analysis of soma and dendritic architecture argue that glaucoma affects exclusively the RGCs. Most early histopathological studies reported no photoreceptor loss in human eyes with primary open angle glaucoma [52] or in a monkey model of glaucoma [53]. More recent data describes minor abnormalities without cell loss in the outer retina, including swollen photoreceptors in both the human disease and in the monkey [54]. Furthermore, multifocal ERG studies have shown that the outer retina is functionally affected in experimental glaucoma [55][56]. Finally, a recent investigation [57] using in situ hybridization showed a reduction in the expression of MW/LW and SW cone opsin mRNAs in monkeys with chronic ocular hypertension, consistent with our results. In contrast no changes in the rod opsin mRNA level were observed. Taken together, these data suggest that although the outer retina is not anatomically altered in experimental or human glaucoma, cone and rod opsin mRNAs are under expressed and may be functionally impaired. The reduced level of mRNA opsins may result directly from the chronic increase in IOP or, alternatively, may be an indirect effect related to the partial loss of melanopsin RGCs and a subsequent disruption of the circadian regulation of retinal physiology and outer retinal photoreceptor processes [58]. If the retinal clock is deregulated, this may have potentially important consequences on gene cycling, photopigment regeneration and retinal function [59] in severe glaucomas.
The reduction of melanopsin RGCs and their innervation of the SCN impacts on the ability of glaucomatous rats to entrain to light. We used an entrainment paradigm since this assay is reported to be more sensitive for detecting entrainment deficits compared to single light-pulse phase shifts in both animals [60] and humans [61] and is more relevant to real-life conditions that human patients are exposed to. Rats with binocular hypertension require more time to re-entrain to a shifted LD cycle at low light levels compared to control rats and display greater variability in the activity onsets. Mice invalidated for melanopsin show a deficit in their ability to entrain at low light levels, to phase shift to light [20,21,26,38], have impaired masking responses to light [22] and show severely reduced photic responsiveness in the SCN [28]. However, the total loss of melanopsin photopigment in Opn 4 2/2 mice is not equivalent to the situation in glaucomatous rats (or human glaucoma), where a variable proportion of melanopsin RGCs (and their rod/cone inputs) are absent. Our results are similar to the anatomical and behavioral alterations recently reported using a targeted saporin-based immunotoxic technique that results in a partial ablation of melanopsin RGCs [62].
In humans, it has been reported that patients with different degrees of blindness resulting from various ocular pathologies show sleep disturbances and abnormal circadian rhythms [29][30][31]. These disturbances were attributed to a lack of light input to the circadian clock and are correlated with the degree of loss of light perception. Patients with the lowest level of conscious light perception show the greatest degree of sleep impairments. In the study by Tabandeh et al. [31] the patient group with optic nerve disease/glaucoma showed a higher likelihood of sleep disorders, although this group contained individuals with light perception and others with no light perception. Glaucoma patients have been shown to exhibit relative afferent pupillary defects in early stages [63][64][65] and a prevalence of sleep disorders in later stages [66][67]. In our laboratory, preliminary studies of patients with severe open angle bilateral glaucoma show a high variability in both the normal time to bed and in the dim-light melatonin onset [68] which we speculate represent the main manifestations of circadian disorders in this group.
Here, we provide evidence that chronic increase IOP in a rodent model of glaucoma leads to a decrease in the melanopsin RGC axonal innervation of the SCN and an alteration in the photic response to light. The alteration of entrainment of locomotor activity in this model suggests that patients with severe bilateral glaucoma may show an increased propensity for chronobiological disturbances. Our data and that of previous studies support the idea that RGC loss due to glaucoma affects both visual and non-visual functions. Concerns for health and quality of life for patients with glaucoma should thus not be limited to the detection and prevention of visual impairments but should also consider the potential impacts on altered circadian entrainment and sleep disturbances. | 2014-10-01T00:00:00.000Z | 2008-12-12T00:00:00.000 | {
"year": 2008,
"sha1": "5572deff8df5db42fc29eef014a22701bf64567b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0003931&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "171747c87a39e2641283d4384527c453e1314f1e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
248085847 | pes2o/s2orc | v3-fos-license | Gravitational Waves from Domain Walls in Pulsar Timing Array Datasets
We present a model-independent search for the gravitational wave background from cosmic domain walls (DWs) in the NANOGrav 12.5 years dataset and International PTA Data Release 2. DWs that annihilate at temperatures $\sim 20-50~\text{MeV}$ with tensions $\sim (40-100~\text{TeV})^3$ provide as good a fit to both datasets as the astrophysical background from supermassive black hole mergers. DWs may decay into the Standard Model (SM) or a dark sector. In the latter case we predict an abundance $\Delta N_{\text{eff}}$ of dark radiation well within the reach of upcoming CMB surveys. Complementary signatures at colliders and laboratories can arise if couplings to the SM are present. As an example, we discuss heavy axion scenarios, where DW annihilation may interestingly be induced by QCD confinement.
To encompass such a variety of possible origins, in this work we perform model independent searches, while also obtaining specific results for Z N symmetries embedded in U (1) global transformations [45,52], of the type arising e.g. from axions [18]. Our work presents important novelties compared to previous searches for primordial sources [11][12][13][14][15]: (a) we perform model-independent analyses in both the NG12 and IPTADR2 datasets; (b) we properly account for cosmological constraints and include the temperature dependence of the number of relativistic degrees of freedom in the plasma (according to [53]); (c) we discuss a particle physics realization with signatures at colliders and other experiments. Additionally, compared to [11,15], we include the GWB from SMBHBs in the search for the DW signal.
Gravitational waves from DWs
In the absence of significant interactions with the surrounding plasma, a generic DW network that forms after inflation quickly reaches a scaling regime with energy density [18] ρ DW = c σH, (2.1) and with O(1) walls of size H −1 moving at relativistic speed, where H is the Hubble rate, c ∼ O(few) is a model-dependent prefactor and σ is the DW surface energy density, or tension. In scalar field models, the DW width is of the order of the inverse mass m of the constituent field. According to (2.1), DWs tend to rapidly overclose the Universe [19]. This can however be avoided if the network starts annihilating at some time t [45,52,54,55]. Assuming radiation domination, i.e. H(T ) = π 2 g * /90 T 2 /M p with M p ≡ (8πG) −1/2 , the fraction of the total energy density in DWs at t is where (g s * (T )) g * (T ) is the number of (entropic) relativistic degrees of freedom (we approximate g * ,s = g * ) at the annihilation temperature T . The temperature normalization roughly corresponds to the region preferred by the data (see below). The normalization for σ 1/3 then corresponds to an upper limit on the DW tension for annihilation around this temperature.
The network has a large time-varying quadrupole that efficiently radiates GWs [54,[56][57][58][59], with a fraction ρ GW /(3H 2 M 2 p ) ∼ 3/(32π) α 2 of the total energy density that is maximal at T . Most GWs are emitted at a frequency f p H, the inverse length of the walls in the scaling regime, that redshifted to today corresponds to: The relic abundance today, Ω GW,DW (f ) ≡ dρ GW /d log(f )/(3H 2 0 M 2 p ), can then be expressed as where˜ 0.1 − 1 is an efficiency parameter to be extracted from numerical simulations [59] and the function S(x) describes the spectral shape of the signal. A useful parametrization (see e.g. [60]) is: where γ, β capture the spectral slopes at f f 0 p and f f 0 p respectively, and δ the width around the maximum. While γ = 3 because of causality (e.g. [60]), numerical analyses are needed to determine δ and β. Most recent simulations [59] find δ, β 1, although results for Z N hybrid string-wall networks suggest that β decreases with increasing N [61]. The spectrum is cut off at frequencies larger than the (redshifted) inverse of the wall width ∼ m. We stress that the above estimates only account for emission during the scaling regime. The subsequent annihilation of the network may further source GWs if sufficiently violent, but we neglect such contributions here since they are model-dependent and have not yet been numerically investigated. 1 Our discussion has so far been independent of the specific DW annihilation mechanism, and so will be most of the results presented in this work. Additionally, we will consider a well-studied annihilation mechanism in more detail: explicit symmetry breaking by a tiny (vacuum) energy density gap ∆V between vacua [55,64] (see e.g. [65] for other possibilities). This results in a pressure p ∼ ∆V . The annihilation temperature can then be estimated using ∆V ρ DW : For consistency with the GW estimates above, we neglect the contribution of ∆V to the energy density in the DW network, which is at most comparable to ρ DW . We thus see that the typical scale of the energy gap suggested by the data is 10 MeV. Overall, the GW signal from DWs depends only on T and α . Given the current uncertainties in the determination of δ, β from simulations, we also consider slight deviations from their reference unit value, thereby allowing for a total of four parameters in our analysis of GWs from DWs. In models with a gap, it is useful to replace T and α with the DW tension σ and ∆V , by means of (2.2), (2.6).
Cosmology
Most energy density from DW annihilation is typically released in mildly (or non-) relativistic quanta of the wall constituents [66]. When these are stable, they dilute as matter and rapidly dominate over the radiation background, since the DWs of interest make up a significant fraction of the total energy density at T , thereby spoiling Big Bang Nucleosynthesis (BBN) and Cosmic Microwave Background (CMB) anisotropies. Thus, they must decay to dark radiation or to SM particles. We consider such decays to be efficient at T , which leads to the weakest constraints on the DW interpretation of PTA data.
We first discuss the scenario of decay into dark radiation (henceforth, the DR scenario). The abundance of DR is commonly expressed as the effective number of neutrino species Simons Observatory Planck18+BAO Decay to Dark Radiation IPTADR2 NG12 Decay to Standard Model ∆N eff ≡ ρ DR /ρ ν (ρ ν being the energy density of a single neutrino species). The constraint from BBN (CMB+BAO) reads ∆N eff ≤ 0.39 [67] (0.29 [68]) at 95% C.L. Setting ρ DR ρ DW at T , gives ∆N eff 13.6 g * (T ) −1/3 α , (3.1) thus the bounds above translate to α 0.06 (0.05) × 10.75/g * (T ). We do not consider the case of DW decay after e + e − annihilation (i.e. T 500 keV), nor the case of DW constituents diluting as matter and decaying after T , as both cases would lead to a larger ∆N eff .
On the other hand, when decay occurs to SM particles (henceforth, the SM scenario), BBN imposes T 2.7 MeV for any relevant value of α [70,71]. We also cautiously impose α ≤ 0.3 to avoid deviations from radiation domination, which require dedicated numerical studies. This also ensures that the GWs emitted from DWs respect the aforementioned DR bound, since ∆N eff, gw 0.2 α 2 (g * (T )/10.75) −1/3 , see (2.4).
Data Analysis
GW searches at PTAs are performed in terms of the timing-residual cross-power spectral den- where h c (f ) 1.26 · 10 −18 (Hz/f ) h 2 Ω GW (see e.g. [72]) is the characteristic strain spectrum and Γ ab contains correlation coefficients between pulsars a and b in a given PTA. We performed Bayesian analyses using the codes enterprise [73] and enterprise extensions [74], in which we implemented the DW signal (2.4),(2.3),(2.5), and PTMCMC [75] to obtain MonteCarlo samples. We derive posterior distributions using GetDist [76]. We include white, red and dispersion measures noise parameters following the choices of the NG12 [4] and IPTADR2 [7] searches for a common spectrum. Furthermore, we limit the stochastic GW search to the lowest 5 and 13 frequencies of the NG12 and IPTADR2 datasets respectively to avoid pulsar-intrinsic excess noise at high frequencies, as in [4,7]. We fix˜ = 0.7 according to [59] and discuss different choices below. Further details and prior choices are reported in Appendix A.
Posterior distributions are shown in Fig. 1. In both scenarios, NG12 is well fitted by the high frequency tail of the spectrum, i.e. by a simple power law (β = 1 or γ = 6 in the notation of [4]). On the other hand, IPTADR2 [7] prefers the region of the spectrum around the peak. We find almost flat posteriors for β and δ, see Appendix A.
For the DR scenario, Fig. 1 (left), a significant portion of the parameter space is constrained by the BBN prior. We find ∆N eff ≥ 0.26 (0.15) at 95% C.L. from IPTADR2 (NG12). These values are close to the current bound from Planck18+BAO (dashed line, 2σ) and well within the reach of the upcoming Simons Observatory [69] (dotted line, 2σ). However, note that CMB bounds only apply if the decay products remain relativistic until recombination. We also find T ∈ [23, 93] (≤ 51) MeV at 95% C.L. from IPTADR2 (NG12).
Next, we search for GWs from DWs in the presence of a stochastic background from SMBHBs, whose strain we take to be given by the simple power law h c (f ) = A GWB (f /yr −1 ) −2/3 (see e.g. [10]), assuming the SM scenario. 2 The 2D posterior distribution of α and A GWB are reported in Fig. 2 (left panel). In particular, the central values of A GWB agree with [4,7], and we find broader posteriors due to our additional background from DWs. When the PTA excess is mostly modeled by SMBHBs, the DW parameter α is only limited by our priors and can be large when the peak of the spectrum from DWs is located out of the PTA sensitivity band. We compare models using the Bayes factors log 10 B i,j of model j with respect to model i. For NG12, we find: log 10 B SMBHBs, DW 0.16, log 10 B DW, DW+SMBHBs 0.07. For IPTADR2, we find: log 10 B DW, SMBHBs 0.48, log 10 B DW, DW+SMBHBs 0.38. Thus we find no substantial evidence for one model against any other one in the datasets.
The maximum likelihood GW spectra from DWs (SM scenario), and for comparison from SMBHBs (as obtained in our DW+SMBHBs analysis), are shown in Fig. 2 (right panel). Spectra with δ, β = 1 are also displayed, to show the minor effect of these parameters on the quality of the fit.
We then specify our analysis of the SM scenario to the case of network annihilation due to a gap ∆V , by sampling the tension σ ∈ [10 10 , 10 18 ] GeV 3 rather than α and deriving posteriors of ∆V 1/4 using (2.6). We restrict our analysis to GWs from DWs only (our results will thus show the values of DW parameters which can provide a good interpretation of the data, in alternative to the SMBHBs GWB). We take c = 2.2 (obtained from stringwall networks with N = 3 [66]). Fig. 3 shows that both datasets are well modeled when σ (40 − 100 TeV) 3
Particle Physics Interpretations and Discussion
Now that we have identified the properties of the DW networks that provide a good modelling of the data, let us briefly discuss interesting microphysical origins and other potential observable signatures of such DWs. We focus here on scenarios with DW annihilation induced by a gap ∆V . Intriguingly, the preferred values of ∆V and of the DW tension σ shown in Fig. 3 fall in the ballpark of two particularly interesting energy scales.
Decay to Standard Model˜ A realization of the latter idea may consist of a heavy axion field a with Z N symmetry, N > 1, and decay constant F a , coupled to the topological term of a confining dark gauge sector H. Upon H-confinement at some scale Λ H Λ QCD , the Z N symmetry is spontaneously broken and a hybrid string-wall network forms with DW tension σ 8m a F 2 a , where m a Λ 2 H /F a is the axion mass. If a also couples to QCD, it receives an additional potential around the QCD PT with size set by the topological susceptibility ∆V 1/4 QCD 75 MeV [81]. This can induce annihilation when its periodicity differs from that of the H-induced potential. While a detailed exploration of this scenario is beyond the aim of this work (see however Appendix B), note that solving the strong CP problem requires either a fine alignment between the potentials induced by H and QCD, which might be challenging (see e.g. [82][83][84][85][86] for recent work), or a second axion that couples only (mostly) to QCD (see e.g. [79,80,87]). Alternatively, annihilation may occur due to (and/or in) a dark sector, see Appendix B.
We present the region of the m a − F a parameter space for which a heavy axion can model PTA data in Fig. 4, assuming decay to SM particles. Degeneracy between parameters can be clearly observed, as the GW signal only depends on σ and T . We also show existing constraints and future detection prospects from collider, astrophysics and laboratory experiments. Remarkably, for m a ∼ 100 MeV − 20 GeV and F a ∼ 10 5 − 2 · 10 7 GeV, a heavy axion can be discovered at DUNE ND [90] and/or MATHUSLA [91] and/or HL-LHC [84], while also fitting current PTA data.
Additional observable signatures and constraints may arise from the dark confining sector, with a scale Λ H ∼ 1 − 50 TeV from Fig. 4 (e.g. GWs in the LISA range [92] if H undergoes a first order PT [93], the presence of a dark matter candidate [94], or signatures at colliders [95]).
We also note that collapsing structures during DW annihilation might form primordial black holes (PBHs) [96,97], whose masses depend substantially on the annihilation temperature, giving M PBH ∼ ∆V H −3 ∼ O(10 − 10 4 ) M . Intriguingly, this encompasses the LIGO BH mass range. A dedicated numerical study is however required to assess the PBH abundance.
Finally, PTAs are expected to settle whether the currently observed common-process spectrum is due to GWs in the near future. Shall this be the case, obtaining the detailed spectral shape of the GW signal from DWs, including the annihilation phase, will be crucial to distinguish it from other candidate sources. Alternatively, our work can be used to constrain scenarios with spontaneously broken discrete symmetries.
Acknowledgments
We thank Joachim Kopp for help with CERN LXPLUS cluster, on which the Bayesian analyses presented in this work were performed, and Marianna Annunziatella for suggestions on matplotlib [98], which was used for plots. The enterprise code used in this work makes use of libstempo [99] and PINT [100,101]. The chain files to produce the free power spectrum violins in Fig. 2 were downloaded from the publicly available IPTA DR2 and NG12 data releases. This work is partly supported by the grants PID2019-108122GBC32, "Unit of Excel-
A Numerical Strategy
Here we provide further details of our numerical analysis. For the noise analyses, we followed closely the strategies outlined by the NG and IPTA collaborations in their searches for a common spectrum signal in [4] and [7], respectively. We use the datasets released in [102] for NG12 and in [103] for IPTADR2 (Version B, we use par files with TDB units).
In particular, for both datasets we consider three types of white noise parameters per backend/receiver (per pulsar): EFAC (E k ), EQUAD (Q k [s]) and ECORR (J k [s]). The latter is only included for pulsars in the NG12 dataset and for NG 9 years pulsars in the IPTADR2 dataset. Additionally, we included two power-law red noise parameters per pulsar in both datasets: the amplitude at the reference frequency of yr −1 , A red , and the spectral index γ red . For the IPTA DR2 dataset, we additionally included power-law dispersion measures (DM) errors (see e.g. [7]) (in the single pulsar analysis of PSR J1713+0747 we also included a DM exponential dip parameter following [7]).
In our searches for a GWB, we fixed white noise parameters according to their maximum likelihood a posteriori values from single pulsar analyses (without GWB parameters). In practice, for the NG12 dataset (45 pulsars with more than 3 years of observation time), we used the publicly released white noise dictionary [102]. For IPTADR2, on the other hand, we built our own dictionary by performing single pulsar analyses for each pulsar with more than 3 years of observation time (we only included those in our search for a GWB, as in [7], for a total of 53 pulsars). We used the Jet Propulsion Laboratory solar-system ephemeris DE438, as well as the TT reference timescale BIPM18, published by the International Bureau of Weights and Measures.
The choice of priors for the noise parameters in our analyses are reported in Table 1, together with the priors for parameters of the GWB from DW annihilation and from SMBHBs. With this strategy and priors for noise parameters, we are able to reproduce the results of [4] and [7] for a common-spectrum red-noise process with excellent agreement. We obtain more than 10 6 samples per each analysis presented in this work and discard 25% of each chain as burn-in. Following the strategy of [4,7], we use only auto-correlation terms in the Overlap Reduction Function (ORF) in our search, rather than the full Hellings-Downs ORF, to reduce the computational time. We checked in specific cases that this has a minor impact on posterior distributions with the NG12 and IPTADR2 datasets, in agreement with the findings of [4,7]. As described in the main text, we consider two separate cases in our search for GWs from DWs. If DW constituents decay to dark radiation (DR), we express the GW signal in terms of logarithmically sampled parameters ∆N eff and T , with relevant priors set according to BBN constraints [67] and electron-positron annihilation respectively (the lower and upper bounds on ∆N eff and T , respectively, are not important).
If constituents decay to SM particles, we express the GW signal in terms of logarithmically sampled parameters α and T , with relevant priors set according to deviation from the radiation dominated background and BBN respectively (the lower and upper bounds on α and T , respectively, are again not important). In this case, we also perform a separate analysis expressing the signal in terms of the wall tension σ and T . The upper boundary of the prior on σ is imposed such that there are no deviations from the radiation dominated background. We then obtain posteriors on the derived parameter ∆V 1/4 using (2.6) and c = 2.2 (and also c = 4.5 for string-wall networks with N = 6 [66]) and fixing g * (T ) = 15. A different choice for g * (T ) in the range given by lattice calculations does not significantly affects the results, given the very mild dependence of ∆V 1/4 on g * (T ).
In all cases, we vary the spectral shape parameters β and δ, with priors as in Table 1. For the SM scenario, we also obtain results with the standard choices δ = β = 1, to check the minor effect of these parameters on the quality of the fit. Priors for the heavy axion analysis are described in the Appendix below.
Further 1-and 2-dimensional posterior distributions for the DW annihilation and SMB-HBs parameters are reported in Figures 5 and 7. In particular, we observe broad posteriors for the spectral shape parameters δ and β, with IPTADR2 very mildly preferring a broader spectral peak. The reference values δ, β = 1 are both in the 1σ region of the posteriors in all cases. The effect of fixing these parameters to their unit value is also shown in Fig. 6, for the SM scenario.
Mean ±2σ errors, or upper/lower 95% C.L. bounds, are reported in Table 2 for selected DW parameters. [66] with N = 3 and N = 6 respectively). In both analyses, decay to SM particles has been assumed in imposing priors, see Tables 1 and 3 Size of misaligned potential -derived parameter, one for PTA Table 3. GWB parameters used in our analysis of GWs from heavy axion DW annihilation, together with their prior ranges.
B The Heavy Axion
Here we provide more details on the heavy axion origin of DWs. We consider a global U (1) symmetry, spontaneously broken in the early Universe after reheating. To fix ideas, this can arise from a complex scalar field Φ with potential V (Φ) = λ(|Φ| 2 − v 2 /2) 2 . The axion field a is the resulting Goldstone boson with a periodicity 2πv. At this stage, topological defects, known as cosmic strings and mostly made of the massive radial mode, appear. In the presence of a coupling to a confining dark gauge sector H, e.g. a SU (n) gauge theory with no massless fermions, a receives a periodic potential of the form: where m a κ H Λ 2 H /F a and F a ≡ v/N is the axion decay constant. N is an integer that depends on the matter content of the dark sector that is charged under the U (1) symmetry. We included here the factor κ H 1 that can arise e.g. in case that this sector includes a light fermion of mass m q , giving κ H ∼ m q /Λ H , see [86] (in the main text we set κ H 1 to estimate Λ H from Fig. 4). We focus on the case N > 1 (which arises e.g. when there is more than one vector-like fermion pair charged under the U (1)), that leads to a residual Z N symmetry. The latter is spontaneously broken at temperatures around H confinement, and a long-lived network of DWs attached to the previously formed strings, with N walls attached to each string (see [18] for a review), is formed. The network rapidly enters the scaling regime in the absence of thermal friction.
We consider two possibilities to induce DW annihilation (see [86] for more details). First, the existence of another confining sector at a scale Λ c Λ H . Second, the presence of high scale U (1)-breaking effects which manifest themselves either via higher-dimensional operators in Φ or via direct non-perturbative contributions to the axion potential (see e.g. [104,105]). In both cases, the axion potential is corrected by a term of the form where M is an integer. In the first case, µ b √ κ c Λ c where κ c m c /Λ c is again the mass of the lightest state below Λ c in the additional confining sector. In the second case, µ b c 1/4 n (N F a /Λ) n/4 Λ for operators of dimension n with coefficients c n and suppressed by a high scale Λ, or µ b e −S/4 Λ for non-perturbative contributions (see e.g. discussion in [86]). The phase δ is a generic misalignment with respect to H. When M = 1 or is co-prime with N , V b lifts the degeneracy of the N minima. When µ 4 b m 2 a F 2 a , the energy difference between two neighboring minima is estimated as and the temperature T can be determined by means of (2.6). In our numerical search, we express the GW relic abundance in (2.4) and frequency in (2.3) in terms of three parameters (T , F a , m a ) in order to perform a comparison with other searches and experiments. We report priors for those parameters in Table 3. The lower boundaries of the prior ranges for T and m a are chosen according to BBN constraints [106]. We then obtain the size of the misaligned potential µ b as a derived parameter, by means of (2.6) and (B.3). We fix M = 1, N = 6 as example values and correspondingly set c = 4.48 according to the simulations [66]. We also vary β, δ as in Table 1. Posterior distributions are shown in Fig. 8.
Let us now discuss the possibility that ∆V originates from QCD (this is not required by PTA data). This corresponds to setting µ b 75.5 MeV, for which one finds ∆V 80 MeV, for example values N = 3, M = 1, and ∆V 60 MeV, for N = 6, M = 1. These values fit nicely inside the marginalized 2σ posteriors for ∆V inferred from IPTADR2 (and may also fit NG data well if future noise analyses of NG12 data find better agreement with IPTADR2).
Of course, if there is no other axion field, a needs to solve the strong CP problem and thus H and QCD need to be aligned down to δ 10 −10 . This can be realized in so-called heavy QCD axion scenarios (see e.g. [82][83][84][85] for recent work). However, such alignment is typically ensured by means of a symmetry (e.g. Z 2 ), and thus it is often the case that M = N and QCD cannot actually induce DW annihilation. If this is the case, annihilation needs to occur due to other sources of U (1) breaking, such as those considered above.
On the other hand, if a second axion b which couples only (mostly) to QCD and solves the strong CP problem exists, the two sectors can be generically misaligned and unrelated (see [87] for a discussion). This case then appears more promising to realize DW annihilation from QCD instantons. Let us mention that GWs from multi-axion DW networks have been considered in the so-called clockwork model [79] (see also [80]). Beyond a specific form of the axion potentials, these works also assumed that the U (1) symmetries generating the axion fields are broken at the same scale, and concluded that the network is long-lived only when the number of axions is at least three. It would be interesting to extend the analysis of [79] to the more general two-axion string-wall network considered in our work and to understand whether an additional axion is required in our case as well.
Whenever the axion a couples to QCD, it can efficiently decay to SM gluons or photons, as described in [89], with a decay rate Γ a→gg,γγ ∝ m 3 a /F 2 a . We verified that for most of the parameter space in Fig. 4, apart from a small corner in the upper left part, such decays are efficient at T . | 2022-04-12T01:16:05.776Z | 2022-04-08T00:00:00.000 | {
"year": 2022,
"sha1": "21dac671bf0a972b3ad3b652e20e681708e0f506",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "21dac671bf0a972b3ad3b652e20e681708e0f506",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
109687235 | pes2o/s2orc | v3-fos-license | The Performance of Panel Unit Root and Stationarity Tests: Results from a Large Scale Simulation Study
This paper presents results on the size and power of first generation panel unit root and stationarity tests obtained from a large scale simulation study. The tests developed in the following papers are included: Levin et al. (2002), Harris and Tzavalis (1999), Breitung (2000), Im et al. (1997 2003), Maddala and Wu (1999), Hadri (2000), and Hadri and Larsson (2005). Our simulation set-up is designed to address inter alia the following issues. First, we assess the performance as a function of the time and the cross-section dimensions. Second, we analyze the impact of serial correlation introduced by positive MA roots, known to have detrimental impact on time series unit root tests, on the performance. Third, we investigate the power of the panel unit root tests (and the size of the stationarity tests) for a variety of first order autoregressive coefficients. Fourth, we consider both of the two usual specifications of deterministic variables in the unit root literature.
INTRODUCTION
Panel unit root and stationarity tests have become extremely popular and widely used over the last decade. The fact that several such tests are now implemented in commercial software will lead to further increased usage. Thus it is important to collect evidence on the size and power of these tests with large-scale simulation studies in order to provide practitioners with some guidelines for deciding which test to use (for a specific problem or sample size at hand).
All tests included in this study are so called first generation tests that are designed for cross-sectionally independent panels. This admittedly very strong assumption simplifies the derivation of the asymptotic distributions of panel unit root and stationarity tests considerably. We include the panel unit root tests developed in the following papers: Levin et al. (2002), Harris and Tzavalis (1999), Breitung (2000), Im et al. (1997Im et al. ( , 2003, and Maddala and Wu (1999). We also include two panel stationarity tests, developed in Hadri (2000) and Hadri and Larsson (2005). We include also a discussion concerning the effect of two commonly used forms of cross-sectional covariance on the test performance (constant covariance and a covariance matrix in Toeplitz form; see the details in Section 3). It turns out that the performance comparison across tests is remarkably robust to these two covariance structures; some examples are displayed in the appendix. Therefore the presentation of the results focuses on crosssectionally independent panels.
To overcome the cross-sectional independence restriction of first generation tests, in recent years several tests that allow for some form or another of cross-sectional dependence have been developed. These include Bai and Ng (2004), Chang (2002), Choi (2002), Moon and Perron (2004), and Pesaran (2003). The most general results are derived in Bai and Ng (2004) with a factor model approach, who allow for (multiple) common stationary and integrated components. The other papers mentioned generally allow only for stationary common components, which may be insufficient for many practical applications. For example in purchasing power parity studies, the base country price index is a potential candidate for a nonstationary common component (for a detailed discussion see Wagner, 2005). Except for the factor model approach, the theory for nonstationary cross-sectionally dependent panels appears to still be in an early stage, and no widely accepted modeling strategies for cross-sectional dependencies have emerged up to now. For example, for macroeconomic panels it may be necessary to consider dependence structures that are invariant to the ordering of the panel (see, e.g., Gregoir, 2004) or that include some notion of (economic) distance (see, e.g., Chen and Conley, 2001). A detailed simulation study of second generation panel unit root tests, including in particular also a discussion of the relative merits and limitations of approaches proposed in the literature, is undertaken in ongoing work. 1 In our simulation study we are primarily interested in the following aspects. 2 First, we investigate the performance of the tests depending upon 1 Thus in a sense the present paper can be seen as the first stage in our simulation agenda. Note also that in applications mainly first generation tests are used, which makes a detailed understanding of their performance relevant. 2 Our simulation study is based on ARMA(1, 1) processes, respectively on AR(1) processes if the MA coefficient is equal to 0, given by (ignoring deterministic components here for brevity) y it = y it −1 + u it with u it = it + c it −1 , where it ∼ N (0, 1), and it is independent of jt for i = j . The parameter c is equal to minus the moving average root. N → ∞ following T → ∞, N /T → 0 (for cases 2 and 3) HT N → ∞ and T fixed UB N → ∞ following T → ∞ IPS White noise: N → ∞ and T fixed Serial correlation: N → ∞ following T → ∞, N /T → k > 0 MW N , T fixed, approximation of ADF p-values for finite T H LM N → ∞ following T → ∞ H T N → ∞ and T fixed the time series and cross-sectional dimensions. Since in the derivation of the asymptotic test statistics, differing rates of divergence for the time series and the cross-sectional dimension are assumed for different tests (see Table 1), it is interesting to analyze the performance of the tests when varying the time and cross-sectional dimensions of the panel. We take for both the time dimension T and the cross-sectional dimension N all values in the set 10,15,20,25,50,100,200 . Thus we investigate in total forty-nine different panel sizes. Second, we assess the impact of serial correlation on the performance of the tests. We model serial correlation by simulating ARMA(1, 1) processes and let the moving average roots tend toward 1. It is well known from the time series unit root literature (e.g., Agiakloglou and Newbold, 1996;Schwert, 1989) that unit root tests suffer from severe size distortions for large positive moving average roots. 
This is as expected, since in the case of a moving average root at 1, the unit root is cancelled and the resultant process is stationary (see also the more detailed discussion on this issue in Section 3). In our study we consider moving average roots in the set 0 2, 0 4, 0 8, 0 9, 0 95, 0 99 and also include the case of no moving average root. This latter case corresponds in our simulation design to serially uncorrelated errors, which is also the special case for which some of the tests listed above are developed (e.g., the test of Harris and Tzavalis, 1999). Third, we study the performance as a function of the first-order autoregressive coefficient . For the power analysis of the panel unit root tests we take in the set 0 7, 0 8, 0 9, 0 95, 0 99 , and for the size analysis of the stationarity tests ∈ 0, 0 1, 0 2, 0 3, 0 4, 0 5 . Fourth, we investigate the performance of the tests for the two most common, and arguably for (macro)economic time series most relevant, specifications of deterministic variables. These are intercepts in the data generating process (DGP) when stationary but no drifts when integrated (referred to as case 2), and intercepts and linear trends under stationarity and drifts when integrated (referred to as case 3). 3 The total set of results, comprising about 170 pages of tables and about 30 pages with multiple figures, is available from the authors upon request. In Section 3 of the paper we discuss the main observations and display some representative results graphically. A brief outlook on some of the main findings is: The relative size of the panel (i.e., the size of T relative to N ) has important influence on the performance of the tests. Especially for T ≤ 50 the performance of all tests is strongly influenced by the cross-sectional dimension N . For increasingly negative MA coefficients, as expected, size distortions become more prominent and especially for large negative values of c the size diverges to 1 (even for T , N → 200). The general impression concerning the size behavior is that the Levin et al. (2002) and Breitung (2000) tests have their size closest to the nominal size. There are, however, exceptions (see the discussion in Section 3.1). Concerning power we observe that for case 2 either the Levin et al. (2002) test or the Breitung (2000) test has the highest power, whereas in case 3 there exist parameter constellations and sample sizes such that each of the considered tests has highest power.
The stationarity tests show very poor performance. The tests essentially reject the null hypothesis of stationarity for all processes that are not "close to white noise," for all but the smallest values of T . This finding is not inconsistent with the fact that empirical studies usually reject the null hypothesis of stationarity when using the tests of Hadri (2000) or Hadri and Larsson (2005).
The paper is organized as follows: Section 2 describes the implemented panel unit root and stationarity tests. Section 3 presents the simulation setup and discusses the simulation results, and Section 4 draws some conclusions. An appendix containing additional figures follows the main text.
PANEL UNIT ROOT AND STATIONARITY TESTS
In this section we describe the implemented panel unit root and stationarity tests. We include a relatively detailed description here for two reasons. First, the detailed description allows the reader to see the differences and similarities across tests clearly in one place. Second, our description is intended to be detailed enough to allow the reader to implement the tests herself.
The data generating process (DGP) for which the considered tests are designed is in its most general form given by where i , i ∈ and −1 < i ≤ 1. 4 The noise processes u it are stationary ARMA processes, i.e., the stationary solutions to a i (z) = 0 for all |z| ≤ 1 and a i (z) and b i (z) relative prime. The innovation sequences it are i.i.d. with variances 2 i and finite fourth moments and are assumed to be cross-sectionally independent.
The above assumptions on the noise processes are stronger than required for the applicability of functional limit theorems. In particular the assumptions guarantee a finite long-run variance of the processes u it , i.e., a bounded spectrum of u it at frequency 0. For stationary ARMA processes the long-run variance, 2 ui,LR say, is immediately found to be 2 i b 2 i (1)/a 2 i (1). 5 Some of the tests discussed below are designed for more restricted DGPs than the general DGP given in (1). In particular, some tests are designed for serially uncorrelated noise processes u it .
As in the time series unit root literature, three specifications for the deterministic components are considered in the panel unit root literature. These are DGPs with no deterministic component (d 1t = ), DGPs with intercept only (d 2t = 1 ), and DGPs containing both intercept and linear trend (d 3t = 1, t ). Exactly as in the time series literature, three cases concerning the deterministic variables in the presence of a unit root and under stationarity are considered most relevant. Case 1 contains no deterministic components in both the stationary and the nonstationary cases, case 2 allows for intercepts in the DGP when stationary but excludes a drift when integrated, and case 3 allows for intercepts and linear trends under stationarity and for a drift when a unit root is present.
Panel Unit Root Tests
Levin, Lin, (and Chu). We start the description of the unit root tests with the Levin and Lin (1993) tests, abbreviated by LL93 henceforth. Their results have only been recently published in Levin et al. (2002). 6 The null hypothesis of the LL93 test is H 0 : i = 1 for i = 1, , N , against the homogeneous alternative H 1 1 : −1 < i = < 1 for i = 1, , N . Thus under the homogeneous alternative the first-order serial correlation coefficient 4 In all our simulations we restrict attention to balanced panels, i.e., to panels where the number of observations is identical for all cross-sectional units. This is of course not required for all tests investigated. Some cross-sectional dependence can be handled with the tests discussed by including (random) time effects, t say. See the discussion above Section 3.1. 5 Solving the ARMA equation for the Wold representation u it = c i (z) t = ∞ j =0 c ij t −j , the (shortrun) variance of u it is given by 2 ui = 2 i ∞ j =0 c 2 ij , and the long-run variance is given by 2 is required to be identical in all units. This restriction stems from the fact that the test statistic is computed in a pooled fashion.
The approach is most easily described as a three-step procedure, with preliminary regressions and normalizations necessitated by cross-sectional heterogeneity. 7 In the first step for each individual series an ADF type regression of the form 8 is performed, where v it denotes the residual process of the AR equation. If the processes are AR processes and the AR orders p i are specified correctly, then v it = u it holds. Here and throughout the paper m indexes the case considered. The lag lengths in the autoregressive test equations have to be increased appropriately as a function of the time dimension of the panel to ensure consistency, if the processes u it are indeed ARMA processes. More specifically, p i (T ) ∼ T , with 0 < ≤ 1/4 assumed in the ARMA case. It appears possible that this condition of Levin et al. (2002) may be relaxed, given the results of Chang and Park (2002) brought to our attention by a referee, who derive ADF test asymptotics for p = o(T 1/2 ). In practical applications some significance testing on the estimatedˆ ij , an information criterion or checking for no serial correlation in the estimated residualsv it , is used to determine the lag lengths p i . Then, for chosen p i , orthogonalized residuals are obtained from two auxiliary (or partitioned) regressions:ẽ it say, from a regression of y it on the lagged differences y it −j , j = 1, , p i and d mt , andf it −1 say, from a regression of y it −1 on the same set of regressors. These residuals are standardized by the regression standard error from regressingẽ it onf it −1 ,ˆ vi say, to obtain the standardized residualsê it and f it −1 . This step is necessary to correct for cross-sectionally heterogeneous 7 Up to the computation of correction factors to account for cross-sectional heterogeneity, the procedure consists essentially of the usual two regressions well known in unit root and cointegration testing. These two regressions are the regressions of both y it and y it on the lagged differences y it −j and deterministic components. Then, the residuals of these two regressions are regressed onto each other to compute the first-order serial correlation coefficient, respectively, its t -statistic. These regressions can be performed in a pooled fashion if the panel is homogeneous. However, in heterogeneous panels, the optimal lag orders will in general differ across units. Furthermore, crosssectional heterogeneity necessitates additional correction steps described in the text. A referee has pointed out to us that such an approach is known as a partitioned regression in microeconometrics. In the context of the standard regression model the feasibility of this approach is the content of the famous Frisch-Waugh theorem. 8 Actually, it is recommended by Levin et al. (2002) that in a first step the cross-section averagē y t = 1 N N i=1 y it is removed from the observations. This stems from the fact that the presence of timespecific aggregate effects does not change the asymptotic properties, when the tests are performed on the transformed variables y it −ȳ t . Thus, as indicated already, a limited amount of dependence across the errors is allowed for, in a form that can easily be removed. See the discussion above Section 3.1 on cross-sectional dependence. variances to allow for efficient pooled OLS estimation of ( − 1) at a later stage; see (4) below.
The second step is to obtain an estimate of the ratio of the long-run variance to the short-run variance of y it , or equivalently of u it . This is required for the construction of mean (and variance) correction factors, since the t -statistic based on (4) diverges under the null hypothesis for cases 2 and 3. Therefore to obtain a nondegenerate limiting distribution, correction factors are required. The definition of the long-run variance, , immediately leads to an estimator of the formˆ where the lag truncation parameter L can be chosen, e.g., according to Andrews (1991) or Newey and West (1994). In the above equation we choose as estimate for the unobserved noiseû it = y it −ˆ mi d mt . 9 In our simulations the weights are given by w(j , L) = 1 − j L+1 . This kernel is known as the Bartlett kernel. The estimated individual specific ratio of long-run to short-run variance is defined asŝ i , which is used later for the construction of correction factors to adjust the t -statistics of the hypothesis that i := ( i − 1) = 0 for i = 1, , N .
The test statistic itself, which can be based on either the coefficientˆ or the corresponding t -statistic, is computed from the pooled regression of The null hypothesis is H 0 : = 0, and the test we use in the simulations is based on the corresponding t -statistic, t say. The standard deviation ofˆ as given in (4), STD(ˆ ) say, can be straightforwardly computed from the pooled regression residuals, since due to the prefiltering all the errors in this pooled regression have the same (asymptotic) variance.
For case 1, Levin and Lin (1993) show that t ⇒ N (0, 1). For cases 2 and 3, the t -statistic t diverges to minus infinity and thus has to be recentered and normalized to induce convergence toward a well-defined 9 Note that a direct estimate for the long-run variance is given byˆ 2 Levin et al. (2002) indicate that variance estimation based on the first differences is found to have a smaller bias under the null hypothesis, which in turn should help to improve both (finite sample) size and power of the panel unit root test.
Here mT and mT denote mean and variance correction factors, tabulated for various panel dimensions in Table 2 on page 14 of Levin et al. (2002).
T denotes the average effective sample size across the individual units. The adjusted t -statistics t * converge to the standard normal distribution for cases 2 and 3.
Harris and Tzavalis. The test of Harris and Tzavalis (1999), labeled HT , augments the analysis of Levin and Lin (1993) by considering inference for fixed T and asymptotics only in the cross-section dimension N . However, their results (closed form correction factors as a function of T ) are obtained only for serially uncorrelated errors. All three cases for the deterministic variables are considered. For fixed T , the authors derive asymptotic normality (for N → ∞) of the appropriately normalized and centered coefficientsˆ (which are for cases 2 and 3 inconsistent for T → ∞, as can be seen from the above discussion). In particular, with The practical relevance of this result is to obtain improved tests for panels with small T and large N . E.g., for case 1, the variance scaling factor used for testing is-when the limit is taken only with respect to Nby a factor T /(T − 1) smaller than the LL93 scaling factor. This implies immediately that, compared to the fixed-T test, the LL93 test will be oversized, i.e., the test based on test statistics using correction factors based on asymptotics in both T and N will reject the null hypothesis more often. The drawback of the Harris and Tzavalis results is the mentioned restriction to white noise errors. (2000) develops a pooled panel unit root test that does not require bias correction factors, which is achieved by appropriate (depending upon case considered) variable transformations. Due to its pooled construction, also the Breitung test, UB henceforth, is a test against the homogeneous alternative.
Breitung. Breitung
In case 1, this test coincides exactly with the Levin et al. (2002) test, since in this case no bias corrections are required. For case 2, bias correction factors are avoided by subtracting the initial observation. Subtracting the initial observation instead of the mean circumvents the Nickell bias.
Thus case 2 is equal to case 1 of LL93 on the transformed variables y it = y it − y i0 . In both cases the asymptotic distribution of the test statistic is standard normal without the need of resorting to correction factors.
For case 3, slightly more complicated transformations have to be applied, after serial correlation has been removed with first step regressions. There are two ways of removing the serial correlation: the first is resorting to preliminary regressions as in the description of the Levin et al. (2002) test, and the second, suggested by Breitung and Das (2005) to have better small sample performance, is prewhitening. 10 Prewhitening involves in the first step the regressions (for each i) from which the prewhitened variablesẽ it andf it −1 are computed as Note that at this step no correction for the mean or trend is performed. The prewhitened variables are next standardized by the regression standard error of (7) to obtainê it andf it −1 . Here we use for simplicity the same notation as for the residuals obtained via auxiliary regressions. We do this for notational simplicity and also because the two approaches are asymptotically equivalent. Finallyê it andf it −1 are transformed as The above transformations demean e * it and demean and detrend f * it −1 . Here we denote for notational simplicity by T also the sample size after the auxiliary regressions. Now the unit root test is performed in the pooled regression 10 Prewhitening is based on the idea of deriving an estimator of the nuisance parameters under the null hypothesis. As has been pointed out by a referee it is equivalently possible to perform the correction for the short-run dynamics in a similar way as for the Levin et al. (2002) test. However, from personal communication with Jörg Breitung, we have learned that prewhitening results in better small sample properties in his simulation experiments. by testing the hypothesis H 0 : * = 0. Breitung (2000) shows that the t -statistic of this test has a standard normal limiting distribution.
We now turn to panel unit root tests that are designed to test against the heterogeneous alternative H 2 1 : −1 < i < 1 for i = 1, , N 1 and i = 1 for i = N 1 + 1, , N . For asymptotic consistency (in N ) of these tests, a nonvanishing fraction of the individual units has to be stationary under the alternative, i.e., lim N →∞ N 1 /N > 0. The tests are based on group-mean computations, i.e., on appropriately combined individual time series unit root tests.
Im, Pesaran, and Shin. In two papers, Im et al. (1997Im et al. ( , 2003, henceforth abbreviated as IPS, present two group-mean panel unit root tests designed against the heterogeneous alternative. IPS consider only cases 2 and 3 and allow for individual specific autoregressive structures and individual specific variances. The same arguments as used in Levin and Lin (1993) might cover the case of ARMA disturbances, with the lag lengths in autoregressive approximations increasing with the sample size at an appropriate rate. Im et al. seem to share this view, given that one of the reported simulation experiments is based on moving average dynamics for the errors.
Note that in order to apply the tables with correction factors provided by Im et al., identical autoregressive lag lengths for all units and a balanced panel are required. The two tests are given by a t -test based on ADF regressions (IPS t ) and a Lagrange multiplier (LM ) test (IPS LM ).
We now describe the construction of the t -test for serially correlated errors. For the moment we focus on only one unit i. The errors u it are assumed to follow an AR (p i + 1) process. Thus the t -test statistic from the ADF regression (2) can be written as follows, with m = 2, 3 indicating again the deterministic terms present in the regression: suppressing the index m in the matrix notation for Q i and X i ). For finite values of T , the statistics t iT ,m depend upon the nuisance parameters i . IPS show that this dependence vanishes for T → ∞, but that the bias of the individual t -statistics under the null hypothesis remains. This follows from the fact that under the null hypothesis convergence to the Dickey-Fuller distribution corresponding to the DGP and model prevails. Therefore mean and variance correction factors have to be introduced. The proposed test statistic itself is the cross-sectional average of the corrected t -statistics: are simulated for m = 2, 3 for a set of values for T and lag lengths p (see Table 3 in Im et al., 2003). Thus, without resorting to further tailor made Monte Carlo simulations, the applicability of the IPS tests is limited to balanced panels and identical lag lengths in all individual equations (and error processes). Simulating the mean and variance only as a function of the lag length and setting the nuisance parameters i = 0 introduces a bias of order O p (1/ √ T ) but still takes into account the finite sample effect of the different lag lengths chosen. 11 Note that for T → ∞ the t -statistics converge to the Dickey-Fuller distributions and thus the asymptotic correction factors are the mean and variance of the Dickey-Fuller statistic corresponding to the model. Thus if one wants to avoid using simulated critical values one can also refer to the asymptotic values for T → ∞ (which has the additional advantage of allowing the use of cross-section-specific lag lengths p i ).
Let us now turn to the Lagrange multiplier test. Using the Lagrange multiplier test principle implies that the alternative hypothesis is actually given by i = 1 as opposed to i < 1, although the authors propose to use a 1-sided test nevertheless (see Im et al., 1997, Remark 3.2). For each individual unit the test statistic is given by As for the t -test, for T → ∞ the dependence upon nuisance parameters disappears. Paralleling the above argument, the Lagrange multiplier panel unit root test statistic is given by where The correction factors are available in Im et al. (1997). Maddala and Wu (1999) tackle the panel unit root testing problem with a very elegant idea dating back to Fisher (1932). Note that Choi (2001) presents very similar tests that only differ in the scaling in order to obtain asymptotic normality for N → ∞. The basic idea of Fisher can be explained with the following simple observations that hold for any testing problem with continuous test statistics: First, under the null hypothesis the p-values, say, of the test statistic are uniformly distributed on the interval [0, 1]. Second, −2 log is therefore distributed as 2 2 , with log denoting the natural logarithm. Third, for a set of independent test statistics, −2 N i=1 log i is consequently distributed as 2 2N under the null hypothesis.
Maddala and Wu.
These basic observations can be very fruitfully applied to the panel unit root testing problem, provided that cross-sectional independence is assumed. Any unit root test with continuous test statistic performed on the individual units can be used to construct a Fisher type panel unit root test, provided that the p-values are available or can be simulated. We implement this idea by applying ADF tests on the individual units. For ADF tests, estimated p-values for cases 1 to 3 can be obtained owing to the extensive simulation work of James MacKinnon and his coauthors (see, for one example, MacKinnon, 1994). Note as a further advantage that the Fisher test requires neither a balanced panel nor identical lag lengths in the individual equations. We have implemented the test for cases 1 to 3 based on individual ADF tests. They are labeled as MW m for m = 1, 2, 3 (ignoring the dependence upon ADF in the notation).
Panel Stationarity Tests
Hadri. Hadri (2000) proposes a panel extension of the Kwiatkowski et al. (1992) test, labeled H LM henceforth. Cases 2 and 3 are considered. The null hypothesis is stationarity in all units against the alternative of a unit root in all units. The alternative hypothesis of a unit root in all cross-sectional units stems from the fact that this test is based on pooling. Individual specific variances and correlation patterns are allowed for. We start our discussion of the test statistics, however, assuming for the moment serially uncorrelated errors, and only allow for individual specific variances 2 i . The test is constructed as a residual based Lagrange multiplier test with the residuals taken from the regressions it . Recentering and rescaling the expressions by subtracting their mean and dividing by their standard deviation gives rise to asymptotic standard normality Owing to the simple shape of the correction terms, closed form solutions for the correction factors can be easily obtained. They are given by 2 = 1/6, 2 = 1/45, and 3 = 1/15, 3 = √ 11/6300. The extension to serially correlated errors is straightforward; the variance estimatorˆ 2 ei only has to be replaced by an estimator of the long-run variance of the noise processes in (17). Hadri and Larsson (2005) extend the analysis of Hadri (2000) by considering the statistics for fixed T (the test is therefore abbreviated by H T ). The key ingredient for their result is the derivation of the exact finite sample mean and variance of the Kwiatkowski et al. (1992) test statistic that forms the individual unit building block for the Hadri type test statistic. For cases 2 and 3 they compute the exact mean and variance of iTm = 1/T 2 T t =1 S 2 iT /ˆ 2 ei , which is the core expression of the Hadri type test statistics, compare (18). Standard asymptotic theory for N then delivers asymptotic normality
Note finally that serial correlation can be handled again by computing the individual specific long-run variances as discussed several times in this section. However, since the long-run variance generally has to be estimated, the corresponding test will not have exactly the same distribution as in the case of serially uncorrelated errors. In other words, result (20) does not hold exactly in case of serially correlated errors for finite T if a long-run variance estimator is used. The resultant distortions in the test distribution depend upon the unknown long-run variances and thus cannot be quantified in applications. This implies that the usefulness of the Hadri and Larsson (2005) finite T test for serially correlated errors is hard to assess.
The tests discussed in this section are based on different limit arguments. The most widely used concept is that of a sequential limit where first T → ∞ followed by N → ∞, employed in the tests of Levin et al. (2002), Breitung (2000), Im et al. (1997Im et al. ( , 2003, and Hadri (2000). Some of the tests require furthermore a relative rate restriction, e.g., N /T → 0 (Levin et al., 2002) or N /T → k (Im et al., 1997(Im et al., , 2003. As has been seen above, inference for fixed T and only N → ∞ is only developed for the case of serially uncorrelated errors. This is done in Harris and Tzavalis (1999) for the Levin et al. tests and in Hadri and Larsson (2005) for the Hadri tests. Thus the performance of such tests will depend upon the magnitude of both T and N and may also depend upon the relative magnitude of the time and cross-section dimension of the panel. This is one of the issues to be analyzed with the simulation study.
The tests following the Fisher principle developed in Maddala and Wu (1999) are the only ones derived for fixed N and T . However, the critical values for the ADF tests have to be approximated for finite T . We summarize the asymptotics used in the derivation of the test statistics in Table 1. A detailed discussion of the relevant limit concepts for nonstationary panels and the relations among the different limit concepts is contained in Phillips and Moon (1999).
THE SIMULATION STUDY
In this section we present a representative selection of results obtained from our large-scale simulation study. Due to space constraints we only report a small subset of results and focus on some of the main observations that emerge. The full set of results is available from the authors upon request.
We only report results for cases 2 and 3, since case 1 is of hardly any empirical relevance for economic time series. The computations have been performed in GAUSS with a substantially extended, corrected, and modified set of routines based originally on Chiang and Kao (2002). A list containing the major changes is available upon request. The number of replications is 10,000 for each DGP and sample size. Both the time dimension T and the cross-sectional dimension N assume all values in the set 10, 15, 20, 25, 50, 100, 200 . Thus we consider in total 49 different panel sizes. The performance of the tests in relation to the sample dimensions T and N is one aspect of interest in our simulations. Remember from the discussion in the previous section that the tests rely upon different divergence rates for T and N ; compare again Table 1. One question in this respect is whether the finite-T tests of Harris and Tzavalis (1999) and Hadri and Larsson (2005) exhibit smaller size distortions than their asymptotic-T counterparts for panels with T small (compared to N ).
The DGPs simulated for case 2 are of the form with it ∼ N (0, 1). The parameters chosen in the simulations are = [ 1 , , N ], , and c. We summarize the dependency of the DGP upon these parameters notationally as DGP 2 ( , , c). Note for completeness that the formulation of the intercepts as i (1 − ) ensures that in the unit root case (when = 1) no drift appears. Consequently, when = 1 we set = 0 in the simulations for computational efficiency. Otherwise, the coefficients i are chosen uniformly distributed over the interval 0 to 4, i.e., i ∼ U [0, 4]. We parameterize case 3, DGP 3 ( , , c), as with it ∼ N (0, 1). This formulation allows for a linear trend in the absence of a unit root and for a drift in the presence of a unit root. The coefficients i are, as for case 2, U [0, 4] distributed. For the unit root tests the following values are chosen for : 0 7, 0 8, 0 9, 0 95, 0 99, and 1. 12 The former five values are used to assess the power of the tests against the stationary alternative. For the stationarity tests we only report results for ∈ 0, 0 1, 0 2, 0 3, 0 4, 0 5 for the size analysis. These values are chosen because preliminary simulations have shown that the stationarity tests fail to deliver acceptable results for larger values, i.e., for ∈ 0 6, 0 7, 0 8, 0 9, 0 95, 0 99 .
For the moving average parameter c we choose all values in the set 0, −0 2, −0 4, −0 8, −0 9, −0 95, −0 99 for the size study of the panel unit root tests and the power study of the stationarity tests, and c ∈ 0, −0 2, −0 4 for the power study of the panel unit root tests and the size study of the stationarity tests. Why do we choose 0 and negative values approaching −1? It is well known from the time series unit root literature, compare Schwert (1989) or Agiakloglou and Newbold (1996), that unit root tests suffer from severe size distortions in the presence of large positive MA roots. In the boundary case with the MA coefficient equal to −1, the unit root is cancelled and the resultant process is stationary. Thus the closer the coefficient c is to −1, the larger the size distortions are expected to be for any given sample size. 13 These observations are rooted in the asymptotic theory of autoregressive approximations for rather general process classes (for the multivariate case see Bauer and Wagner, 2005). Such results show that the approximation quality of autoregressive approximations depends (in case of ARMA processes) upon the MA root closest to the unit circle in absolute values. Therefore with ARMA(1, 1) processes we can control directly the relevant dimension of the approximation quality of autoregressive models and at the same time allow for higher order serial correlation with only one parameter. This is the main reason for choosing ARMA processes and not AR processes with higher lag lengths as DGPs. Concerning the question of the relevance of ARMA processes for econometric modeling, we want only briefly to mention two motivations here. First, Zellner and Palm (1974) show that any subset of variables of a vector autoregressive process generally follows a (vector) ARMA process. Since (panel) unit root tests are often used as an individual variable preliminary step for (panel) vector autoregressive modeling, this shows that the robustness of the performance with respect to ARMA processes is very important. Second, more structurally, Campbell (1994) shows within the real business cycle paradigm that the exactly linearized solution processes to dynamic stochastic general equilibrium models are typically ARMA processes.
With our setup we can analyze the extent of the size distortions as a function of both N and T . The value c = 0 serves as a benchmark case with no serial correlation and is also the special case for which the test of Harris and Tzavalis (1999) is designed. For c = 0, the choice of the lag lengths in the autoregressive approximations that most of the tests are based on becomes potentially important. We try to assess the importance of this choice by running the panel unit root tests (in case of MA errors) for several choices for the autoregressive lag length. One of our choices is BIC. We, however, also compute the test statistics for c = 0 for autoregressive lag lengths varying from 0 to 2 (since 2 is for all values of c ≥ −0 4 the maximum lag length according to BIC), to assess the influence of the lag 13 It is straightforward to show that the asymptotic bias for T → ∞ ofˆ , estimated from an AR(1) equation when the errors are not white noise but MA(1), is linear in the MA coefficient c. This holds both in the stationary and the integrated case. Note that in case that c = −1, Equations (21) and (22) are unidentified ARMA systems that allow for stationary solutions. The lack of identifiability for c = −1 stems from the fact that the autoregressive and the moving average polynomial are not left coprime or in other words contain a common factor. length selection on the size behavior (see the discussion below on the effect of lag length selection). 14 Note that we choose the value of c identical for all cross-section members. We do this to study "cleanly" the effect of the moving average coefficient approaching −1, which is harder to assess when the MA coefficients are drawn randomly for the cross-section units.
The careful reader will have observed that our simulated DGPs all have a cross-sectionally identical coefficient under both the null and the alternative. Thus we are in effect in a situation where we generate data either under the null hypothesis or under the homogeneous alternative. We do this because only the more restrictive homogeneous alternative can be used for all tests described in the previous section. This implies to a certain extent that we do not explore the additional degree of freedom that the tests against the heterogeneous alternative hypothesis (IPS and MW) possess. Thus, to a certain extent, the pooled tests are favored in our comparison, since the last step regression to estimate is for these tests one pooled regression with about N (T − p) observations and consists of N regressions with T − p observations for the group-mean tests (denoting with p the autoregressive lag length). An analysis of group-mean tests and their performance under the heterogeneous alternative is not considered separately in this paper. The relative ranking of the group-mean tests in our simulations may however still serve as an indicator for the relative performance of these tests. 15 As indicated in the introduction we also simulate DGPs that allow for cross-sectional correlation. Denote with = [ ij ] i,j =1, ,N the covariance matrix of it . Then we allow for two different forms of covariance, labeled constant covariance ( CC ) and a covariance matrix in Toeplitz form ( TP ): Note that in our simulation setup, owing to the unit variances of it these coincide with the correlation matrices. The first of the two covariance matrices has, e.g., been used in O'Connell (1998), and the second corresponds to a spatial autoregression of order 1 (interpreting the crosssection dimension spatially). In our simulations we take the (correlation) coefficient in the set 0 3, 0 6, 0 9 , where of course = 0 is the crosssectionally uncorrelated case. The major insight we have obtained from these additional simulations with cross-sectional correlation is that the performance rankings of the tests (for both size and power) are essentially unchanged compared to the cross-sectionally uncorrelated case. For CC this is not so surprising, since by applying cross-sectional demeaning to such a process asymptotically for N → ∞ removes the cross-sectional correlation. To be precise, the covariance matrix of the cross-sectionally demeaned innovations, Thus, in the case of constant covariance, cross-sectional demeaning decreases the cross-sectional covariance to ( − 1)/N . This explains why for such processes cross-sectional demeaning leads to comparable results as in the cross-sectionally uncorrelated case. However, even when abstaining from cross-sectional demeaning the rankings are very robust. Some authors, e.g., Levin et al. (2002), suggest to cross-sectionally demean as a first step in any case (which as mentioned above also removes for N → ∞ time specific aggregate effects t ). From this perspective therefore this case of cross-sectional covariance is seen to be not a great problem. The second studied case is a bit more complicated, since cross-sectional demeaning does not lead to monotonic reductions of the correlations, not even when N → ∞. Therefore for the Toeplitz case the results without cross-sectional demeaning may be more relevant. The finding-in its extent surprising-is that the orderings across tests are extremely robust with respect to this form of cross-sectional dependence. 
Of course, all tests become increasingly distorted with increasing . This holds both for size and power and also for panel unit root and stationarity tests; see Figure 8 for panel unit root tests and Figure 9 for panel stationarity tests in the Appendix. These findings (with more detailed results upon request) lead us to report here only the results for cross-sectionally independent panels. We only want to stress again that for the cross-sectional correlations investigated, the differences to the results obtained for cross-sectionally uncorrelated panels remain small for up to 0.6 and the rankings across tests remain almost unchanged throughout. Levin et al. (2002) and the Harris and Tzavalis (1999) tests for case 2 with serially uncorrelated errors (DGP 2 (0, 1, 0)). The LL93 2 results are displayed with solid lines with bullets, and the HT 2 results are displayed with dashed lines with stars.
Size of Panel Unit Root Tests
In this subsection we report the results of the analysis of the actual size of the panel unit root tests. In this study we use the word size to denote the type I error rate at the actual DGP. This is not the size as defined by the maximal type I error rate over all feasible DGPs under the null hypothesis, see Horowitz and Savin (2000) for an excellent discussion of this issue. The nominal critical level in the simulation study is 5%. As noted above, the Harris and Tzavalis (1999) test is only designed for serially uncorrelated errors. Thus this test is only computed for c = 0. All other tests (LL93, UB, IPS t , IPS LM , and MW ) are computed for all values of c.
We start with case 2 in Figures 1 and 2 and display results for case 3 in Figures 3 and 4. For these and all other figures, it is always the crosssectional dimension N that varies along the horizontal axis. 16 Figure 1 displays for c = 0 a comparison of the size of the LL93 2 test and the HT 2 test, which is-as has been discussed-a fixed T version of the LL93 2 test (for serially uncorrelated errors). The graphs display the size for all values of N for T ∈ 10, 25, 100 . It becomes clearly visible that for small T like 10, the Harris and Tzavalis (1999) test has superior size performance. The difference in size performance increases with N , for both T = 10 and T = 25 (in the latter case for N ≥ 25). This, of course, can be traced back to the fact that the asymptotic normality and the corresponding critical values of the LL93 2 test are based on sequential limit theory with N → ∞ following T → ∞ and furthermore with lim N /T → 0; see Table 1. For larger T , the improved performance FIGURE 2 Size of panel unit root tests for case 2 (DGP 2 (0, 1, c)) with c ∈ 0, −0 2, −0 4, −0 8, −0 9, −0 95 for T = 25. The solid lines with bullets correspond to LL93 2 , the solid lines with triangles correspond to UB 2 , the solid lines correspond to IPS t ,2 , the dash-dotted lines correspond to IPS LM ,2 , and the dashed lines correspond to MW 2 . of the ADF-type test performed in the LL93 2 test kicks in and starts to outweigh the performance deterioration with increasing N . For T = 100, the size of LL93 2 is monotonically decreasing toward 5% in the right graph of Figure 1.
Thus for panels with little or no serial correlation the HT test can be considered an interesting extension or implementation of the LL93 test. No serial error correlation is unfortunately a rare case for economic time series. We therefore turn next to study the size of the five panel unit root tests designed for serially correlated panels; see Figure 2. In this figure we display the size performance depending upon the MA parameter c for T = 25.
As a baseline case, and as a follow-up to the previous analysis, we include again the case c = 0 (the upper left graph of Figure 2). One sees that for short panels (similar results also hold for T = 10, 15, 20, not shown), in particular, the LL93 2 test and also the MW 2 test are increasingly oversized with increasing N . The two tests of Im et al. (1997Im et al. ( , 2003 and the Breitung (2000) test exhibit satisfactory size behavior. In particular, for these three tests, size is not increasing with N but stays close to the nominal level of 5%. Note, however, that for medium length panels with T = 50, 100, both the LL93 2 test and the MW 2 test exhibit satisfactory size behavior as well (for c = 0). The general summary for the serially uncorrelated case is that for all T investigated, the Im et al. (1997Im et al. ( , 2003 tests and the Breitung (2000) test have comparably acceptable size. The increase is slower for these tests than for the Levin et al. (2002) and the Maddala and Wu (1999) tests. Especially for T small relative to N , an application of the Harris and Tzavalis (1999) test offers an improvement over Levin et al. (2002).
For panels with increasingly negative serial correlation, i.e., with c → −0 99, size distortions become more prominent for any given T , as is illustrated for T = 25 in Figure 2. For this value of T , an MA coefficient of c = −0 4 is the "boundary" case (among the values of c investigated) for which for some tests the size does not rise sharply (i.e., up to 0.2 or higher) as N is increased to 200. For the more negative values of c, the size diverges for all tests to 1 for N ≥ 100. Somewhat surprisingly, also for the larger values of T , the "boundary" value for the MA coefficient is still given by c = −0 4. For T ≥ 50 and for c ∈ −0 8, −0 9, −0 95 , "size divergence" occurs again for N ≥ 100. 17 This divergence can be partly mitigated by using smaller values for the autoregressive lags than suggested by BIC. 18 In light of Table 1, this divergence might not be too surprising, as most tests' critical values are derived on the basis of sequential limit theory. There are, however, exceptions: the Maddala and Wu test is developed for finite N and uses a finite T approximation of the p-values for the individual ADF tests. For serially uncorrelated errors, furthermore, Im et al. (1997) provide critical values for the tests for finite T and only N → ∞. Thus we a priori expect the MW test (and the IPS tests for serially uncorrelated errors) to be less prone to the size distortions observed above. However, this is not observed throughout our simulations. The performance of the Maddala and Wu test as displayed in Figure 2 is quite representative. For c = 0 it shows the fastest size divergence for N → 200 and for c = 0 its size performance is in the ball park of other tests. What about the two IPS tests? Both tests exhibit rather similar behavior, and their size stays relatively stable close to the nominal value. Of course, for c becoming "too" negative some size distortions occur.
The tests that exhibit in most cases the slowest divergence of size as c is decreased towards −0 95 are the LL93 2 and (usually second slowest) the 17 Generally, for very small T = 10, 15, all tests exhibit smaller size distortions as a function of N than for larger T .
18 Surprisingly, performing no correction for serial correlation sometimes mitigates the "sizedivergence" for increasing N , in particular for c close to 0. For values of c close to −1, including more lags is in general preferable. The values of c close to −1 also lead, as expected, to larger lag lengths suggested by BIC for T ≥ 100. It is not clear whether these observations have practical implications or generalize beyond the MA(1) error processes simulated in this study. An investigation of this issue is left for future research. UB 2 test. This behavior is the large T extension of the behavior observed for the Levin et al. (2002) test for small T ∈ 10, 15, 25 . The nominal size of the LL93 2 test even decreases for fixed small T ∈ 10, 15, 25 for N tending to 200 for certain values of c (e.g., for T = 25, this holds for c = −0 2, −0 4). With increasing serial correlation, instead of being undersized this test has the slowest divergence of the size towards 1 for N → ∞. For the UB 2 test the behavior is different, since it displays relatively fast size divergence for the smaller values of c (see for an example the center graph in the upper row of Figure 2). Thus summarizing we find that for the panels with highly negative MA coefficients, the LL93 2 test is grosso modo the least distorted test, with in general a slight tendency for being undersized in small T and large N panels.
We now turn to case 3 and start again with a comparison of the Levin et al. (2002) and Harris and Tzavalis (1999) tests for c = 0. In Figure 3 we display as above results for T ∈ 10, 25, 100 . As in the case of random walks without drift, substantially smaller size distortions are observed for the Harris and Tzavalis (1999) test (in particular again for small T ). The differences for the larger values of T are slightly less pronounced than in case 2. For T ≥ 50, the size performance is very satisfactory also for large values of N .
In case of no serial correlation in u it , size divergence only occurs for T = 10, 15 for the LL93 3 test and at a lesser rate for the MW 3 test. For T = 25, all tests except the MW 3 test exhibit satisfactory size performance for all N . Only the MW 3 test still has size distortions up to 0.3 when N → 200 and T = 25. The two IPS tests have very similar performance. Thus, in case of no serial correlation, size divergence for N → 200 occurs only for the smallest values of T . The relative sample sizes are therefore not of great FIGURE 3 Size of the Levin et al. (2002) and the Harris and Tzavalis (1999) tests for case 3 with serially uncorrelated errors (DGP 3 ( , 1, 0)). The LL93 3 results are displayed with solid lines with bullets, and the HT 3 results are displayed with dashed lines with stars.
FIGURE 4
Size of panel unit root tests for case 3 (DGP 3 ( , 1, c)) with c ∈ 0, −0 2, −0 4 for T = 100. The solid lines with bullets correspond to LL93 3 , the solid lines with triangles correspond to UB 3 , the solid lines correspond to IPS t ,3 , the dash-dotted lines correspond to IPS LM ,3 , and the dashed lines correspond to MW 3 . concern as soon as T ≥ 25, and even for shorter panels three tests (UB 3 , IPS t ,3 , and IPS LM ,3 ) show satisfactory size performance.
With serially correlated errors, as in case 2, the value c = −0 4 is the boundary value for which not all tests' size diverges to 1 for T , N → 200. Two tests have substantially smaller size distortions (over a variety of combinations of T and N ) than the other tests. These are the Levin et al. (2002) and Breitung (2000) tests. For c ≥ −0 2 these two tests have size below 0.1 for all combinations of T and N , whereas the other tests' size is diverging to at least 0.8 for N → 200 and T ≥ 50. Also for c = −0 4 the LL93 3 test is not subject to size divergence. The size divergence behavior of the IPS LM ,3 , IPS t ,3 , and MW 3 tests is very similar. Thus for the case of random walks with drifts, the summary is that LL93 3 and UB 3 outperform the other tests. The major exception to this general rule occurs for T ∈ 10, 15 , where the IPS LM ,3 test shows good size properties and the LL93 3 test does not yet appear so favorable. This is to a certain extent surprising, given that the LL93 3 test is a pooled test and the IPS LM ,3 test is a groupmean test. The observation concerning the relative performance of the IPS LM ,3 and the LL93 3 tests, with the latter starting to outperform the former for T ≥ 15, also holds for c < −0 4. As for case 2, it is worth noting that the divergence problem also occurs for the MW 3 test (see the upper right graph in Figure 4 for an example) despite being developed for fixed N inference. Performance improvements can be realized by varying the number of lagged differences included in the regressions, similar to case 2. We find again that the relative size of T and N has significant influence on the results obtained for small T . This observation has to be stressed again, although it is essentially a direct consequence of the construction of the tests; cf., Table 1.
Power of Panel Unit Root Tests
The discussion of the power of the panel unit root tests against the stationary alternative is not based on so-called size-corrected critical values. This follows from the fact, discussed in detail in Horowitz and Savin (2000), that size correction based on arbitrary points in the set of feasible DGPs under the null hypothesis in general leads to empirically irrelevant critical values. 19 The problem arises because the actual type I errors (of any of the unit root tests) vary substantially across integrated ARMA processes. Therefore, size corrections-which hence should correctly be labeled type I error corrections for given DGP-do not necessarily lead to insights that can be generalized.
Consequently our power analysis is based on the asymptotic critical values. Horowitz and Savin (2000) discuss situations when bootstrap-based critical values lead to considerable power gains; this is not discussed here, but the interested reader will find bootstrap applications of the tests discussed in this paper in Wagner and Hlouskova (2004). The bootstrapbased inference in that paper often leads to different conclusions from inference based on asymptotic critical values. A precise theoretical analysis concerning the validity of bootstrap inference in nonstationary panels is yet to be provided.
Before we discuss the results, let us start by summarizing a few general observations. First, which may be not surprising, power is monotonically increasing in N for all DGPs simulated under the stationary alternative hypothesis for all values of T (see, for example, Figure 5). Note, however, that power does not increase monotonically in T for given N . This occurs for relatively small values of T and N , when assumes values close to 1. For larger values of T , power increases when T is increased further for any value of N . Most notably the LL93 test is subject to nonmonotonicity of power in T .
We start our discussion again with case 2; see Figure 5. In this figure we display the power of the panel unit root tests for = 0 8 and T ∈ 10, 25, 100 . The upper row shows the case with serially uncorrelated errors and the lower row displays the case c = −0 4. The figure clearly displays one representative result, namely the effect of the value of c on the ordering of the tests with respect to power. The highest power curve corresponds throughout to either the UB 2 or the LL93 2 test (also for parameter choices not displayed in figures here). For the larger values of T it is generally the UB 2 test that has highest power, whereas the LL93 2 test has highest power in many cases for smaller values of T . Corresponding to FIGURE 5 Power of panel unit root tests for case 2 with = 0 8 (DGP 2 ( , 0 8, c)) for c ∈ 0, −0 4 and T ∈ 10, 25, 100 . The solid lines with bullets correspond to LL93 2 , the solid lines with triangles correspond to UB 2 , the solid lines correspond to IPS t ,2 , the dash-dotted lines correspond to IPS LM ,2 , and the dashed lines correspond to MW 2 . the sensitivity discussed in the previous subsection, altering the lag lengths in the ADF regressions can be used to improve the power performance of the Levin et al. (2002) test. The most variable power performance is observed for the MW 2 test, ranked from second-to-last place without any detectable dependence upon sample size or parameters (see Figure 5). For the two group-mean tests of Im et al., power is comparatively low for small values of T (this is most likely a consequence of the group-mean construction of the test statistic) but is in general quite appealing for larger values of T . However, the UB 2 test is for those large panels the most powerful test. Note also that for T ≥ 100 even for N = 10 all tests have power equal to 1, for ≤ 0 9. For even larger values of ∈ 0 95, 0 99 , N ≥ 50 is required to have power tending to 1 when T ≥ 100. The previous observations hold both for c = 0 and c = 0.
For case 3, the results are less clear than for case 2. There are panel dimensions (T , N ) and parameter values ( , c) for which each of the five tests has highest power. Some clear observations emerge only for T = 10, where the LL93 3 test is the most powerful test for c = 0 and the MW 3 test is the most powerful test for c = 0. The latter is the second most powerful test when T = 10 and c = 0. This is a bit surprising, since the Maddala and Wu (1999) test is a group-mean test. The UB 3 test performs relatively well, but not as well compared to the other tests as in case 2. Also for case 3 power is basically equal to 1 for all tests for all values of N for values of up to 0.9 for T ≥ 100. Detailed graphical results of the power of the panel unit root tests for case 3 are available from the authors upon request.
Size of Panel Stationarity Tests
Some representative results for the size behavior of the stationarity tests of Hadri (2000) and Hadri and Larsson (2005) are displayed in Figure 6 for case 2. The figure displays the size of both tests as a function of ∈ 0, 0 1, 0 2, 0 3, 0 4, 0 5 and c ∈ 0, −0 4 for T = 25. Remember from the discussion in Section 2.2 that the H T test of Hadri and Larsson (2005) is based on finite T inference. One of the aspects we want to compare is the relative performance of the H LM test and the H T test. Focusing on this aspect first, we find that only for the case = 0 and c = 0 do substantial differences between these two tests occur. This holds not only for T = 25 shown but for all values of T . As expected, for larger values of T , the differences become smaller. The explanation for this result is that the nonparametric estimation to correct for serial correlation is too imprecise to result in improved size performance, since advantages of the H T test only materialize in the single case where no serial correlation corrections are required.
The second general observation that is exemplified in Figure 6 is that c = 0 leads to larger size distortions than c < 0, as shown with the example c = −0 4 in the figure. This finding can be explained by noting that our generated processes are for = 0 4 and c = −0 4 white noise processes, since the AR and the MA root are cancelled in this case. Thus it seems that only for processes close to white noise is the size of the tests acceptable. 20 This is bad news since, in case of stationary autoregressive time series with strong serial first-order autocorrelation, the tests have basically size 1. This finding also holds for larger values of T . Our observation, however, can explain to a certain extent the fact that an application of the panel stationarity tests à la Hadri (2000) often leads to a rejection of the null hypothesis. Even for "highly stationary panels" as displayed in the figure, the null is rejected in almost all replications (unless the AR and the MA root are nearly or exactly cancelled). In other words, the Hadri (2000) and the Hadri and Larsson (2005) test can be "used to find unit roots" (although, of course, strictly speaking a rejection of the null hypothesis does not imply acceptance of the alternative hypothesis).
Note that qualitatively entirely similar findings are obtained for case 3, which we therefore do not discuss separately.
Power of Panel Stationarity Tests
We finally briefly discuss the power of the panel stationarity tests. The size results (rejection of stationarity for many cases) already allow for predictions concerning the behavior of the power function. First, power will be low for small T and processes "close" to white noise. "Close" to white noise here means that the MA coefficient is close to −1, so that the unit root is nearly cancelled. This is exactly what happens, see the graphical results for case 2 in Figure 7 that show exactly what has just been discussed. Note that similar results are available for case 3 upon request. Summing up: The high power stems from the fact that the Hadri (2000) and Hadri and Larsson (2005) tests tend to reject stationarity most of the time even for highly stationary series. It is thus not a surprise that stationarity is also rejected for unit root series. It is only the general observation that it is hard to detect nonstationarity in short time series that reduces power (and size) of the tests for small T ∈ 10, 15 .
CONCLUSIONS
The strongest and most unequivocal conclusion from our simulations is that the panel stationarity tests of Hadri (2000) and Hadri and Larsson (2005) perform very poorly. This is to a certain extent similar to the often observed poor performance of the Kwiatkowski et al. (1992) test, which is the time series building block of these tests. The null hypothesis of stationarity is rejected as soon as sizeable serial correlation of either the autoregressive or the moving average type is present.
The picture that emerges for the panel unit root tests is much more differentiated and only a few clear-cut patterns emerge (which is itself an interesting observation). Some of the main findings are: First, for case 2 (with intercepts under stationarity) the best power behavior is displayed by either the Levin et al. (2002) test or by the Breitung (2000) test. Second, for serially uncorrelated panels the Harris and Tzavalis (1999) implementation of the Levin et al. (2002) test offers substantial improvements for short panels. The third clear message that emerges from the simulations is that for short panels, size and power problems emerge when the cross-sectional dimension is too large, i.e., when T is too small compared to N . This finding is in line with the fact that most test statistics are based on sequential limits with first T → ∞ followed by N → ∞. However, the test of Maddala and Wu (1999), developed for fixed-N inference, does not show superior performance with respect to variations of N , e.g., concerning size divergence as a function of N . Fourth, as expected, the size distortions become larger when the moving average coefficient c → −1. Across our simulations the value of c = −0 4 has emerged as a "boundary" case for which at least some tests exhibit satisfactory behavior (for T ≥ 25 and all values of N ). Taking a rough average over all experiments, the Levin et al. (2002) and Breitung (2000) tests have the smallest size distortions. However, there is large variance around this result, and there are constellations where, e.g., the Levin et al. (2002) has very rapid size divergence. Combined with the good power performance (notably for case 2), these two tests appear grosso modo quite favorable. All our results generalize to the simulation experiments performed with cross-sectionally dependent panels. The extent of robustness of the performance ranking across tests to the two discussed cross-sectional covariance structures is remarkable.
At this point, however, we have to note again that the group-mean tests of Im et al. (1997Im et al. ( , 2003 and of Maddala and Wu (1999) are to a certain extent disadvantaged in our simulation study. This stems from the fact that we simulate (up to the intercepts and trend slopes) homogeneous panels under both the null and the alternative. For such panels pooling is apparently both advantageous and straightforward. When comparing only the group-mean tests, we do not find a stable ranking over parameter values and sample sizes, neither with respect to size nor with respect to power. However, only a detailed analysis with heterogeneous panels will allow us to understand the relative performance of these tests for situations where the additional degree of freedom they offer (the heterogeneous alternative) is utilized.
The impact of lag length selection in the ADF type regressions, which has found to be "nonmonotonic" in c, is an open issue for future research. By nonmonotonicity we mean the observation that for c close to 0 smaller lag lengths than suggested by BIC lead in many cases to improved performance, whereas for values of c close to −1 a larger number of lagged differences than suggested by BIC often leads to improvements. A priori such behavior is not expected. In this respect also the influence of the time dimension of the panel on this observation has to be investigated further.
Finally, the variability of the results over the parameters, observed not only for small but also for large panels, suggests that substantial performance improvements might be realized by relying upon consistent bootstrap inference. However, it is well known from the time series literature that bootstrap consistency is a delicate issue in unit root situations. Similar problems will arise in panels, in particular also in panels with cross-sectional dependencies. This is probably one of the most important problems to be solved in the panel unit root test literature for practical purposes. The solid lines with bullets correspond to LL93 2 , the solid lines with triangles correspond to UB 2 , the solid lines correspond to IPS t ,2 , the dash-dotted lines correspond to IPS LM ,2 , and the dashed lines correspond to MW 2 . The left figures display the results for = 0, i.e., for the cross-sectionally uncorrelated case, the figures in the center display the results for = 0 6, and the right figures display the results for = 0 9.
FIGURE 9
Size of Hadri (2000) and Hadri and Larsson (2005) stationarity tests for case 2 with Toeplitz cross-sectional correlation. The errors are serially correlated with c = −0 4 and T = 25 and ∈ 0, 0 5 . The results for = 0 are displayed with solid lines, for = 0 3 with dashed lines with stars, for = 0 6 with dash-dotted lines, and for = 0 9 with dashed lines. | 2019-04-12T13:52:59.456Z | 2006-06-01T00:00:00.000 | {
"year": 2006,
"sha1": "c0d9763193087845e5c6c870343f6893d4f71015",
"oa_license": "CCBY",
"oa_url": "https://boris.unibe.ch/145655/1/dp0503.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "21323f8b513ee998f384a3be437586906c51fc0e",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Engineering",
"Economics"
]
} |
73625543 | pes2o/s2orc | v3-fos-license | A chiral magnetic spiral in the holographic Sakai-Sugimoto model
We investigate the effect of a magnetic field on the vacuum of low-temperature QCD at large-N_c in the presence of a chiral chemical potential, using the holographic Sakai-Sugimoto model. Above some critical chemical potential we find an instability, which triggers a decay of the homogeneous vacuum to a non-homogeneous configuration with a spiral form, which we construct explicitly. We find that this decay is suppressed for sufficiently large magnetic field, and determine the critical strength. We find that the new vacuum does not exhibit the chiral magnetic effect. This is related to the way the chiral chemical potential is introduced. We discuss an alternative way of introducing the chiral chemical potential that leads to a nonzero chiral magnetic effect.
Introduction and discussion
Recent work by various authors has provided evidence that in QCD, a magnetic field leads to several interesting phenomena, for values of the magnetic field which are substantially smaller than anticipated in earlier studies. The origin of these effects lies in the chiral axial anomaly and the additional couplings it provides. For instance, it has been shown [1] that the axial anomaly is responsible for the appearance of pion domain walls, which carry baryon charge. These are stable for sufficiently large baryon chemical potentials. Importantly, they form for relatively small critical values of the magnetic field, as this critical value scales with the pion mass.
In the presence of an axial chemical potential [2,3], an external magnetic field can trigger the appearance of a vectorial current in the direction of this field, (1.1) The origin of this so called chiral magnetic effect (CME) is the asymmetry between left-and right-handed fermions (parametrised by the axial potential µ A ), which leads to separation of electric charge. Related to this is the chiral separation effect [4][5][6], where instead of an axial potential, a baryonic quark potential is introduced. The magnetic field now leads to a separation of chiral charge, and an axial current is generated, Combining these two effects leads to evidence for the existence of a so-called chiral magnetic wave [7]. All these effects suggest that the consequence of a magnetic field may be much more important (and potentially easier to see in experiments) than previously thought. These effects were originally derived using a quasi-particle picture of chiral charge carriers, which is only valid at weak coupling. They have meanwhile also been confirmed in a number of holographic descriptions of the strong coupling regime. The pion gradient walls were found in the Sakai-Sugimoto model in [8,9] (see also [10]), and there are even abelian analogues of this solution [11] which involve an η -meson gradient (and of course no baryon number in this case). Whether or not holographic models exhibit the CME or CSE is subject of more controversy, as it requires a careful definition of the holographic currents. An analysis of the D3/D7 system was given in [12,13] and we will comment extensively on the Sakai-Sugimoto model later in this paper.
A major question is what happens when the chemical potentials are large enough so that they trigger a condensation of bound states. Even without a magnetic field and without an axial anomaly, various models predict the formation of a non-isotropic (although homogeneous) vector meson condensate, once the axial chemical potential becomes of the order of the meson mass (see [14] for an analysis in the Sakai-Sugimoto model relevant here, and a list of references to other works). When the axial anomaly is present, the condensate is typically no longer even homogeneous, but forms a spiral structure [15]. A lot of work in this direction focuses on high temperature models, but in fact such condensation already happens at low and zero temperatures [14,15].
In the present paper, we therefore examine the Sakai-Sugimoto model in the presence of a magnetic field and an abelian chiral chemical potential, and set out to determine whether there is an instability against decay to a non-homogeneous chiral spiral also in this case. We will also analyse whether the ground state exhibits a chiral magnetic effect, and whether there is an η -gradient in this case. We will only consider the confined chirally broken phase of this model, i.e. the low-temperature behaviour.
Our main results are as follows. First, we determine the location of the phase boundary between the homogeneous solution of [8,9] and a new chiral spiral like solution 1 . We construct the non-homogeneous chiral spiral like solution explicitly. 2 We establish that the latter still does not exhibit the CME, but point out that this conclusion relies crucially on subtleties related to the precise way in which the chemical potential is introduced. Finally, we show that a magnetic field tends to stabilise the homogeneous solution, and there is a critical value beyond which the chiral spiral ceases to exist. The parameter space in which we need to scan for non-linear spiral solutions is rather large and the numerical analysis is consequently sometimes tricky; we comment on details of this procedure in the appendix.
An important issue in this analysis is the precise way in which the chemical potential is included in the holographic setup. It was recently pointed out [13,16] that the usual approach, which roughly amounts to identifying the chemical potential with the asymptotic value of the gauge field, may be incorrect in case the associated charge is not conserved (like, in our case, the axial charge). However, we can define µ A as the integrated radial electric flux between the two boundaries of the brane. Following the field theory arguments of [16] we will argue that there are then two natural formalisms for introducing the chemical potential. In formalism A we introduce µ A through boundary conditions in the component A 0 of the gauge field. In formalism B the chemical potential instead sits in A z . In the presence of the axial anomaly these two formalisms are inequivalent.
In formalism A, we find that the non-homogeneous phase characterised by wave number k, is dominant and there is a preferred value of this wave number k = k g.s. . When the magnetic field is small compared to the chiral chemical potential B µ A , k g.s. depends only very weakly on the value of µ A . This result is consistent with our previous work [15]. For sufficiently large magnetic field we find that the nonhomogeneous ground state is suppressed. In formalism B, on the other hand, the homogeneous state always has lower energy than the non-homogeneous one, and there is hence no chiral magnetic spiral.
As far as the chiral magnetic effect is concerned, we find that in formalism A the effect is absent. This result is consistent with previous calculations on the Sakai-Sugimoto model [17]. In formalism B, on the other hand, there exists a non-zero chiral magnetic effect. This seems to be more in line with recent lattice calculations [18], chiral model calculations [19] and holographic bottom-up models [20,21] for the confined chirally broken phase of QCD. However, a main shortcoming of formalism B is that the inhomogeneous phase, whose existence is indicated by our perturbative analysis, is absent in the full nonlinear theory. Since the perturbative analysis is blind to the subtle issues on how one introduces the chemical potential in the holographic setup, we are inclined to think that, as it stands, formalism B is incorrect. Whether formalism A is correct on the other hand, or also needs to be altered, is an open question at the moment, and needs a separate study, which we leave for future work.
2 Chemical potentials, currents and anomalies
Effective five-dimensional action
In order to set the scene and to introduce our conventions, let us start by giving a brief review of the basics of the Sakai-Sugimoto model [22,23]. For a more detailed description of the features of this model which are relevant here, we refer the reader to the original papers or to [15].
The Sakai-Sugimoto model at low temperature consists of N f flavour D8-branes and N f anti-D8-branes which are placed in the gravitational background sourced by a large number N c of D4-branes which are compactified on a circle of radius R. In the simplest set up, the probes are asymptotically positioned at the antipodal points of a circle, while in the interior of the geometry they merge together into one U-shaped object. The gauge theory on the world-volume of the probe brane is nine-dimensional, containing a four-dimensional sphere, the holographic direction and four directions parallel to the boundary. By focusing on the sector of the theory that does not include excitations over the S 4 , one can integrate the probe brane action over this sub-manifold and end up with an effective five dimensional DBI action on the probe brane world-volume. By expanding this action to the leading order in the string tension, one ends up with a five dimensional Yang-Mills theory with a Chern-Simons term. For N f = 1 we can write the action as where the indices are now raised or lowered using the effective five dimensional metric g mn . This metric is defined as where K z ≡ (1 + z 2 ). The x µ directions are parallel to the boundary and z is the holographic direction orthogonal to the boundary. The coupling constants κ and α are given in terms of the number of colours N c , the compactification mass scale M KK and the 't Hooft coupling λ by 3) The Chern-Simons term written in (2.1) is valid only for a single D8 probe, i.e. for a U (1) gauge theory on the brane world-volume.
Symmetries and chemical potentials
Holographic models encode global symmetries of the dual gauge theory in the form of gauge symmetries in the bulk theory, and these relations hold both for the closed as well as the open sectors of the string theory. The same is true for the Sakai-Sugimoto model at hand, where there are two independent gauge theories living near the two boundaries of the flavour D8-D8 brane system. These U (N f ) L and U (N f ) R gauge symmetries correspond to the global U (N f ) × U (N f ) R flavour symmetries of the dual gauge theory. In the low-temperature phase of the Sakai-Sugimoto model that we are interested in, the two branes are connected in the interior of the bulk space, and thus the gauge fields A L M and A R M are limits of a single gauge field living on the two connected branes. Therefore, one cannot independently perform gauge transformations on these two gauge fields, but is constrained to gauge transformations which, as z → ±∞, act in a related way. Specifically, since near the boundary a large bulk gauge transformation acts as A L/R → g L/R A L/R g −1 L/R , then clearly any state (and in particular the trivial vacuum A = 0) is invariant under the vectorial transformations, i.e. those transformations for which g L = g R . This means that the vector-like symmetry is unbroken in this model. On the other hand, the fact that the branes are joined into one U-shaped object means that the axial symmetry is broken, which corresponds to the spontaneous breaking of the axial symmetry in the dual gauge theory. The corresponding Goldstone bosons can be seen explicitly in the spectrum of the fluctuations on the brane world-volume.
The relationship between the bulk gauge field and the source and global symmetry current of the dual gauge theory is encoded in the asymptotic behaviour of the former. More precisely, the bulk gauge field A M (z, x µ ) behaves, near the boundary and in the A z = 0 gauge, as Here ρ ν parametrises the normalisable mode, while a ν (x µ ) describes the non-normalisable behaviour of the field. The latter is interpreted as a source in the dual field theory action, where it appears as Hence the expectation value of the current corresponding to the global symmetry in the gauge theory is given by .
When the bulk action is just the ordinary Yang-Mills action (in curved space), this expectation value is the same as the coefficient ρ ν in the expansion (2.4). However, in the presence of the Chern-Simons term, the coefficient ρ ν is different from the current, as we will explicitly demonstrate for the system at hand in section 2.3. This difference has been a source of some of confusion in the literature. Its importance has recently been emphasised in the context of the chiral magnetic effect in the Sakai-Sugimoto model in [24]. From (2.5) we also see that adding a chemical potential to the field theory corresponds to adding a source for J 0 , which implies the boundary condition for the holographic gauge field A ν (x) = µδ ν0 .
For the Sakai-Sugimoto model, the bulk field A m living on the D8-branes has two asymptotic regions, corresponding to each brane, and hence there are two independent chemical potentials µ L and µ R which can be separately turned on. Instead of left and right chemical potentials one often introduces vectorial and axial potentials, defined respectively as µ B = 1 2 (µ R + µ L ) and µ A = 1 2 (µ R − µ L ). For the N f = 2 case, the vectorial and axial chemical potentials for the U (1) subgroup of the U (2) gauge group on the two D8-branes correspond to the baryonic and axial chemical potential in the dual gauge theory, while the non-abelian SU (2) chemical potentials are mapped to the vectorial and axial isospin potentials.
In what follows we will also be interested in studying the system in the presence of an external (non-dynamical) magnetic source, which will be introduced by turning on a nontrivial profile for the non-normalisable component a ν (x) of the bulk field.
The Chern-Simons term, anomalies and the Bardeen counter term
The symmetries and currents of the model discussed in the previous section are, however, valid only at leading order in λ −1 . At the next order in λ −1 , the Yang-Mills action on the brane world-volume receives many corrections, among which the Chern-Simons term. This term turns out to be crucial for the existence of a nontrivial ground state of the system in the presence of the external magnetic field, a situation which we will study in the following sections. On manifolds with boundaries, however, the Chern-Simons term is manifestly gauge non-invariant and it also spoils conservation of the vectorial and axial currents. Both of these reflect the fact that the Chern-Simons term is the holographic manifestation of the vector and axial gauge anomalies in the dual theory.
Apart from the Chern-Simons term there are of course also various other corrections. One important class of terms is coming from the expansion of the DBI action. In previous work [15] we have seen that the qualitative picture of chiral spiral formation at vanishing magnetic field does not change when these corrections are taken into account. We will assume that this holds true here as well. For the other higher derivative corrections, it is important to note that we will find that our ground state has a momentum scale of order M KK , not 1/l s , so higher derivatives can typically be ignored as long as M KK l s is small.
Returning to the role of the Chern-Simons term in describing anomalies, let us start by decomposing the gauge field in terms of the axial and vectorial components, as These two components transform under under an inversion of the holographic coordinate z → −z as Furthermore, the µ component of these fields are related to the dual gauge fields as where the non-calligraphic A V /A µ (x) are the boundary vector and axial-vector gauge fields. When written in terms of these vectorial and axial potentials the action reads We now want to compute the currents following the prescription (2.6). The variation of the action can be written as where the derivatives of the Lagrangian are given by (2.12) Note that P m V /A are antisymmetric in m ↔ . Imposing that the bulk term in the variation (the first line in (2.11)) vanishes gives the equations of motion, (2.13) We will need these shortly to show that the currents are not conserved. The boundary term in the action variation can be written as This implies that the holographic vector and axial currents are given by the expres- where we have used the boundary conditions (2.9) and F V /A µν Using the equations of motion (2.13) we can explicitly show that these two currents are not conserved due to the presence of the Chern-Simons term. One finds We see two things here. Firstly, the anomaly (i.e. the right-hand side of above equations) is indeed sourced by the Chern-Simons term. Secondly, the anomaly is present only if the boundary value of the bulk gauge field strength is non-vanishing.
In other words, only in the presence of an external field does the anomaly show up. While the axial anomaly is not problematic (it just reflects the fact that this symmetry is no longer present in the dual quantum field theory), this is not the case with the vectorial symmetry. In QED coupled to chiral fermions one has to require that the vector current is strictly conserved, since its non-conservation would imply a gauge anomaly. It is possible to make the vectorial current conserved by adding extra boundary terms to the action, as was first shown by Bardeen in [25].
In the holographic setup, one expects that such a Bardeen-type counter term will appear, but this type of term should come from the requirement that the full theory in the bulk in the presence of the boundary is gauge invariant under vectorial gauge transformations. Let us therefore consider a generic vectorial gauge transformation in the bulk where Λ V (x, z) is a function even in z. Under this transformation, the action (2.10) is invariant up to a boundary term, Therefore, if we want to impose invariance of the action under (2.17) we need to add the anomaly counter term correction We see that there are two contributions of this surface term: at the "holographic" boundaries z → ±∞ and at the boundary at spatial infinity | x| → ∞. Its contribution at holographic infinity is indeed the Bardeen counter term as derived in quantum field theory [25], and which was, in the holographic setup, initially postulated (added by hand) in [24]. We see that its presence automatically follows from the requirement of classical gauge invariance of the bulk theory in the presence of the boundary. The contribution at spatial infinity, would typically vanish, as all physical states in the system are localised in the interior. However, in the presence of external sources which generate an external magnetic field that fills out the whole four-dimensional space, like the one we will be considering, it is not a priori clear if this is true, and one has to be careful about possible extra contributions to the action and currents. It will however turn out that for our non-homogeneous ansatz, these extra terms are irrelevant and that only the Bardeen counter term is non-vanishing.
Let us continue by showing the effect of adding the S an. term on the currents. The total action now reads The variation of this new action can again be written in the form (2.11), withL and P , instead of L and P , i.e. (2.21) The equations of motion are of course unchanged (as the Bardeen counter term is only a surface term), while the new currents are obtained as Explicitly, the expressions read (2.23) The divergences of these currents are where we used again the equations of motion (2.13). This clearly shows that the vector current is now conserved, while the anomaly is seen only in the axial sector. When the coefficient α is taken from string theory, the non-conservation of the axial current is exactly the same (including the numerical factor) as in QED coupled to external fermions [26]. In what follows we will work with these renormalised currents and action. We should also emphasise that when one considers chemical potential for a symmetry which is anomalous (as it is case here), just knowing the corrected action (2.20) may not be enough. Also, one has to be careful about the boundary conditions one has to impose on the states, as not all boundary conditions are allowed. We discuss these subtle issues in section 3.4.
The corrected Hamiltonian
The Lagrangian density of the action after inclusion of the anomaly counter term can be written as The conjugate momenta associated with the vector and axial gauge fields thus take the formΠ (2.26) We then obtain the on-shell Hamiltonian asH =H Bulk +H Bdy , where the two contributions read (2.28) Here we have used the gauge field equations (2.13) for the time component = 0 (generalised Gauss law).
The spatially modulated phase
Having settled the issue of how to deal with the Chern-Simons term in the presence of external fields for the Sakai-Sugimoto model, we now want to find the ground state of the system, at strong coupling, in the presence of external magnetic field and non-vanishing axial and baryon chemical potentials. Based on the weak coupling, partonic, arguments which were mentioned in the introduction, we expect that the ground state should be a chiral magnetic spiral like configuration. In particular, we expect it to be non-homogeneous. So far, for Sakai-Sugimoto model a non-homogeneous ground state was constructed in the presence of large enough axial chemical potential but with no external fields in [15] (at low temperature; see [27] for a high-temperature analysis). The main reason why such a state appeared was due to the nontrivial Chern-Simons term.
We now want to see if this state persists and how it is modified once the external magnetic field is introduced.
Magnetic ground state ansatz in the presence of chemical potentials
We are interested in studying the ground state of the system at non-zero axial chemical potential µ A and non-zero baryon chemical potential µ B . In addition, we will turn on a constant magnetic field in the x 1 direction B = Bx 1 . The boundary conditions associated with this physical scenario are We will see that in fact our ansatz is insensitive to the baryon chemical potential, but we will keep it in the formulas for a little while longer.
Let us now consider our particular ansatz, in the gauge A z = 0. First, we want to introduce the vectorial and axial chemical potentials, hence we turn on A 0 = f (z) with the above boundary conditions. Second, we need to introduce the (constant) magnetic field in the boundary, hence we turn on a component of A transverse to the direction of B: A T B (x 2 , x 3 , z). In principle this function could depend on z, but the Bianchi identity tells us that in fact the magnetic field is independent of z. Therefore, Next by looking at the equations of motion for the A L B component parallel to the magnetic field, we see that this component also has to be turned on, and is only a function of z, Finally, we expect that a chiral magnetic spiral will appear in the direction transverse to the external magnetic field i.e. it represents the chiral wave transverse to the boundary magnetic field satisfying where k = ±| k| and k is the spatial momentum. So in summary our ansatz is given by where the fields satisfy the boundary conditions The Gauss law, i.e. the zeroth component of the equation of motion (2.13), is automatically satisfied for our ansatz. The remaining equations reduce, after integrating one of them with integration constantρ, to Restricting now to the metric (2.2), and substituting eq. (3.8) into (3.9) and (3.10) we obtain our master equations, and we have also introduced a set of dimensionless variablesf ,ĥ,â,k,ρ andB defined by withλ = λ/(27π). These coupled equations are in general not solvable analytically, except in the special case k = 0, which we will review next.
Review of the homogeneous solution
Before embarking on the full task of finding the non-homogeneous solutions to the equations of motion, we will in this section first review the homogeneous solution (i.e. the solution for which k = 0) in the presence of a constant magnetic field. This is an abelian version of the solution first constructed in [8,9], see also [17]. In the homogeneous case there is no transverse spiral, i.e. h = 0 and the equations of motion simplify to ∂zf =ρ −Bâ , withz = arctan z. These equations can be integrated exactly, and the solution takes the formâ whereĈ A ,Ĉ B ,ρ andf 0 are four integration constants for the two second order differential equations. The corresponding field strengths take the form (3.19) and C A/B =λM KKĈA/B . In terms of the baryonic and axial chemical potentials the boundary condition on f (z) reads (3.20) From the expression for f (z) these potentials are related toĈ A andĈ B bŷ where µ A/B =λM KKμA/B . In analogy with the analysis of [1] we are here interested in configurations in which there is a non-vanishing pion (or rather, η ) gradient in the direction of the external field. Since we are working in the A z = 0 gauge, a pion field will appear as part of the axial, non-normalisable component of all A µ 's (see [22,23]). Hence, we impose the boundary condition 3 We should note here that this is not the most general boundary condition for the field a(z), since we have set the even part to zero. Having thus reduced the parameter space to the set {µ A , µ B , j}, we have the relationŝ where j =λM KKĵ . Using these relations we can rewritef (z) andâ(z) aŝ (3.24) Since the constant µ B does not appear in the Hamiltonian, it is effectively a free parameter, which we are free to set to zero. As expected, we find that the baryon chemical potential has no effect on this abelian system. In contrast, minimising the Hamiltonian will impose a constraint on the axial chemical potential and j, as expected for physical systems.
In summary, we have two physical boundary values, µ A and j, which are implicitly expressed in terms of the two parameters C A and C B . At any given fixed values of µ A and j, we will want to compare the homogeneous solution given above to possible non-homogeneous condensates and determine which of the two has lower energy.
Currents for the homogeneous and non-homogeneous ansatz
The work of [17,24], which studied the homogeneous solution discussed above, resulted in the interesting conclusion that there is no chiral magnetic effect present in the holographic Sakai-Sugimoto model. There are some subtleties with this which were pointed out in [16], to which we will return shortly. However, their result also leaves the open question as to whether there is a chiral magnetic effect for more general solutions to the equations of motion, for instance the non-homogeneous ones which we consider here.
So even before we find the full non-homogeneous solution, an important lesson might be learnt from an evaluation of the corrected holographic currents (2.23) for the non-homogeneous ansatz. For our ansatz (3.6), the corrected currents becomẽ where we have used the gauge field equations for the components = 0 and = 1. From (3.25) we see two important facts. First, the density of particles carrying baryonic charge is zero. This confirms once more that there is nothing baryonic in the solutions under consideration, in agreement with the fact that the baryon chemical potential µ B decouples completely.
The second observation is that the component of the vector current in the direction of the external magnetic field is zero. This is in sharp contrast to what one would expect if there was a chiral magnetic effect present. We should, however, emphasise that it is possible to define corrected currents which are different from those above, if one decides to deal with the anomalous symmetry in a different way, see section 3.4. However, this alternative method, although it produces the chiral magnetic effect, suffers from other shortcomings, as we will explain.
The expressions (3.25) simplify further when we restrict to the homogeneous ansatz, for which one obtains We should emphasise once more that the vector (baryonic) currents vanish only when the contribution from the Bardeen term in the action is properly taken into account. One could in principle evaluate an "abelianised" version of the baryon number (second Chern class), ∼ F 3z F 12 . However, this expression is strictly speaking valid only in a nonabelian system, and in this situation one expects that the charge computed using the corrected conserved currentJ 0 V should coincide with this topological number.
Our analysis shows that the fact that there is no chiral magnetic effect in the Sakai-Sugimoto model is not due to the simplified homogeneous ansatz, and that it is also not a consequence of the details of any numerical solution which we will present later; rather, the chiral magnetic effect is absent for the entire class of solutions captured by the ansatz (3.6). The contribution of the Bardeen terms required to make the vector currents conserved is crucial for the absence of the chiral magnetic effect.
We finally see that, in contrast to the homogeneous solution, where no vector currents are present at all, the non-homogeneous system exhibits transverse vector currents. This shows some resemblance to the chiral magnetic spiral of [28].
Chemical potentials for non-conserved charges
Given the rather convincing quasi-particle picture of the origin of the chiral magnetic effect at weak coupling [2,3], it is somewhat surprising that it is not present in the model at hand at strong coupling. A reason for this discrepancy has been suggested in [16], in which it was emphasised that one should be careful in computing the effects of a chemical potential in theories for which the associated charge is not conserved.
The main observation made in [16] is that there are two ways to introduce a chemical potential into a thermal quantum system. One is to twist the fermions along the thermal circle, i.e. to impose instead of the usual anti-periodic boundary condition. This is what one would do at weak coupling. It was called the "B-formalism" in [16]. The other way is to keep anti-periodic boundary conditions, but instead use a shifted Hamiltonian, This coupling can be viewed as a coupling to a gauge field for whichà 0 = µ A . We will refer to this as the "A-formalism" (in which we will temporarily put tildes on all objects, as above, for clarity). For a non-anomalous symmetry, these two formalisms are equivalent, and one can go from the B-formalism to the A-formalism using a gauge transformation involving the external gauge field, with parameter θ A = −µ A t, relating the gauge fields in the two formalisms according to In terms of our holographic picture, this gauge transformation acts directly on the 5d gauge field, with a parameter Θ A (z) that is z-dependent, which again acts on the gauge field as 4 The difference between the two formalism can thus be formulated in a clear holographic language as well: in the A-formalism the ansatz hasà 0 asymptoting to the chemical potential, whereas in the B-formalism the ansatz instead has this chemical potential stored in the A z component. The chemical potential is then best written as which is nicely gauge invariant and independent of the formalism used. This is all clear and unambiguous when the symmetry is non-anomalous. However, in the presence of an anomaly, one cannot pass from the formalism defined by (3.27) to that defined by (3.28). This is what happens in our system: as we discussed above, the 5d action after the anomaly correction (2.20) is not invariant under a chiral gauge transformation, The anomaly implies that one of the two formalisms is incorrect. The point of view of [16] is that there are strong indications that the B-formalism is the correct one in field theory. If one insists on computing with untwisted fermions, one needs to perform a gauge transformation, which not just introduces the chemical potential into A 0 , but also modifies the action and Hamiltonian to correct for the fact that the action is not gauge invariant. To be precise, when one uses untwisted fermions, the action that one should use is the gauge-transformed action, which differs from the original one by the anomaly. This was called the "A'-formalism" (note the prime) in [16].
Let us see how this logic works for the Sakai-Sugimoto system under consideration here. The idea is that if we want to identify the chemical potential with the asymptotic value of Ã_0, we should be working not with S̃[Ã] but rather with S̃[Ã] + S̃_Θ[Ã]. (The boundary gauge transformation parameter θ_A(x) is related to the 5d parameter Θ_A(z) through its boundary values, with the perhaps somewhat inconvenient signs following from our relation between the bulk and boundary gauge fields (2.9).) In order to compute the currents, we need the variation of the Θ_A term, whose contribution to the holographic currents can be obtained using the standard dictionary; we then obtain (3.37). These expressions are independent of the particular function g_A(z) which one chooses in (3.30). For our ansatz (3.6) with the boundary conditions (3.7), the result shows a very promising feature: any solution in this class will now exhibit the chiral magnetic effect, as there is a non-vanishing J̃^1_V component. Unfortunately, we will see later that things are more subtle at the level of the Hamiltonian. There are two main problems when writing down the A'-formalism for quantities more complicated than the currents. Firstly, bulk quantities such as the Hamiltonian will typically depend on g_A(z), not just on its asymptotic values. Secondly, it will turn out that even for a 'natural' choice of g_A(z), for instance g_A(z) = −f_A(z), the new Hamiltonian has the property that it does not lead to a minimum for non-homogeneous configurations. To see this requires some more details about the condensate, but let us here already present the expression for the corrected Hamiltonian. It can be written as the sum of H̃_Bulk(Ã) and H̃_Bdy(Ã), given in (2.27) and (2.28), plus a Θ_A term. For our ansatz (3.6) with the boundary conditions (3.7) the theta term in the Hamiltonian reduces to a boundary term, and in the homogeneous case this boundary term can have drastic consequences for the phase structure of the theory: we will see in section 3.6 that it disfavours non-homogeneous configurations. We should emphasise that the procedure for introducing the A'-formalism in the holographic context is, unfortunately, rather ambiguous. This is essentially because the required boundary condition on the gauge transformation parameter Θ_A(z), given in (3.31), does not uniquely specify the behaviour of Θ_A(z) in the bulk. In contrast, this kind of ambiguity does not appear in field theory, as the gauge transformation which untwists the fermions and "moves" the chemical potential into the temporal component of the gauge potential is unique.
Instead of using the A'-formalism, one could consider doing the holographic computation directly in the B-formalism. This would require writing the solution in the A_0 = 0 gauge (the fermions are not directly accessible, so it is unclear whether additional changes are required to implement the twisting). This does lead to a CME, but the Hamiltonian again turns out to have no minimum for non-homogeneous configurations. Some details are given in appendix A.2.
Perturbative stability analysis of the homogeneous solution
In this section we will perturbatively analyse the stability of the homogeneous solution (3.24), in order to show that this configuration is unstable and wants to "decay" to a non-homogeneous solution of the form (3.6) (whose explicit form will be found later). Our analysis revisits the work done in [11], but our findings differ from theirs in an important way which is crucial for the remainder of our paper.
Our starting point is given by the equations of motion linearised around the configuration (3.24). Following [11], we will look for fluctuations of the modes transverse to the direction of the external field, i.e. along (A_1, A_2), since these should lead to the formation of a chiral spiral. These fluctuations take a plane-wave form in time and the longitudinal direction, with frequency ω and momentum k, times a profile in z. The equations of motion for the fluctuations (δA_1, δA_2) are coupled, but they diagonalise in the complex basis δA_(±) = δA_1 ± i δA_2, where the equations take the decoupled form (3.46). Here F_{1z}, F_{0z} are field strengths of the background homogeneous solution (3.19), and we will express all results in terms of the constants C_A and C_B (instead of μ_A and j) in order to enable a simpler comparison with the results of [11].
Given values of C A and C B , we numerically solve equation (3.46). As usual in perturbation theory, solutions with real ω represent fluctuations that are stable, while those for which ω has a positive imaginary part correspond to instabilities, since they are exponentially growing in time. Fluctuations for which ω = 0 are marginal and have to be analysed in the full nonlinear theory in order to see if they correspond to unstable directions in configuration space. We will argue now that, while [11] has correctly identified the perturbatively unstable solutions with complex ω, they have missed the marginally unstable modes, which are actually unstable in the full theory. In the following section we will then explicitly construct the new vacua corresponding to these marginal modes.
Before presenting solutions to equation (3.46), observe that this equation exhibits the symmetry t → −t, ω → −ω, which means that all solutions come in pairs (ω, −ω). Additionally, when solving this equation one looks for normalisable solutions, i.e. solutions that behave as δA_(±) ∼ 1/z near the boundaries.
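Numerically, locating such normalisable modes is a shooting problem: integrate from one boundary with the decaying behaviour imposed, and root-find on ω until the growing solution is absent at the other boundary. The sketch below illustrates the procedure on a hypothetical stand-in equation, u'' = (k² − ω² + V(z)) u with a Pöschl-Teller well, since the actual coefficients of (3.46) are model-specific. The stand-in has exactly one bound state, so real-ω modes exist only for k > 1, mimicking the "forbidden" momentum windows discussed below (note that the stand-in decays exponentially, whereas the holographic modes fall off as 1/z):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L = 15.0  # numerical stand-in for z -> +/- infinity

def mismatch(w, k):
    """Shoot from z=-L with the decaying solution and return the amount of
    growing solution present at z=+L; a zero signals a normalisable mode."""
    kappa = np.sqrt(k**2 - w**2)                      # asymptotic decay rate
    rhs = lambda z, y: [y[1], (k**2 - w**2 - 2.0/np.cosh(z)**2) * y[0]]
    y0 = [np.exp(-kappa*L), kappa*np.exp(-kappa*L)]   # u ~ e^{kappa z} at -L
    sol = solve_ivp(rhs, [-L, L], y0, rtol=1e-10, atol=1e-14)
    u, up = sol.y[:, -1]
    return up + kappa*u       # vanishes iff only u ~ e^{-kappa z} survives

def real_mode(k, n=400):
    """Scan 0 < w < k for a normalisable real-frequency mode; returns None
    if there is no sign change (candidate marginal/unstable momentum)."""
    ws = np.linspace(1e-4, k*(1 - 1e-4), n)
    fs = [mismatch(w, k) for w in ws]
    for i in range(n - 1):
        if fs[i]*fs[i+1] < 0:
            return brentq(mismatch, ws[i], ws[i+1], args=(k,))
    return None

for k in (0.8, 1.2, 1.5):
    print(f"k = {k}: omega = {real_mode(k)}")  # expect sqrt(k^2 - 1) for k > 1
```

In the real problem the same scan is performed over k at given C_A, C_B, and the momenta with no real-ω root are the candidates for complex-ω instabilities.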
Let us start by discussing the solutions with only C_A non-vanishing. At B = 0 these correspond to solutions with j = 0. Samples of these solutions are shown in figure 1. We see that as |C_A| is increased, the two branches of solutions come closer to the ω = 0 axis and then touch (the middle plot in figure 1). For all those values of |C_A|, for any momentum k there is always a real-ω solution. However, as |C_A| is increased even more, the two branches of solutions separate along the k-axis, so that there is a region of momenta for which there are no real-ω solutions (see the third plot of figure 1). For these "forbidden" values of the momenta, one can explicitly find solutions with complex ω, which clearly signal a proper instability of the solution. These modes have previously been found in [11].
Let us now turn on |C_B|, while keeping |C_A| fixed. Samples of these solutions are shown in figure 2. We see that as |C_B| is increased, the two branches of solutions shift in the vertical direction, while the distance between them remains non-vanishing (which is the reason why the other branch is not visible in the second and third plots). For some value of |C_B|, the upper branch crosses the ω = 0 axis and continues towards negative ω. We thus see that for large enough |C_B|, marginal ω = 0 modes are always present in the spectrum. In the next section, we will show that these modes, which were previously missed in the literature, are actually unstable once non-linearities are taken into account. These findings are summarised in figure 3. Depicted there is the behaviour of the frequencies ω of the fluctuation modes at the 'extrema' of the dispersion relation plots, as a function of the two parameters C_A and C_B. For a generic value of C_B and C_A = 0, one root is always positive. As C_A is increased from zero, the positive root (red dot) moves towards the left, until it becomes zero. This defines the lower, solid curve of marginal modes in figure 3. Above this curve there is always a marginally stable mode in the spectrum, and thus potentially an instability. As C_A is increased further, both roots eventually develop a positive imaginary part, and we enter the region of strictly unstable modes (the blue shaded region). This area was previously discussed in [11].
The inhomogeneous solutions to the equations of motion
Having established the location of the unstable and marginally stable modes in the fluctuation spectrum of the homogeneous solution, we would now like to explicitly construct the non-homogeneous solutions to which they are expected to decay, and study their properties. In our previous work [15] we explicitly constructed a non-homogeneous solution in the presence of a non-vanishing axial chemical potential and in the absence of a magnetic field. We will therefore first discuss solutions with non-zero μ_A and non-zero B, which are a natural generalisation of those considered before. We will then introduce a non-zero parameter j as well and discuss how the solutions behave as a function of this parameter.
The upshot of our previous analysis [15] was that for large enough axial chemical potential, larger than some critical value, a non-homogeneous solution is formed. The relation between the chemical potential and the axial particle density was almost linear, similar to the homogeneous case. However, a given particle density was achieved for a smaller value of the chemical potential than in the homogeneous configuration. The wave number characterising the period of the non-homogeneous solution turned out to be very weakly sensitive to the actual value of the particle charge density in the canonical ensemble (or to the value of the chemical potential in the case of the grand canonical ensemble). The first thing we would like to know is whether any of these characteristics change in the presence of an external magnetic field.
Our starting point is the nonlinear system of equations (3.11), (3.12) and (3.13). We first observe that the function f̂(z) appears only in the first equation (3.11), and this equation can be directly integrated once we determine the functions b̂(z) and ĥ(z) from the other two equations.
Hence, we first need to solve equations (3.12) and (3.13). The parameters B and k in these equations correspond to the external magnetic field and the momentum of the transverse spiral, and are fixed at this stage. There are then four undetermined constants corresponding to the non-normalisable and normalisable modes of each of the functions b̂ and ĥ. However, we solve equations (3.12) and (3.13) requiring that the transverse spiral describes a normalisable mode, i.e. we impose that the only external field is the magnetic field, and that there are no transverse external fields. In contrast, the function b̂(z) (or alternatively â(z), see (3.14)), which describes the longitudinal field, does have a non-normalisable component. For example, on the positive side of the U-shaped brane it generically behaves as b̂(z) ≈ b_0 + b_1/z, where the coefficients b_0 and b_1 are given by (3.49). However, just like for the homogeneous solution, this non-normalisable component of the solution corresponds to a gradient of the η′ field and not to an external field (we will provide more evidence of this in section 3.6). Its unusual appearance as a non-normalisable mode is a consequence of our choice of the A_z = 0 gauge. We should note that at this stage we do not impose that b̂ or ĥ are of definite parity. We numerically solve the equations of motion using the shooting method; to do this we only need to specify three undetermined constants h_0, C_A and C_B on one side of the U-shaped brane, as well as the parameters B and k in the equations. We solve the equations for various values of these constants and parameters, but keep only those solutions for which ĥ is normalisable.
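In code, the "keep only normalisable ĥ" filter amounts to root-finding on the shooting data. A minimal sketch under simplifying assumptions: the coupled equations (3.12)-(3.13) are hidden inside a user-supplied `integrate` routine (hypothetical, since the explicit equations are not reproduced here), which returns the coefficient of the non-normalisable mode of ĥ on the far side of the brane; h_0 is then tuned until that coefficient vanishes:

```python
from scipy.optimize import brentq

def h_mismatch(h0, CA, CB, B, k, integrate):
    """Coefficient of the non-normalisable mode of h at z -> -infinity,
    as returned by `integrate`, which is assumed to solve (3.12)-(3.13)
    starting from the boundary data (h0, CA, CB) at z -> +infinity."""
    return integrate(h0, CA, CB, B, k)

def normalisable_h0(CA, CB, B, k, integrate, lo=0.0, hi=5.0):
    """Tune h0 so that h is normalisable on the far side of the brane.
    Returns None if no sign change is bracketed (no solution found)."""
    if h_mismatch(lo, CA, CB, B, k, integrate) * \
       h_mismatch(hi, CA, CB, B, k, integrate) > 0:
        return None
    return brentq(h_mismatch, lo, hi, args=(CA, CB, B, k, integrate))
```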
A first indication of the behaviour of the solutions can be obtained by looking at the effect of increasing C_B at a non-zero value of B. Remember from section 3.5 that there exists a marginally stable mode in the spectrum, which follows the downward bending curve in figure 3. Along this curve there might be a decay to a new ground state. That this indeed happens is easily confirmed by looking for nonlinear solutions at non-zero C_A and C_B. A number of physical solutions are depicted in figure 4. They confirm that the marginal modes found in the previous section in fact correspond to true instabilities and new ground states. However, these figures say little about the actual physics, as neither C_A nor C_B are physical parameters. It is tempting to associate C_B with a baryon chemical potential, but as we have already mentioned, this is incorrect, as the parameter μ_B completely decouples and does not influence the value of the Hamiltonian. Instead, the correct interpretation is that the physical parameters in our problem are μ_A and j, which happen to be related in a non-trivial way to C_A and C_B. Families of solutions (parametrised by k) at fixed μ_A and j lie on curves in the space spanned by C_A and C_B, which only become straight lines at C_B = 0 when B = 0. This is once more clearly visible in figure 5, which depicts a family of solutions at constant μ_A, parametrised by k, at C_B = 0. These clearly do not have a constant value of j.
In order to find solutions with both μ_A and j fixed, we need to allow for a variation of both C_A and C_B as a function of k. Even with the magnetic field fixed to a particular value, this still means that the normalisable solutions lie on a curve in a four-dimensional parameter space spanned by {h_0, k, C_A, C_B}. This makes a brute-force scan computationally infeasible. Independent of the large dimensionality of this problem, we also found that the larger μ_A cases require substantial computational time, because the asymptotic value h(−∞) varies rather strongly as a function of k and the other parameters. In other words, the valley of solutions is rather steep, and the bottom of the valley, where h(z) is normalisable at z → −∞, is difficult to trace. In this respect, it is useful to note that a solution written in C++ using odeint [29] and GSL [30] outperformed our Mathematica implementations by two to three orders of magnitude (!) in these computations. The details of the procedure which we followed can be found in appendix A.3.
We first focus on solutions at the fixed and rather arbitrary value B = 1; the dependence on B will be discussed in the next section. We will mainly discuss j = 0 solutions: our numerical investigations of j ≠ 0 solutions show that all of these actually have higher free energy, and are thus not real ground states of the system (see below). In figure 6 we display a set of configurations at constant μ_A and vanishing j. Also depicted is the difference between the Hamiltonian of the homogeneous solution for this pair of μ_A, j values and the Hamiltonian of the non-homogeneous solution.
As in earlier work without magnetic field [15,27], there is a family of solutions parametrised by the wave momentum k, and the physical solution is the one for which the Hamiltonian is minimised. Perhaps somewhat surprisingly, for smaller values of the chemical potential (we have done computations up to about μ_A = −8) the momentum k_g.s. at which the non-homogeneous ground state attains its minimum energy is only very weakly dependent on μ_A. Our numerics for the range of μ_A up to −7.5 show that the ground state momentum for all these cases equals 0.83 to within less than one percent. Furthermore, this ground state momentum is also the same (to within numerical accuracy) as the ground state momentum for the B = 0 case analysed in [15]. This is despite the fact that the actual solutions are quite different. It would be interesting to understand this behaviour better. (Note that minimisation with respect to j drives the system to j = 0, both for the homogeneous and for the non-homogeneous state, and that this also drives the η′-gradient sitting in J^1_A to zero; under j → −j the Hamiltonian is invariant while J^1_A changes sign.) If one evaluates the Hamiltonian of the A'-formalism (3.42), one finds that all non-homogeneous configurations always have higher energy than the corresponding homogeneous one at the same values of μ_A and j. We take this as a strong sign that there is still something not understood about the A'-formalism, as the analysis of section 3.5 clearly indicates a perturbative instability. It is in principle possible that we are not looking at the correct ansatz for the ground state, but we consider this unlikely. A similar statement holds for the B-formalism.
The dependence of these solutions on j can also be computed, and is depicted in figure 7. From those plots one observes two things. Firstly, minimisation of the Hamiltonian with respect to j drives the system to j = 0. Secondly, a non-zero j leads to a non-vanishing J^1_A current, which can be interpreted as an η′-gradient. Together, these observations show that the preferred non-homogeneous state is one with a vanishing η′-gradient.
[Figure 10 caption: Solutions at constant μ = −5.0 (left) and μ = −6.0 (right) and j = 0, for various values of the magnetic field. This shows that a magnetic field suppresses the instability to a non-homogeneous ground state, and there is a critical magnetic field B_crit(μ_A) above which the non-homogeneous solution ceases to exist.]
[Figure 11 caption: Results from a linearised analysis close to the critical magnetic field. Left: critical chemical potential (i.e. the potential for which a non-homogeneous ground state first develops) versus critical magnetic field (i.e. the field for which the solution disappears). Right: as determined earlier for the non-linear case, the parameter C_A goes to zero (asymptotically) as B increases.]
Critical magnetic field
For larger magnitudes of the external magnetic field, the numerics become increasingly expensive, as the valleys of solutions to the equations of motion rapidly become steeper and are more and more closely approached by regions in parameter space in which no regular solutions exist (i.e. regions in which h(z → ∞) diverges). See figure 8 for an impression.
A first analysis which one can make is to simply scan for normalisable solutions at fixed C_A and C_B, for increasing values of B. This leads to plots like those in figure 9. One observes that a larger magnetic field requires a smaller |C_A| for the chiral spiral solution to exist. In [11] this effect was seen at the level of the perturbative instability analysis, and led the authors to conclude that a magnetic field enhances the instability. However, this conclusion is premature and actually incorrect. What one needs to do is to analyse the effect of the magnetic field on solutions at fixed values of μ_A and j, not at fixed values of the unphysical parameters C_A and C_B. If one does this more elaborate analysis, the conclusion is the opposite: the magnetic field tends to stabilise the homogeneous solution, and there exists a critical B_crit(μ_A) above which the non-homogeneous solution ceases to exist. This can be seen in figure 10, where solutions at fixed μ_A and j are depicted. By increasing B sufficiently slowly while keeping the other physical parameters fixed, we can determine this B_crit numerically, as sketched below.
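The "increase B slowly at fixed μ_A and j" procedure is, in effect, a bracketing search for B_crit. A schematic sketch, in which the expensive step (solving the equations of motion at the fixed physical parameters and testing for a normalisable non-homogeneous solution) is abstracted into a user-supplied predicate; the function name and bracket values are illustrative:

```python
def find_b_crit(exists_nonhomogeneous, b_lo=0.0, b_hi=8.0, tol=1e-3):
    """Bisect for the critical magnetic field at fixed mu_A and j.
    `exists_nonhomogeneous(B)` should return True if a normalisable
    non-homogeneous solution is found at field strength B."""
    assert exists_nonhomogeneous(b_lo) and not exists_nonhomogeneous(b_hi)
    while b_hi - b_lo > tol:
        b_mid = 0.5 * (b_lo + b_hi)
        if exists_nonhomogeneous(b_mid):
            b_lo = b_mid   # spiral still exists: B_crit lies above b_mid
        else:
            b_hi = b_mid   # spiral gone: B_crit lies below b_mid
    return 0.5 * (b_lo + b_hi)
```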
It is difficult to get a good picture of B_crit versus μ_A, as the numerics become expensive for the reasons mentioned earlier. However, we can make use of the fact that near the critical magnetic field the parameter h_0 is small. Assuming that this implies that the entire function h(z) is small, we can then use a linear approximation to the equations of motion, which is much easier to solve. The result of this linear analysis is depicted in figures 11 and 12. The latter shows that the ground state momentum k_g.s. is actually not as flat as the non-linear computations suggest.
For larger B, one should keep in mind that the results may be invalidated for a variety of reasons. Firstly, the linear approximation may break down, as smallness of h_0 does not necessarily imply smallness of the full function h(z). Secondly, we have seen that the valley near the non-linear solutions becomes very steep for large values of B, and hence the linear solution may deviate quite strongly from the non-linear one for large B. Thirdly, when B is large, DBI corrections to the equations of motion may become relevant, as higher powers of the field strength are no longer necessarily small. For these reasons, one should be careful when interpreting the large-B_crit regime of figure 12.
Conclusions and open questions
We have analysed in detail the instability of the Sakai-Sugimoto model in the presence of a chiral chemical potential μ_A, an η′-gradient j and a magnetic field B. We have shown that the presence of marginally stable modes (overlooked in [11]) is a signal for decay towards a non-homogeneous, chiral-spiral-like ground state. Minimising the Hamiltonian on a constant-μ_A, constant-j curve leads to a unique ground state momentum k_g.s. (see figure 6), which for small B is only weakly dependent on μ_A. Increasing the magnetic field to sufficiently large values suppresses the instability and drives the system back to the homogeneous phase (figure 12). This result may be related to the effective reduction of the Landau levels in a strong magnetic field (see [31]).
A linear analysis suggests that the ground state momentum may have a nontrivial dependence on the chemical potential, and might be compatible with a linear scaling at large µ A (as in the Deryagin-Grigoriev-Rubakov non-homogeneous large-N c QCD ground state [32]). However, substantial additional work is necessary to determine whether this is indeed happening in the full non-linear theory.
We have found that a recent proposal for the correction of the currents [13,16], while capable of producing the chiral magnetic effect both in the homogeneous and non-homogeneous ground states, is incomplete. It is a) not unique and b) leads to a Hamiltonian which does not prefer the non-homogeneous ground state. We emphasise that this is a problem with the currents and boundary terms in the Hamiltonian, as the perturbative instability analysis is not affected by these corrections, and neither are the non-linear solutions.
A Appendix: Technical details

A.1 The Hamiltonian after the anomaly corrections
Here we will give a detailed derivation of the Hamiltonian for the Sakai-Sugimoto system, taking special care with the surface terms that are present, as well as taking into account the corrections to the action coming from the anomaly.
After the anomaly in the system has been corrected by the inclusion of extra terms in the action, the full Lagrangian density can be written down, and from it one obtains the conjugate momenta Π̃ associated with the vector and axial gauge fields. Hence, the on-shell Hamiltonian takes the form H̃ = H̃_Bulk + H̃_Bdy, where we have used the gauge field equations for the time component of the gauge potential (the generalised Gauss law). For our inhomogeneous ansatz the conjugate momenta simplify, and the bulk term reduces to

H̃_Bulk = κ ∫ d³x dz [ −√−g g^{zz} g^{00} (∂_z f)² + √−g g^{zz} g^{xx} ( (∂_z h)² + (∂_z a)² ) ]    (A.7)

In terms of the variables (3.16) the bulk and boundary terms take the form given in (A.13).
A.2 The B-formalism ansatz
In formalism B the time component of the chiral gauge field is zero at the boundary. An ansatz that preserves the field strengths (and hence the equations of motion) can then be written down. The components of the vector and axial currents take the same form as those in the A-formalism, given in (3.25), with the exception of J̃^1_V, which reads (A.16). This in particular implies that there is a non-zero chiral magnetic effect in formalism B.
A.3 Scanning parameter space for solutions
We recall from the main text that we aim for the minimisation of the Hamiltonian as a function of the condensate momentum k, on curves which have fixed values of μ_A and j. Unfortunately, the latter two parameters are determined only indirectly, after a solution has been found; their dependence on the shooting parameters h_0, C_A and C_B is not known analytically. Minimising the Hamiltonian over curves at fixed values of C_A and C_B would in general not be equivalent (i.e. would be wrong). One could in principle make a fine-grained lattice scan for solutions in the parameter space spanned by {h_0, k, C_A, C_B}, and then consider only those points which have a certain fixed value of μ_A and j. This, however, is computationally extremely expensive, and wastes a lot of time on regions of parameter space which will never be used. We therefore follow a different approach.
The general problem which we need to solve for an efficient determination of the required curves is the following. We have a d-dimensional parameter space of configurations, and physical solutions are those on which d − 1 functions of the configuration vanish simultaneously. One of these functions is always the value of the function h(z) evaluated at −∞, as this imposes normalisability. Among the other functions we have, for instance, the value of μ_A for this configuration minus some fixed reference value, in case we want to scan for solutions at this fixed value of the chemical potential. This 'isocurve tracing' problem is most easily implemented by starting from one known point on which the functions are simultaneously zero, and then using a version of the Newton-Raphson method to find the location of a neighbouring simultaneous zero. If we have n lattice points in every direction of parameter space, this brings the computational cost down from order O(n^d) to the much more tractable O(n).
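To make the isocurve tracing concrete, here is a generic predictor-corrector sketch in Python; it is a reimplementation of the idea, not the authors' Mathematica/C++ code. F maps the d-dimensional parameter space to the d − 1 constraint values (e.g. the normalisability mismatch h(−∞), μ_A − μ_A^seed and j − j^seed), the Jacobian is estimated by finite differences, the tangent direction is the null vector of the Jacobian, and each predictor step is followed by Newton corrections:

```python
import numpy as np

def trace_isocurve(F, x0, step=0.05, n_steps=200, newton_iters=5):
    """Trace the curve {x : F(x) = 0} for F: R^d -> R^{d-1},
    starting from a known solution x0 (the 'seed point')."""
    def jac(x, eps=1e-6):
        f0 = F(x)
        J = np.empty((len(f0), len(x)))
        for i in range(len(x)):
            dx = np.zeros_like(x); dx[i] = eps
            J[:, i] = (F(x + dx) - f0) / eps
        return J

    pts, x, t_prev = [np.array(x0, float)], np.array(x0, float), None
    for _ in range(n_steps):
        # tangent = null vector of the (d-1) x d Jacobian, via SVD
        t = np.linalg.svd(jac(x))[2][-1]
        if t_prev is not None and np.dot(t, t_prev) < 0:
            t = -t                        # keep marching in one direction
        x = x + step * t                  # predictor
        for _ in range(newton_iters):     # corrector: least-norm Newton step
            x = x - np.linalg.lstsq(jac(x), F(x), rcond=None)[0]
        pts.append(x.copy()); t_prev = t
    return np.array(pts)
```

Each evaluation of F hides a full solve of the equations of motion, which is why the cost scales like O(n) traced points rather than O(n^d) lattice points, and why a fast compiled integrator pays off.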
We start by scanning for physical solutions at C_B = 0 and some fixed value of C_A, which is a two-dimensional problem in {h_0, k} space and yields curves like those in figure 5. On these curves neither μ_A nor j will in general be constant. However, one can choose one point on such a solution curve and use this as a seed point p_seed for the isocurve tracing described above. Denoting the values of μ_A and j there by μ_A^seed and j^seed respectively, we then trace the common zero of the three functions h(−∞), μ_A − μ_A^seed and j − j^seed in the four-dimensional parameter space {h_0, k, C_A, C_B} (only when B = 0 do these curves lie completely in the {h_0, k, C_A} subspace). (General-purpose Mathematica and C++ implementations of this isocurve tracing algorithm are available upon request from the authors.) A typical solution is depicted in figure 13. The only remaining problem with these curves is that in general j^seed ≠ 0 (the only such points on the curves in figure 5 are located at the endpoints of those curves, where the coefficient h_0 goes to zero).
In order to find curves at j = 0, we start again from p_seed, but first do an isocurve trace at fixed k in the {h_0, C_A, C_B} space until we find a point at which j = 0. An example is given in figure 14. This point is then used as our new seed point p̃_seed for the four-dimensional scan. Repeating the whole process using seed points obtained for different initial values of C_A then produces a set of curves at j = 0 for various constant values of μ_A.
On each of these curves we can now finally compute the Hamiltonian as a function of k, and find the value k min for which it is minimised. For a sample set of values of µ A the result is depicted in figure 6. | 2012-10-08T19:47:29.000Z | 2012-09-10T00:00:00.000 | {
"year": 2012,
"sha1": "91c665b259cc8f1d941044b800747ec9fba2bb6c",
"oa_license": null,
"oa_url": "http://dro.dur.ac.uk/12349/1/12349.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "91c665b259cc8f1d941044b800747ec9fba2bb6c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
2123752 | pes2o/s2orc | v3-fos-license | Suprasellar and third ventricular cavernous malformation: Lessons learned in differential diagnosis and surgical planning
Background: While craniopharyngiomas (CPs) are the most common cystic suprasellar lesions in adults, cavernous malformations (CMs) only exceptionally occur in this location and are seldom considered in the differential diagnosis of such lesions. However, unlike CPs, suprasellar CMs are not typically approached via an endoscopic endonasal approach. Case Description: We present a unique clinical case of suprasellar and third ventricular CM mimicking a CP, posing a major decision-making dilemma at the levels of both preoperative diagnosis and surgical planning. Conclusion: This case highlights the importance of carefully considering all the differential diagnoses of sellar pathology to select the most appropriate management strategy and surgical approach.
INTRODUCTION
The sellar region is anatomically complex and a common site of neoplastic, infectious, inflammatory, developmental, and vascular pathologies. [11] Among cystic lesions of the sella turcica, parasellar, and suprasellar space, craniopharyngioma (CP) and Rathke's cleft cyst (RCC) are, by far, the most common. However, the differential diagnosis also includes a variety of disease processes, for which treatment can vary widely. These include colloid cyst, arachnoid cyst, cystic pituitary adenoma, xanthogranuloma, epidermoid cyst, and dermoid cyst. [4,7,9,15] While craniotomy remains a first-line treatment option for many of these lesions, a significant proportion can be resected safely and effectively via an endoscopic endonasal approach. [12,14] In contrast to other cystic lesions, cavernous malformations (CMs) rarely occur in the sellar region and/or third ventricle, and thus are not typically considered in the differential diagnosis of cystic sellar lesions. [6,8,12] Moreover, CMs have not been previously resected via the endoscopic endonasal approach. We present a unique clinical case of suprasellar and third ventricular CM mimicking a CP and posing a major decision-making dilemma at the levels of both preoperative diagnosis and surgical planning. While we believe this case could have been better approached, our findings underscore the importance of carefully considering all differential diagnoses of sellar pathology as a critical step in devising the most appropriate management strategy and surgical approach.
CASE DESCRIPTION
A 52-year-old woman presented with several months of progressive anterograde amnesia noticed by her colleagues at work. The neurologic exam was otherwise unremarkable. Noncontrast head computed tomography (CT) revealed a hyperdense, suprasellar, and third ventricular lesion, consistent with either intralesional microcalcifications or hemorrhage [Figure 1a]. Magnetic resonance imaging (MRI) of the brain demonstrated a large, partially cystic, minimally enhancing suprasellar lesion extending into the third ventricle, with compression of the bilateral fornices at the foramina of Monro. A small intrasellar component of the lesion was also suspected [Figure 1b-d]. Based on its location and radiologic appearance, this lesion was thought most likely to represent a CP. Surgical resection was thus indicated for tissue diagnosis and to decompress the fornices. Given the likely presence of an intrasellar component of this lesion, a decision was made to first attempt endoscopic endonasal resection rather than craniotomy.
Following lumbar drain placement, a standard endoscopic endonasal transsphenoidal approach was undertaken using intraoperative neuronavigation (Stryker, Kalamazoo, MI, USA). After opening the sellar dura, the small lesion in the posterior portion of the pituitary gland was easily identified and excised. Intraoperative frozen section examination was suggestive of CP. However, this intrasellar portion did not appear to be in direct continuity with the suprasellar lesion. An attempt to access the suprasellar space was then made. Unfortunately, however, a surgical corridor could not be safely developed. Anteriorly, the optic chiasm and anterior communicating artery complex completely obstructed the line of sight. Posteriorly, the neurohypophysis and dorsum sellae impeded access to the suprasellar compartment. Thus, a decision was made to abort the endoscopic procedure and plan a transcranial approach. Pathologic examination of the permanent intrasellar specimen showed a cystic lesion with clusters of simple squamoid epithelium and a rare strip of nonciliated columnar epithelium and proteinaceous contents. This was most consistent with the diagnosis of RCC, although a colloid or pars intermedia cyst or even a CP could not be ruled out [ Figure 2a]. The patient's postoperative course was marked by diabetes insipidus requiring desmopressin therapy.
Five days later, the patient underwent bifrontal craniotomy and gross total resection of the suprasellar and third ventricular lesion [Figure 1e and f]. Intraoperatively, the lesion was noted to be mulberry-shaped and filled with blood products of various ages, which was consistent with a diagnosis of CM. The lesion was completely resected and the bilateral fornices were successfully preserved intact. Pathologic examination of the specimen revealed abnormal ectatic vascular channels in a hemorrhagic background, consistent with a vascular malformation [Figure 2b-d].
On the second postoperative day, the patient suffered sudden acute deterioration of her level of consciousness with right nonreactive mydriasis. Head CT revealed a new right hemispheric subdural fluid collection with mass effect and midline shift to the left [ Figure 3a and b]. The patient was thus taken to the operating room and underwent emergent evacuation of the subdural collection and placement of a left frontal external ventricular drain (EVD) [ Figure 3c]. Postoperatively, the patient exhibited significant improvement with gradual return to her normal level of consciousness. She was ultimately discharged to a rehabilitation facility after the EVD was successfully weaned and removed.
Three weeks later, the patient exhibited worsening mental status and increasing lethargy. Head CT demonstrated external hydrocephalus [ Figure 3d and e]. She was taken back to the operating room and underwent EVD placement, which was subsequently converted to a ventriculoperitoneal shunt a few days later [ Figure 3f].
Following cerebrospinal fluid (CSF) diversion, she improved significantly to her baseline neurological status, and was ultimately discharged back to the rehabilitation facility. At her last follow-up 4 months later, she was functionally independent without focal neurologic deficits, despite persistent anterograde amnesia and desmopressin-dependent diabetes insipidus.
DISCUSSION
This case report illustrates the importance of carefully studying the differential diagnosis of lesions arising from the suprasellar space, given the significant impact of the histopathologic diagnosis on the overall management strategy and surgical approach. In fact, suprasellar and third ventricular lesions can be approached through a wide variety of endoscopic transfacial and open transcranial approaches. The decision-making process is often complex and depends on multiple factors, ranging from the nature, size, and extent of the lesion to the local anatomy, surgeon experience, and patient preference. [1,6,12] We have identified two levels at which the present case could have been better approached, potentially sparing the patient transsphenoidal surgery. First, the differential diagnosis of suprasellar and third ventricular lesions is broad and all possibilities should be considered. The fact that this lesion was hyperdense on CT, suggesting possible microcalcifications, and partially cystic and minimally enhancing on MRI, made us overly confident of the diagnosis of CP. This was further compounded by the fact that CP is, by far, the most common suprasellar cystic lesion in adults. [5] Nonetheless, we should have considered the remote possibility of an alternative diagnosis, specifically that of CM. In fact, growing CMs may have a very similar hyperdense appearance on CT and similar signal characteristics on MRI, as a result of recurrent intralesional hemorrhages. [3,5] Furthermore, though rare, suprasellar and third ventricular CMs have been previously reported and well documented, and often present with short-term memory loss. [8,12] Second, the indication for surgery and the choice of approach largely depend on the suspected histopathologic diagnosis. There is general consensus that CMs involving the third ventricle should be aggressively surgically managed, given that these lesions tend to grow more rapidly and cause more mass effect than CMs arising in other locations. [8,10] However, had the lesion been thought to be a CM rather than a CP, an endoscopic endonasal approach would not have been undertaken. In fact, while the vast majority (75%) of CPs involve both the suprasellar and intrasellar spaces, intrasellar extension of a CM is unusual. [2,13] Therefore, the lack of an intrasellar extension that is readily accessible via the transsphenoidal approach and, hence, the lack of a natural sellar-suprasellar tumoral surgical corridor would have discouraged us from undertaking the endoscopic endonasal approach, had the possibility of a suprasellar CM been raised preoperatively. Finally, although it could be argued that a transventricular, rather than interhemispheric, approach might be associated with a lower risk of injury to the fornices, a transventricular transparenchymal approach would have been particularly difficult in this patient, given the very small size of her lateral ventricles and foramina of Monro.
CONCLUSIONS
Neurosurgeons should keep an open mind when considering the differential diagnosis of sellar pathology because the optimal management strategy and surgical approach may vary considerably depending on the suspected underlying histopathologic diagnosis. Specifically, a CM should be routinely considered, along with CP, in the differential diagnosis of suprasellar and/or third ventricular, cystic-appearing lesions. While the endoscopic endonasal approach is often a good choice for suprasellar CPs, it is less so for CMs which are best approached transcranially.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-03T02:43:42.308Z | 2017-10-13T00:00:00.000 | {
"year": 2017,
"sha1": "c07dbe1b81186292fe440b4cd25d061ec9763d63",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc5672642",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "c07dbe1b81186292fe440b4cd25d061ec9763d63",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
257153366 | pes2o/s2orc | v3-fos-license | Effectiveness and safety of coronavirus disease 2019 vaccines
Purpose of review To review and summarise recent evidence on the effectiveness of coronavirus disease 2019 (COVID-19) vaccines against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and COVID-19 hospitalisation and death in adults, as well as in specific population groups, namely pregnant women, and children and adolescents. We also sought to summarise evidence on vaccine safety in relation to cardiovascular and neurological complications. In order to do so, we drew primarily on evidence from two of our own data platforms and supplemented these with insights from related large population-based studies and systematic reviews. Recent findings All studies showed high vaccine effectiveness against confirmed SARS-CoV-2 infection and, in particular, against COVID-19 hospitalisation and death. However, vaccine effectiveness against symptomatic COVID-19 infection waned over time. These studies also found that booster vaccines would be needed to maintain high vaccine effectiveness against severe COVID-19 outcomes. Rare cardiovascular and neurological complications have been reported in association with COVID-19 vaccines. Summary The findings from this paper support current recommendations that vaccination remains the safest way for adults, pregnant women, children and adolescents to be protected against COVID-19. There is a need to continue to monitor the effectiveness and safety of COVID-19 vaccines as these continue to be deployed in the evolving pandemic.
INTRODUCTION
Coronavirus disease 2019 (COVID-19) vaccination programmes have been rolled out globally as the key strategy to control and minimise the impact of the COVID-19 pandemic. Three vaccines have mainly been used in the UK, namely BNT162b2 (Pfizer-BioNTech), ChAdOx1 nCoV-19 (Oxford-AstraZeneca) and mRNA-1273 (Moderna). There are many studies reporting the safety and effectiveness of the COVID-19 vaccines against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and severe COVID-19 outcomes [1]. In this review, we summarise recent evidence around COVID-19 vaccine effectiveness against confirmed SARS-CoV-2 infection and COVID-19 hospitalisation and death in adults, as well as vaccine effectiveness in some specific population groups, i.e., pregnant women, and children and adolescents. We also aim to summarise recent evidence on vaccine safety regarding cardiovascular and neurological complications.
In doing so, we draw primarily on evidence from two of our own data platforms (i.e., Early Pandemic Evaluation and Enhanced Surveillance of COVID-19 (EAVE II) and Data and Connectivity: COVID-19 Vaccines Pharmacovigilance (DaC-VaP)) and supplement these with insights from related large population-based studies and systematic reviews. EAVE II is a Scotland-wide COVID-19 surveillance platform that has been used to track and forecast the epidemiology of COVID-19, inform risk stratification assessment, and investigate vaccine effectiveness and safety [2]. It comprises national healthcare datasets on 5.4 million people (approximately 99% of the Scottish population) deterministically linked through the Community Health Index number, which is a unique identifier used for all healthcare contact across Scotland. DaC-VaP is a UK-wide collaboration looking at the safety and effectiveness of COVID-19 vaccines using linked electronic health record data in all four UK nations [3].
Coronavirus disease 2019 vaccine effectiveness in adults
In early 2021, we conducted a population-based national prospective cohort study using the EAVE II platform. This study comprised linked vaccination, primary care, real-time reverse transcription polymerase chain reaction (RT-PCR) testing, and hospital admission patient records for 5.4 million people in Scotland registered at 940 general practices. The study found that mass roll-out of the first doses of the BNT162b2 and ChAdOx1 nCoV-19 vaccines was associated with substantial reductions in the risk of COVID-19 hospital admission among adults in Scotland [4]. Between December 8, 2020 and February 22, 2021, the first dose of the BNT162b2 vaccine had a vaccine effectiveness (VE) of 91% [95% confidence interval (CI) 85-94] in reducing COVID-19 related hospital admission at 28-34 days post-vaccination among adults. VE for ChAdOx1 nCoV-19 at 28-34 days post-vaccination was 88% (95% CI 75-94). The combined VE against COVID-19 related hospital admission was 89% (95% CI 83-92) at the same time interval. When we restricted the analysis to those aged at least 80 years, the combined VE against COVID-19 hospital admission was 83% (95% CI 72-89) at 28-34 days post-vaccination. This first national analysis provided considerable reassurance, based on real-world evidence, that vaccines were highly effective in reducing the risk of serious COVID-19 outcomes and that they also provided very high levels of protection in the elderly.
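As a side note on the arithmetic, vaccine effectiveness figures of this kind are conventionally obtained from an adjusted hazard (or rate) ratio as VE = (1 − ratio) × 100%, with the confidence bounds mapped the same way (the upper ratio bound becomes the lower VE bound). A minimal sketch, using an illustrative ratio chosen to be consistent with the headline figure above rather than a value reported by the study:

```python
def ve_from_ratio(r, r_lo, r_hi):
    """Convert a hazard/rate ratio and its 95% CI into vaccine
    effectiveness (%). The CI endpoints swap because VE decreases in r."""
    return (1 - r) * 100, (1 - r_hi) * 100, (1 - r_lo) * 100

# illustrative ratio of 0.09 (95% CI 0.06-0.15); not taken from the study
ve, ve_lo, ve_hi = ve_from_ratio(0.09, 0.06, 0.15)
print(f"VE = {ve:.0f}% (95% CI {ve_lo:.0f}-{ve_hi:.0f})")  # VE = 91% (85-94)
```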
The emergence of variants of concern (VOCs) has raised important questions about VE. Using the EAVE II platform, we have been able to show that VE against serious COVID-19 outcomes has remained high in fully vaccinated individuals infected with the Delta and Omicron VOCs and their key sublineages [5-9]. Specifically, our first national VE study mentioned above concerned the Alpha VOC, whereas the vaccines had been trialled against the wild type [4]. The VE against death from the Delta VOC at least 14 days after the second vaccine dose was 90% (95% CI 83-94) for BNT162b2 and 91% (95% CI 86-94) for ChAdOx1 nCoV-19 [9]; VE against symptomatic SARS-CoV-2 infection due to AY.4.2 (Delta plus, a sub-lineage of the Delta VOC) was 73% (95% CI 62-81) 14 days after the second vaccine dose [7]. The protection was consistent across BNT162b2, ChAdOx1 nCoV-19 and mRNA-1273. Both BNT162b2 and ChAdOx1 nCoV-19 vaccines were effective in reducing the risk of SARS-CoV-2 infection and COVID-19 hospitalisation in people with the Delta VOC, but these effects on infection appeared to be diminished when compared to those with the Alpha VOC [8].
Key points:
- Rare cardiovascular and neurological complications have been reported in association with COVID-19 vaccines; their rarity provides reassurance about the safety of these vaccines.
- The findings from this paper support current recommendations that vaccination remains the safest way for adults, pregnant women, children and adolescents to protect themselves against COVID-19.
- There is a need for ongoing evaluation of the real-world effectiveness and safety of COVID-19 vaccines as we enter the next phase of the pandemic.

VE of COVID-19 vaccination against confirmed SARS-CoV-2 infection and COVID-19 hospitalisation or death wanes over time [10]. Investigating vaccine waning was complicated by the emergence of new variants (specifically the Delta variant of concern in the UK). To investigate potential waning, our Scottish EAVE II team partnered with colleagues in Brazil (where the Gamma variant of concern had emerged) in 2021. In Scotland, the VE of two-dose ChAdOx1 nCoV-19 against severe COVID-19 outcomes showed a comparatively modest reduction over time. There was in contrast a more substantial 20-30% reduction in protection against infection and milder disease in the six months following full vaccination [12]. There is a growing body of evidence that booster doses enhance protection. The EAVE II study reported that booster vaccines were effective in preventing symptomatic infection, with a 57% (54-60) reduction in the risk of symptomatic Omicron VOC infection in comparison to individuals at least 25 weeks after their second vaccine dose [6]. A team from Brazil found that a BNT162b2 booster vaccine given 6 months after completion of the primary vaccination schedule increased VE against confirmed SARS-CoV-2 infection from 34.7% (95% CI 33.1-36.2) at least 180 days after the second dose of CoronaVac (Sinovac Biotech) to 92.7% (95% CI 91.0-94.0), and against severe outcomes (hospitalisation or death) from 72.5% (95% CI 70.9-74.0) to 97.3% (95% CI 96.1-98.1), 14-30 days after the booster dose [13].
Our recent UK-wide analysis of the risk of serious COVID-19 outcomes following the first booster, using the DaC-VaP platform, has identified specific subpopulations who remain at high risk of serious COVID-19 outcomes and who have then been prioritised for second-dose boosters and COVID-19 therapeutics: older adults aged at least 80 years [adjusted rate ratio (aRR) 3.60 (95% CI 3.45-3.75)], those with five or more comorbidities [aRR 9.51 (95% CI 9.07-9.97)], and being male [aRR 1.23].
Coronavirus disease 2019 vaccine effectiveness in pregnant women
The EAVE II platform has been used to create a subcohort of pregnant women, the COVID-19 in Pregnancy in Scotland (COPS) cohort. This is one of the national cohorts of pregnant women that include not only women who give birth but also women who are pregnant and subsequently do not give birth, because of either a termination or a miscarriage. Our analysis of Scotland-wide data found that vaccine uptake was much lower in pregnant women than in the general female population aged 18-44 years: the analysis was undertaken in October 2021 and found that 32.3% of women who gave birth had received two doses of vaccine, compared to 77.4% of all women in the same age group irrespective of pregnancy [15]. A subsequent analysis of this pregnancy cohort found that mothers who had received two or more doses of vaccine are likely protected against neonatal SARS-CoV-2 infection (all 12 cases of neonatal SARS-CoV-2 infection occurred in women who had not received at least two doses of vaccine when they had SARS-CoV-2 infection during pregnancy) [16].
Coronavirus disease 2019 vaccine effectiveness in children and adolescents
In a recent analysis involving a partnership between the EAVE II team and colleagues in Brazil, we found that VE against symptomatic SARS-CoV-2 infection was highest at 14-27 days after the second dose among adolescents aged 12-17 years in both Brazil and Scotland [19]. From 27 days onwards, VE started to decline in both countries. However, protection against severe COVID-19 (hospitalisation or death) remained high from 28 days after the second dose, even at 98 days or more after the second dose [19].
A systematic review of 22 published studies and two ongoing trials evaluated VE against COVID-19 infection in healthy and immunosuppressed children and adolescents aged 2-21 years [20]. It found that the immune response to, and the efficacy of, COVID-19 vaccines in protecting against moderate to severe COVID-19 infection were 96-100% in healthy children and adolescents. Specifically, VE against COVID-19 related hospitalisation and its consequences after the first and second doses was 91% (95% CI 89-92) and 92% (95% CI 76-100), respectively. VE was lower in those with underlying diseases and suppressed immune systems [20].
VACCINE SAFETY
There is considerable health policy, public health and public interest in the safety of COVID-19 vaccines, not least because of their very rapid developmental timelines. This interest in safety signals has centred on vascular and neurological adverse events.
Vaccination and risk of vascular complications
An analysis using the EAVE II platform found no positive associations between first-dose BNT162b2 and thrombocytopenic, thromboembolic and haemorrhagic events, but there was a small increased risk of idiopathic thrombocytopenic purpura (ITP) in those receiving a first dose of ChAdOx1 nCoV-19 [21]. Second-dose ChAdOx1 nCoV-19 vaccination was also observed to be associated with borderline increased risks of ITP and cerebral venous sinus thrombosis (CVST) events [22]. This small elevated risk of CVST events following ChAdOx1 nCoV-19 vaccination was also observed in a pooled self-controlled case series study of three UK nations undertaken using the DaC-VaP platform [23].
Vaccination and risk of neurological complications
In an analysis across England with validation in Scotland, there was a small increased risk of neurological complications, in particular Guillain-Barré syndrome, Bell's palsy and haemorrhagic stroke, in those who received COVID-19 vaccines, but the risk of these complications was greater following a positive SARS-CoV-2 test [26,27]. It was estimated that there were 38 excess cases of Guillain-Barré syndrome per 10 million people after receiving ChAdOx1 nCoV-19, compared with 145 excess cases after a positive SARS-CoV-2 test [26]. Overall, these adverse events were rare, thus providing reassurance about the safety of COVID-19 vaccines.
CONCLUSION
There is now a substantial body of evidence demonstrating that the three main COVID-19 vaccines deployed in the UK offer considerable protection against symptomatic COVID-19 infection and, in particular, against severe forms of the disease leading to COVID-19 related hospital admission and mortality. This VE has remained high as new variants have emerged, particularly in those who are fully vaccinated. Studies have found that VE is also high in important sub-populations, including pregnant women, children and young people, and the elderly. Vaccine protection does, however, wane, underscoring the need for vaccine boosters. The safety profile of COVID-19 vaccines has now also been extensively studied; these investigations have found small increases in risks of vascular and neurological events associated with some vaccines, but overall lower risks of these outcomes than following SARS-CoV-2 infection. There is a need to continue to investigate vaccine effectiveness and safety as we move into a new phase of the pandemic.
Advisory Group (known as NERVTAG) Risk Stratification Subgroup, the Department of Health and Social Care's COVID-19 Therapeutics Modelling Group, and was a member of AstraZeneca's COVID-19 Strategic Thrombocytopenia Taskforce. All A.S.'s roles are unfunded. C.R. is a member of the Scientific Pandemic Influenza Group on Modelling, Medicines and Healthcare products Regulatory Agency Vaccine Benefit and Risk Working Group. All other authors declare no competing interests. | 2023-02-25T06:16:24.308Z | 2023-02-24T00:00:00.000 | {
"year": 2023,
"sha1": "00f6f6107954f8e3b8e4ff975a19824df66b57dc",
"oa_license": "CCBY",
"oa_url": "https://journals.lww.com/co-pulmonarymedicine/Fulltext/9900/Effectiveness_and_safety_of_coronavirus_disease.53.aspx",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "ea8597c039153957071e19c433b49a588412d213",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
59938757 | pes2o/s2orc | v3-fos-license | Measurement of CKM Matrix Element $|V_{cb}|$ from $\bar{B} \to D^{*+} \ell^{-} \bar{\nu}_\ell$
We present a new measurement of the CKM matrix element $|V_{cb}|$ from $ B^{0} \rightarrow D^{*}\ell \nu$ decays, reconstructed with full Belle data set ($711 \, \rm fb^{-1}$). Two form factor parameterisations, based on work by the CLN and BGL groups, are used to extract the product $\mathcal{F}(1)\eta_{\rm EW}|V_{cb}|$ and the decay form factors, where $\mathcal{F}(1)$ is the factor normalisation and $\eta_{\rm EW}$ is a small electroweak correction. In the CLN parameterisation we find $\mathcal{F}(1)\eta_{\rm EW}|V_{cb}| = (35.06 \pm 0.15 \pm 0.54) \times 10^{-3}$, $\rho^{2}=1.106 \pm 0.031 \pm 0.007$, $R_{1}(1)=1.229 \pm 0.028 \pm 0.009$, $R_{2}(1)=0.852 \pm 0.021 \pm 0.006$. In the BGL parameterisation we find $\mathcal{F}(1)\eta_{\rm EW}|V_{cb}|= 38.73 \pm 0.25 \pm 0.60$, which is higher but consistent with the determination from inclusive semileptonic $B$ decays when correcting for $\mathcal{F}(1)\eta_{\rm EW}$. This is the most precise measurement of $\mathcal{F}(1)\eta_{\rm EW}|V_{cb}|$ and form factors that has ever been carried out, and the first direct study of the BGL form factor parameterisation in an experimental measurement.
I. INTRODUCTION
The decay B̄^0 → D^{*+} ℓ^− ν̄_ℓ is used to determine the Cabibbo-Kobayashi-Maskawa (CKM) matrix element |V_cb|, the magnitude of the coupling between b and c quarks in weak interactions, which is a fundamental parameter of the Standard Model (SM). The decay is studied in the context of Heavy Quark Effective Theory (HQET), in which the hadronic matrix elements are parameterised by the form factors that describe this decay. The decay amplitudes are described by three helicity amplitudes, corresponding to the three polarisation states of the D^* meson: two transverse polarisation terms, H_±, and one longitudinal polarisation term, H_0.
There is a long-standing tension between the measurement of |V_cb| using the inclusive approach, based on the decay mode B → X_c ℓ ν, and the exclusive approach with B̄ → D^* ℓ ν̄. Currently, the world averages of |V_cb| for the inclusive and exclusive decay modes are [1]:

|V_cb| = (42.2 ± 0.8) × 10^{-3} (Inclusive),    (1)
|V_cb| = (39.1 ± 0.4) × 10^{-3} (CLN-Exclusive),    (2)

where the errors are the experimental and theoretical uncertainties combined. It is thought that the previous theoretical approaches using the CLN form factor parameterisation [2] were model dependent and introduced bias, and that therefore model-independent form factor approaches based on BGL [3] should be used. In this paper we perform fits with both approaches, for the first time in an experimental paper. The decay is reconstructed in the channel B̄^0 → D^{*+} ℓ^− ν̄_ℓ, where D^{*+} → D^0 π^+ and D^0 → K^− π^+. This channel offers the best purity for the measurement, which is critical as the measurement is ultimately limited by systematic uncertainties. This is experimentally the most precise determination of |V_cb| performed with exclusive semileptonic B decays. This result supersedes the previous untagged B̄ → D^* ℓ ν̄ result from Belle [4]. A major improvement to the efficiency of the track reconstruction software was implemented in 2011, leading to substantially higher slow-pion tracking efficiencies and hence much larger signal yields than in the previous result.
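For reference, the CLN parameterisation constrains the w-dependence of the form factors to a small number of parameters. A commonly quoted form of the expansion is reproduced below; the conventions follow the standard literature and may differ in detail from the equations of this paper:

```latex
h_{A_1}(w) = h_{A_1}(1)\left[ 1 - 8\rho^2 z + (53\rho^2 - 15)z^2 - (231\rho^2 - 91)z^3 \right],
\qquad z = \frac{\sqrt{w+1}-\sqrt{2}}{\sqrt{w+1}+\sqrt{2}},
\\
R_1(w) = R_1(1) - 0.12\,(w-1) + 0.05\,(w-1)^2,
\qquad
R_2(w) = R_2(1) + 0.11\,(w-1) - 0.06\,(w-1)^2 .
```

In the BGL approach each form factor is instead expanded as a truncated power series in z with coefficients constrained only by unitarity, which is what makes it model independent.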
II. EXPERIMENTAL APPARATUS AND DATA SAMPLES
We use the full $\Upsilon(4S)$ data sample containing $772 \times 10^6$ $B\bar{B}$ pairs recorded with the Belle detector [5] at the asymmetric-beam-energy $e^+e^-$ collider KEKB [6]. An additional 88 fb$^{-1}$ of data is collected 60 MeV below the $\Upsilon(4S)$ for the estimation of $q\bar{q}$ ($q = u, d, s, c$) continuum background.
The Belle detector is a large-solid-angle magnetic spectrometer that consists of a silicon vertex detector (SVD), a 50-layer central drift chamber (CDC), an array of aerogel threshold Cherenkov counters (ACC), a barrel-like arrangement of time-of-flight scintillation counters (TOF), and an electromagnetic calorimeter (ECL) comprised of CsI(Tl) crystals, located inside a superconducting solenoid coil that provides a 1.5 T magnetic field. An iron flux-return located outside of the coil is instrumented to detect $K^0_L$ mesons and to identify muons (KLM). The detector is described in detail elsewhere [5]. Two inner detector configurations were used. A 2.0 cm radius beampipe and a 3-layer silicon vertex detector were used for the first sample of $152 \times 10^6$ $B\bar{B}$ pairs (referred to as SVD1), while a 1.5 cm radius beampipe, a 4-layer silicon detector and a small-cell inner drift chamber were used to record the remaining $620 \times 10^6$ $B\bar{B}$ pairs [7] (referred to as SVD2). We refer to these subsamples later in the paper.
A. Monte Carlo Simulation
Monte Carlo simulated events are used to determine the analysis selection criteria, study the background and estimate the signal reconstruction efficiency. Events with a $B\bar{B}$ pair are generated using EvtGen [8], and the $B$ meson decays are reproduced based on branching fractions reported in Ref. [9]. The hadronisation process of $B$ meson decays that do not have experimentally measured branching fractions is inclusively reproduced by PYTHIA [10]. For the continuum $e^+e^- \to q\bar{q}$ events, the initial quark pair is hadronised by PYTHIA, and hadron decays are modelled by EvtGen. The final-state radiation from charged particles is added using PHOTOS [11]. Detector responses are simulated with GEANT3 [12].
B. Event reconstruction and selection criteria

Charged particle tracks are required to originate from the interaction point, and to have good track fit quality. The criteria for the track impact parameters in the $r-\phi$ and $z$ directions are $dr < 2$ cm and $|dz| < 4$ cm, respectively. In addition we require that each track has at least one associated hit in any layer of the SVD detector. For pion and kaon candidates, we use particle identification likelihoods determined using the Cherenkov light yield in the ACC, the time-of-flight information from the TOF, and $dE/dx$ from the CDC.
Neutral $D^0$ meson candidates are reconstructed only in the clean $D^0 \to K^-\pi^+$ decay channel. The daughter tracks are fit to a common vertex using a Kalman fit algorithm, with a $\chi^2$-probability requirement of greater than $10^{-3}$ to reject background. The reconstructed $D^0$ mass is required to be within $\pm 13.75$ MeV/$c^2$ of the nominal $D^0$ mass of 1.865 GeV/$c^2$, corresponding to a width of 2.5$\sigma$, determined from data.
The $D^0$ candidates are combined with an additional pion that has a charge opposite that of the kaon, to form $D^{*+}$ candidates. Pions produced in this transition are close to the kinematic threshold, with a mean momentum of approximately 100 MeV/$c$, and are hence denoted slow pions, $\pi^+_s$. There are no SVD hit requirements for slow pions. Another vertex fit is performed between the $D^0$ and the $\pi^+_s$, and a $\chi^2$-probability requirement of greater than $10^{-3}$ is again imposed. The invariant mass difference between the $D^*$ and the $D^0$ candidates, $\Delta m = m_{D^*} - m_{D^0}$, is first required to be less than 165 MeV/$c^2$ for the background fit, and is further tightened for the signal yield determination.
Although the contribution from e + e − → qq continuum is relatively small in this analysis, we further suppress prompt charm by imposing an upper threshold on the D * momentum of 2.45 GeV/c in the CM frame (Fig. 1).
Candidate $B$ mesons are reconstructed by combining $D^*$ candidates with an oppositely charged electron or muon. Electron candidates are identified using the ratio of the energy detected in the ECL to the momentum of the track, the ECL shower shape ($E9/E25$), the distance between the track at the ECL surface and the ECL cluster centre, the energy loss in the CDC ($dE/dx$) and the response of the ACC. For electron candidates we search for nearby bremsstrahlung photons in a cone of 3 degrees around the electron track, and sum their momenta with that of the electron. Muons are identified by their penetration range and transverse scattering in the KLM detector. In the momentum region relevant to this analysis, charged leptons are identified with an efficiency of about 90%, while the probabilities to misidentify a pion as an electron or muon are 0.25% and 1.5%, respectively. We impose lower thresholds on the momentum of the leptons, such that they reach the respective particle identification detectors for good hadron fake rejection: in the lab frame, 0.3 GeV/$c$ for electrons and 0.6 GeV/$c$ for muons. We furthermore require an upper threshold of 2.4 GeV/$c$ in the CM frame to reject continuum events.
The tree-level transition of the $B^0 \to D^{*-}\ell^+\nu_\ell$ decay is shown in Fig. 2. Three angular variables and the hadronic recoil $w$ are used to describe this decay. The latter is defined as
$w = (m_B^2 + m_{D^*}^2 - q^2) / (2 m_B m_{D^*})$,
where $q^2$ is the momentum transfer between the $B$ and the $D^*$ meson, and $m_B$, $m_{D^*}$ are the masses of the $B$ and $D^*$ mesons, respectively. The range of $w$ is restricted by the value of $q^2$, such that the minimum value $q^2 = 0$ corresponds to the maximum value of $w$ (a numerical cross-check follows the list below). The three angular variables are depicted in Fig. 3 and are defined as follows: • $\theta_\ell$: the angle between the $D^*$ and the lepton, defined in the rest frame of the $W$ boson.
• $\theta_V$: the angle between the $D^0$ and the $D^*$, defined in the rest frame of the $D^*$ meson.
• $\chi$: the angle between the two planes formed by the decays of the $W$ and the $D^*$ meson, defined in the rest frame of the $B^0$ meson.
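As a numerical cross-check of these kinematic definitions, the following short Python sketch evaluates $w$ at the two $q^2$ endpoints and reproduces the $w$ range of 1 to roughly 1.504 used later in the fits (the $B^0$ and $D^*$ masses are standard PDG values, assumed here rather than quoted in the text):

    import math

    m_B, m_Dst = 5.27966, 2.01026  # B0 and D*(2010)+ masses in GeV/c^2 (PDG values, assumed)

    def w_of_q2(q2):
        # Hadronic recoil: w = (m_B^2 + m_D*^2 - q^2) / (2 m_B m_D*)
        return (m_B**2 + m_Dst**2 - q2) / (2.0 * m_B * m_Dst)

    w_max = w_of_q2(0.0)                # q^2 = 0 gives the maximum recoil
    w_min = w_of_q2((m_B - m_Dst)**2)   # q^2_max = (m_B - m_D*)^2 gives zero recoil
    print(f"w range: {w_min:.3f} .. {w_max:.3f}")  # ~1.000 .. 1.504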
Fig. 3: Definition of the angles $\theta_\ell$, $\theta_V$ and $\chi$ for the decay $B^0 \to D^{*-}\ell^+\nu_\ell$.
IV. SEMILEPTONIC DECAYS
In the massless-lepton limit, the differential decay rate of $B \to D^*\ell\nu$ decays is given by [2]
$\frac{d\Gamma}{dw\, d\cos\theta_\ell\, d\cos\theta_V\, d\chi} \propto \sqrt{w^2-1}\,(1-2wr+r^2)\, G_F^2\, \eta_{\rm EW}^2\, |V_{cb}|^2 \big[(1-\cos\theta_\ell)^2 \sin^2\theta_V H_+^2(w) + (1+\cos\theta_\ell)^2 \sin^2\theta_V H_-^2(w) + 4\sin^2\theta_\ell \cos^2\theta_V H_0^2(w) - 2\sin^2\theta_\ell \sin^2\theta_V \cos 2\chi\, H_+(w)H_-(w) - 4\sin\theta_\ell(1-\cos\theta_\ell)\sin\theta_V \cos\theta_V \cos\chi\, H_+(w)H_0(w) + 4\sin\theta_\ell(1+\cos\theta_\ell)\sin\theta_V \cos\theta_V \cos\chi\, H_-(w)H_0(w)\big]$,
where $r = m_{D^*}/m_B$, $G_F = 1.166 \times 10^{-5}$ GeV$^{-2}$ is the Fermi constant, and $\eta_{\rm EW}$ is a small electroweak correction (equal to 1.006 in Ref. [13]).
A. The CLN Parameterisation
The helicity amplitudes $H_{\pm,0}$ in Eq. 4 are given in terms of three form factors. In the Caprini-Lellouch-Neubert (CLN) parameterisation [2] one writes these expressions in terms of the form factor $h_{A_1}(w)$ and the form factor ratios $R_{1,2}(w)$. They are defined as
$h_{A_1}(w) = h_{A_1}(1)\left[1 - 8\rho^2 z + (53\rho^2 - 15)z^2 - (231\rho^2 - 91)z^3\right]$,
$R_1(w) = R_1(1) - 0.12(w-1) + 0.05(w-1)^2$,
$R_2(w) = R_2(1) + 0.11(w-1) - 0.06(w-1)^2$,
with $z = (\sqrt{w+1} - \sqrt{2})/(\sqrt{w+1} + \sqrt{2})$, and there are four independent parameters in total: $h_{A_1}(1)$ (absorbed into $\mathcal{F}(1)\eta_{\rm EW}|V_{cb}|$), $\rho^2$, $R_1(1)$ and $R_2(1)$. After integrating over the angles, the $w$ distribution is proportional to
$\sqrt{w^2-1}\,(w+1)^2 \left[1 + \frac{4w}{w+1}\frac{1-2wr+r^2}{(1-r)^2}\right] \mathcal{F}(w)^2\, |V_{cb}|^2$,
with $r = m_{D^*}/m_B$.
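The CLN expansion above can be evaluated directly. The sketch below uses the central values of this paper's CLN fit ($\rho^2 = 1.106$, $R_1(1) = 1.229$, $R_2(1) = 0.852$; uncertainties omitted) and is meant only as an illustration of the parameterisation, not as analysis code:

    import math

    rho2, R1_1, R2_1 = 1.106, 1.229, 0.852  # central values of this paper's CLN fit

    def z_conf(w):
        # Conformal variable z(w) of the CLN expansion
        return (math.sqrt(w + 1) - math.sqrt(2)) / (math.sqrt(w + 1) + math.sqrt(2))

    def hA1(w, hA1_1=1.0):
        # CLN expansion of h_A1(w); hA1(1) is absorbed into F(1)eta_EW|Vcb| in the fit
        z = z_conf(w)
        return hA1_1 * (1 - 8*rho2*z + (53*rho2 - 15)*z**2 - (231*rho2 - 91)*z**3)

    def R1(w):
        return R1_1 - 0.12*(w - 1) + 0.05*(w - 1)**2

    def R2(w):
        return R2_1 + 0.11*(w - 1) - 0.06*(w - 1)**2

    for w in (1.0, 1.25, 1.5):
        print(f"w={w:.2f}: hA1={hA1(w):.3f}, R1={R1(w):.3f}, R2={R2(w):.3f}")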
B. The BGL Parameterisation
A more general parameterisation comes from Boyd, Grinstein and Lebed (BGL) [3], recently used in Refs. [14,15]. In their approach, the helicity amplitudes $H_i$ are given in terms of three form factors $g$, $f$ and $F_1$:
$H_\pm(w) = f(w) \mp m_B m_{D^*}\sqrt{w^2-1}\, g(w)$, $\quad H_0(w) = F_1(w)/\sqrt{q^2}$.
The relations between the form factors in the BGL and CLN notations are
$h_{A_1}(w) = \frac{f(w)}{\sqrt{m_B m_{D^*}}\,(1+w)}$, $\quad R_1(w) = (w+1)\,\frac{m_B m_{D^*}\, g(w)}{f(w)}$, and $\quad R_2(w) = \frac{w-r}{w-1} - \frac{F_1(w)}{m_B(w-1)\, f(w)}$.
The three BGL form factors can be written as a series in $z$,
$g(z) = \frac{1}{P_{1^-}(z)\,\phi_g(z)} \sum_{n=0}^{N} a_n^g z^n$, $\quad f(z) = \frac{1}{P_{1^+}(z)\,\phi_f(z)} \sum_{n=0}^{N} a_n^f z^n$, $\quad F_1(z) = \frac{1}{P_{1^+}(z)\,\phi_{F_1}(z)} \sum_{n=0}^{N} a_n^{F_1} z^n$.
In these equations the Blaschke factors, $P_{1^\pm}$, are given by
$P_{1^\pm}(z) = \prod_P \frac{z - z_P}{1 - z z_P}$,
where $z_P$ is defined as
$z_P = \frac{\sqrt{t_+ - m_P^2} - \sqrt{t_+ - t_-}}{\sqrt{t_+ - m_P^2} + \sqrt{t_+ - t_-}}$,
and $t_\pm = (m_B \pm m_{D^*})^2$. The product is extended to include all the $B_c$ resonances below the $B-D^*$ threshold of 7.29 GeV with the appropriate quantum numbers ($1^+$ for $f$ and $F_1$, and $1^-$ for $g$). We use the $B_c$ resonances listed in Table I. The $B_c$ resonances also enter the $1^-$ unitarity bounds as single-particle contributions. The outer functions $\phi_i$ for $i = g, f, F_1$ are given in Refs. [3,14]; they involve the constants $\chi^T_{1^+}(0)$ and $\chi^T_{1^-}(0)$ given in Table II, and $n_I = 2.6$, which represents the number of spectator quarks (three), decreased by a large and conservative SU(3)-breaking factor. At zero recoil ($w = 1$ or $z = 0$) there is a relation between two of the form factors,
$F_1(0) = (m_B - m_{D^*})\, f(0)$.
The coefficients of the expansions in Eq. 10 are subject to unitarity bounds based on analyticity and the operator product expansion applied to correlators of two hadronic currents:
$\sum_{n=0}^{N} (a_n^g)^2 \leq 1$, $\quad \sum_{n=0}^{N} \left[(a_n^f)^2 + (a_n^{F_1})^2\right] \leq 1$.
They ensure rapid convergence of the $z$ expansion over the whole physical region, $0 < z < 0.056$. The series must be truncated at some power $N$.
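A minimal numerical sketch of the BGL machinery follows. The meson masses and the $B_c$ $1^-$ pole masses are assumed values commonly used in BGL fits (Table I is not reproduced here); the helper computes the conformal variable $z$ and a generic Blaschke factor, and reproduces the quoted upper edge of the physical region, $z \approx 0.056$:

    import math

    m_B, m_Dst = 5.27966, 2.01026              # GeV/c^2 (PDG values, assumed)
    t_plus, t_minus = (m_B + m_Dst)**2, (m_B - m_Dst)**2

    def z_of_w(w):
        # Conformal variable of the BGL expansion
        return (math.sqrt(w + 1) - math.sqrt(2)) / (math.sqrt(w + 1) + math.sqrt(2))

    def z_pole(m_P):
        # Blaschke-factor argument z_P for a Bc resonance of mass m_P (GeV)
        a = math.sqrt(t_plus - m_P**2)
        b = math.sqrt(t_plus - t_minus)
        return (a - b) / (a + b)

    def blaschke(z, pole_masses):
        # Product of Blaschke factors removing sub-threshold Bc poles
        P = 1.0
        for m_P in pole_masses:
            zP = z_pole(m_P)
            P *= (z - zP) / (1 - z * zP)
        return P

    poles_1minus = [6.329, 6.920, 7.020]  # assumed Bc 1- masses (GeV); Table I not reproduced
    print(f"z spans 0 .. {z_of_w(1.504):.3f}")              # ~0.056, as quoted in the text
    print(f"P_1-(z=0.02) = {blaschke(0.02, poles_1minus):.4f}")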
V. BACKGROUND ESTIMATION
The most powerful discriminator against background is the cosine of the angle between the $B$ and the $D^*\ell$ momentum vectors in the CM frame, under the assumption that the $B$ decays to $D^*\ell\nu$. In the CM frame, the $B$ direction lies on a cone around the $D^*\ell$ axis with an opening angle $2\theta_{B,D^*\ell}$, defined by
$\cos\theta_{B,D^*\ell} = \frac{2 E^*_B E^*_{D^*\ell} - m_B^2 - m_{D^*\ell}^2}{2\,|\vec{p}^{\,*}_B|\,|\vec{p}^{\,*}_{D^*\ell}|}$,
where $E^*_B$ is half of the CM energy and $|\vec{p}^{\,*}_B| = \sqrt{E_B^{*2} - m_B^2}$. The quantities $E^*_{D^*\ell}$, $\vec{p}^{\,*}_{D^*\ell}$ and $m_{D^*\ell}$ are determined from the reconstructed $D^*\ell$ system.
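The discriminator can be computed as below; the CM energy and the example kinematics of the $D^*\ell$ system are illustrative assumptions, not values from this analysis:

    import math

    sqrt_s = 10.579  # Upsilon(4S) CM energy in GeV (assumed)
    m_B = 5.27966    # GeV/c^2

    def cos_theta_B_Y(E_Y, p_Y, m_Y):
        # Cosine of the angle between the B and the D*-lepton system Y in the
        # CM frame, assuming B -> Y + nu with a massless neutrino.
        E_B = sqrt_s / 2.0
        p_B = math.sqrt(E_B**2 - m_B**2)  # ~0.33 GeV/c at the Upsilon(4S)
        return (2*E_B*E_Y - m_B**2 - m_Y**2) / (2*p_B*p_Y)

    # A correctly reconstructed signal candidate should give a value in [-1, 1].
    print(cos_theta_B_Y(E_Y=3.1, p_Y=1.9, m_Y=2.45))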
The remaining background in the sample is split into the following categories.
• Uncorrelated decays, where the D * and originate from different B mesons in the event.
• Mis-identified leptons (fake leptons): the probability for a hadron being identified as a lepton is small but not negligible in the low momentum region, and is higher for muons.
• Fake D * candidates, where the D * is incorrectly reconstructed.
To model the $B \to D^{**}\ell\nu$ component, which is comprised of four P-wave resonant modes ($D_1$, $D_0^*$, $D_1'$, $D_2^*$) for both neutral and charged $B$ decays, we correct the branching ratios and form factors. The P-wave charm mesons are categorised according to the angular momentum of the light constituent, $j_\ell$: the $j_\ell^P = 1/2^+$ doublet of $D_0^*$ and $D_1'$, and the $j_\ell^P = 3/2^+$ doublet of $D_1$ and $D_2^*$. The shapes of the $B \to D^{**}\ell\nu$ $q^2$ distributions are corrected to match the predictions of the LLSW model [16]. An additional contribution from nonresonant modes is considered, although this rate appears to be consistent with zero in recent measurements.
To estimate the background yields we perform a binned maximum likelihood fit of the $D^*$ candidates in three variables: $\Delta m$, $\cos\theta_{B,D^*\ell}$, and $p^*_\ell$. Prior to the fit, the residual continuum background is estimated from off-resonance data and scaled by the ratio of the off- and on-resonance integrated luminosities and the $1/s$ dependence of the $e^+e^- \to q\bar{q}$ cross section. The kinematics of the off- and on-resonance continuum backgrounds are expected to be slightly different, and therefore binned correction weights are determined using MC and applied to the scaled off-resonance data. The remaining background components are modelled with MC simulation after correcting for the most recent decay modelling parameters, and for differences in reconstruction efficiencies between data and MC. Corrections are applied to the lepton identification efficiencies, hadron misidentification rates, and slow pion tracking efficiencies. The data/MC ratios for high-momentum tracking efficiencies are consistent with unity and are only considered in the systematic uncertainty estimates. The results from the background fits are given in Table III and Fig. 4.
VI. MEASUREMENT OF DIFFERENTIAL DISTRIBUTIONS
Measurement of the decay kinematics requires good knowledge of the signal $B$ direction to constrain the neutrino 4-momentum. To determine the $B$ direction we estimate the CM-frame momentum vector of the non-signal $B$ meson by summing the momenta of the remaining particles in the event ($\vec{p}^{\,*}_{\rm incl.}$) and choose the direction on the cone that minimises the difference to $-\vec{p}^{\,*}_{\rm incl.}$. To determine $\vec{p}^{\,*}_{\rm incl.}$ we exclude tracks that do not pass near the interaction point. The impact parameter requirements depend on the transverse momentum of the track, $p_T$, and are set to (a compact sketch of this selection follows the list): • $p_T < 250$ MeV/$c$: $dr < 20$ cm, $dz < 100$ cm, • $p_T < 500$ MeV/$c$: $dr < 15$ cm, $dz < 50$ cm, • $p_T \geq 500$ MeV/$c$: $dr < 10$ cm, $dz < 20$ cm.
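A compact sketch of the $p_T$-dependent impact-parameter selection listed above (momenta in GeV/$c$, impact parameters in cm; $|dz|$ is assumed where the text writes $dz$):

    def pass_inclusive_track_selection(pt, dr, dz):
        # pT-dependent impact-parameter requirements used when summing the
        # rest-of-event momenta; thresholds as listed above.
        if pt < 0.250:
            return dr < 20.0 and abs(dz) < 100.0
        elif pt < 0.500:
            return dr < 15.0 and abs(dz) < 50.0
        else:
            return dr < 10.0 and abs(dz) < 20.0

    print(pass_inclusive_track_selection(pt=0.3, dr=12.0, dz=30.0))  # True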
Some track candidates may be counted multiple times, due to low-momentum particles spiralling in the central drift chamber, or due to fake tracks fit to a similar set of detector hits as the primary track. Such duplicates are removed by looking for pairs of tracks with similar kinematics, travelling in the same direction with the same electric charge, or in the opposite direction with the opposite electric charge. Isolated clusters that are not matched to the signal particles (i.e. from photons or $\pi^0$ decays) are required to exceed energy thresholds, chosen to mitigate beam-induced background, of 50, 100 and 150 MeV in the barrel, forward end-cap and backward end-cap regions, respectively. We compute $\vec{p}_{\rm incl.}$ by summing the 3-momenta of the selected particles,
$\vec{p}_{\rm incl.} = \sum_i \vec{p}_i$,
where the index $i$ runs over all isolated clusters and tracks that pass the above criteria. This vector is then boosted into the CM frame. No mass assumption is used for the charged particles. The energy component, $E^*_{\rm incl.}$, is set to the experiment-dependent beam energy, $E^*_{\rm beam} = \sqrt{s}/2$.
We find that the resolutions of the kinematic variables are 0.020 for $w$, 0.038 for $\cos\theta_\ell$, 0.044 for $\cos\theta_V$ and 0.210 for $\chi$. Based on these resolutions, and the available data sample, we split each distribution into 10 equidistant bins for the $|V_{cb}|$ and form factor fits.
A. Fit to the CLN Parameterisation
We perform a binned $\chi^2$ fit to determine the following quantities in the CLN parameterisation: the product $\mathcal{F}(1)\eta_{\rm EW}|V_{cb}|$, and the three parameters $\rho^2$, $R_1(1)$ and $R_2(1)$ that parameterise the form factors. We use a set of one-dimensional projections of $w$, $\cos\theta_\ell$, $\cos\theta_V$ and $\chi$. This reduces complications in the description of the six background components and their correlations across four dimensions. This approach introduces finite bin-to-bin correlations that must be accounted for in the $\chi^2$ calculation.
We choose equidistant binning in each kinematic observable, as described above, and set the ranges according to their kinematically allowed limits. The exception is $w$: while the kinematically allowed range is between 1 and 1.504, we restrict this to between 1 and 1.50, such that we can ignore the finite mass of the lepton in the interaction.
The number of expected events in a given bin $i$, $N^{\rm theory}_i$, is given by
$N^{\rm theory}_i = N_{B^0}\, \mathcal{B}(D^{*+} \to D^0\pi^+)\, \mathcal{B}(D^0 \to K^-\pi^+)\, \tau_{B^0}\, \Gamma_i$,
where $N_{B^0}$ is the number of $B^0$ mesons in the data sample, $\mathcal{B}(D^{*+} \to D^0\pi^+)$ and $\mathcal{B}(D^0 \to K^-\pi^+)$ are the $D^*$ and $D$ branching ratios into the final state studied in this analysis, $\tau_{B^0}$ is the $B^0$ lifetime, and $\Gamma_i$ is the width obtained by integrating the CLN theoretical expectation within the corresponding bin boundaries. The expected number of events, $N^{\rm exp.}_i$, must take into account the finite detector resolution and efficiency,
$N^{\rm exp.}_i = \sum_j R_{ij}\, \epsilon_j\, N^{\rm theory}_j + N^{\rm bkg}_i$,
where $\epsilon_j$ is the probability that an event generated in bin $j$ is reconstructed and passes the analysis selection criteria, $R_{ij}$ is the detector response matrix (the probability that an event generated in bin $j$ is observed in bin $i$), and $N^{\rm bkg}_i$ is the number of expected background events, as constrained by the total background yield fit.
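The folding of the theory prediction through efficiency and response matrix can be written compactly as below; the numbers in the toy example are purely illustrative:

    import numpy as np

    def expected_events(N_theory, eff, R, N_bkg):
        # Forward folding: N_exp_i = sum_j R_ij * eff_j * N_theory_j + N_bkg_i
        #   N_theory : (n,) predicted events per generator-level bin
        #   eff      : (n,) selection efficiency per generator-level bin
        #   R        : (n, n) response matrix, R[i, j] = P(reco bin i | gen bin j)
        #   N_bkg    : (n,) expected background per reconstructed bin
        return R @ (eff * N_theory) + N_bkg

    # Toy example with 3 bins (each column of R sums to one):
    R = np.array([[0.9, 0.1, 0.0],
                  [0.1, 0.8, 0.1],
                  [0.0, 0.1, 0.9]])
    print(expected_events(np.array([100., 120., 80.]),
                          np.array([0.30, 0.28, 0.25]),
                          R, np.array([5., 6., 4.])))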
In the nominal fit we use the following $\chi^2$ function, based on a forward-folding approach:
$\chi^2 = \sum_{i,j} \left(N^{\rm obs}_i - N^{\rm exp}_i\right) C^{-1}_{ij} \left(N^{\rm obs}_j - N^{\rm exp}_j\right)$, (20)
where $N^{\rm obs}_i$ is the number of events observed in bin $i$ of our data sample, and $C^{-1}_{ij}$ is the inverse of the covariance matrix. The covariance matrix is the variance-covariance matrix whose diagonal elements are the variances and whose off-diagonal elements are the covariances of the elements in the $i$th and $j$th positions. The covariance is calculated for each pair of bins in $w$, $\cos\theta_\ell$, $\cos\theta_V$ and $\chi$. The off-diagonal elements are calculated as
$C_{ij} = N\,(p_{ij} - p_i\, p_j)$,
where $p_{ij}$ is the relative probability of the two-dimensional histograms between observable pairs, $p_i$ and $p_j$ are the relative probabilities of the one-dimensional histograms of each observable, and $N$ is the total size of the sample. The diagonal elements are the variances of $N^{\rm exp}_i$ (Eq. 22), which include the Poisson uncertainties associated with the numbers of events in the MC and data in each bin, plus the total error associated with the background, arising from the background fit procedure. We have tested this fit procedure using MC simulated data samples and all results are consistent with expectations, showing no signs of bias. The results from the fit are summarised in Table V and the fit correlation coefficients are given in Table IV. We find reasonable p-values and results consistent with the world averages. The comparison between data and the form factor fit is shown in Fig. 5.

B. Fit to the BGL Parameterisation

To perform the fit to the BGL parameterisation we follow the approach described in Ref. [14]. We truncate the series in the expansions of $f$ and $g$ at $\mathcal{O}(z^2)$ and at $\mathcal{O}(z^3)$ for $F_1$. This results in five free parameters (one more than in the CLN fit): the leading expansion coefficients of $f$, $g$ and $F_1$, with $a^{F_1}_0$ fixed by the zero-recoil relation. This number of free parameters can describe the data well, while higher-order terms cannot be well constrained unless additional information from the lattice is introduced. We perform a $\chi^2$ fit to the data with the same procedure as for the CLN fit described above. The resulting value for $|V_{cb}|$ is larger than that from the CLN parameterisation, and consistent with the inclusive approach. The fit results are given in Table VI and Fig. 6. The linear correlation coefficients are listed in Table VII. Correlations can be high in this fit approach. Only the combined SVD1+SVD2 sample is fitted, as the fit does not converge well with the smaller SVD1 data set in this parameterisation.
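A schematic of the $\chi^2$ construction used in both the CLN and BGL fits is sketched below; the covariance builder follows the description above (off-diagonals $N(p_{ij} - p_i p_j)$, diagonals overwritten with the full per-bin variance), and the toy numbers are illustrative only:

    import numpy as np

    def chi2(N_obs, N_exp, C):
        # Generalised chi^2 of Eq. (20): r^T C^-1 r with r = N_obs - N_exp
        r = np.asarray(N_obs, float) - np.asarray(N_exp, float)
        return float(r @ np.linalg.solve(C, r))

    def covariance(p2d, p_i, p_j, N, var_diag):
        # Off-diagonals C_ij = N (p_ij - p_i p_j), built from the relative
        # probabilities of the 2D and 1D histograms; diagonals are overwritten
        # with the full per-bin variance (data + MC + background terms).
        C = N * (p2d - np.outer(p_i, p_j))
        np.fill_diagonal(C, var_diag)
        return C

    C = np.array([[300.0, 40.0],
                  [40.0, 320.0]])            # toy 2x2 covariance
    print(chi2([310, 205], [300, 210], C))   # ~0.46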
VII. SYSTEMATIC UNCERTAINTIES
To estimate the systematic uncertainties on the partial branching fractions, the CLN form factor parameters, and $|V_{cb}|$, we consider the following sources: background component normalisations, MC tracking efficiency, charm branching fractions, $B \to D^{**}\ell\nu$ branching fractions and decay differentials, the $B^0$ lifetime, and the number of $B^0$ mesons in the data sample. The systematic uncertainties on the branching fraction, $\mathcal{F}(1)\eta_{\rm EW}|V_{cb}|$ and the CLN form factor parameters from the CLN fit are summarised in Table IX. These systematic errors are compatible with the BGL approach as well.
We estimate systematic uncertainties by varying each possible uncertainty source, such as the PDF shape and the signal reconstruction efficiency, with the assumption of a Gaussian error, unless otherwise stated. This is done via sets of pseudo-experiments in which each independent systematic uncertainty parameter is randomly varied using a normal distribution. The entire analysis is repeated for each pseudo-experiment and the spread on each measured observable is taken as the systematic error.
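The pseudo-experiment procedure can be summarised schematically as follows; the nuisance-parameter names and the toy "analysis" are hypothetical placeholders, not the actual analysis chain:

    import numpy as np

    rng = np.random.default_rng(1)

    def systematic_spread(measure, nuisances, n_toys=500):
        # Pseudo-experiment estimate of a systematic uncertainty: each nuisance
        # parameter is drawn from a Gaussian of the quoted width, the analysis
        # ('measure') is repeated, and the spread of the results is returned.
        #   measure   : callable mapping a dict of nuisance values to an observable
        #   nuisances : dict name -> (central value, Gaussian sigma)
        results = []
        for _ in range(n_toys):
            varied = {k: rng.normal(mu, sig) for k, (mu, sig) in nuisances.items()}
            results.append(measure(varied))
        return float(np.std(results))

    # Illustrative toy: an 'observable' depending on two hypothetical nuisances.
    toy = lambda p: 38.7 * p["slow_pi_eff"] / p["lepton_id_eff"]
    print(systematic_spread(toy, {"slow_pi_eff": (1.0, 0.02),
                                  "lepton_id_eff": (1.0, 0.01)}))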
The parameters varied are split into two categories, those that affect only the normalisation, and those that affect the differentials (shapes). We first list the latter contributions.
• The tracking efficiency corrections for low-momentum tracks vary with track $p_T$, as do the relative uncertainties. We conservatively treat the uncertainties in each slow pion $p_T$ bin as fully correlated.
• The lepton identification efficiencies are varied according to their respective uncertainties, which are dominated by contributions that are correlated across all bins in p lab and θ lab .
• The results from the background normalisation fit are varied within their fitted uncertainties. We take into account finite correlations between the fit results of each component.
• The uncertainties for the decays $\bar{B} \to D^{**}\ell^-\bar\nu$ are twofold: the indeterminate composition of each $D^{**}$ state and the uncertainty in the form factor parameters used for the MC sample production. The composition uncertainty is estimated based on the uncertainties of the branching fractions: $\pm 6\%$ for $\bar{B} \to D_1(\to D^*\pi)\ell\bar\nu$, $\pm 12\%$ for $\bar{B} \to D_2^*(\to D^*\pi)\ell\bar\nu$, $\pm 24\%$ for $\bar{B} \to D_1(\to D^*\pi\pi)\ell\bar\nu$ and $\pm 17\%$ for $\bar{B} \to D_1'(\to D^*\pi)\ell\bar\nu$. Where experimentally measured branching fractions are not available, we vary the branching fractions continuously from 0% to 200% of the MC expectation. We estimate an uncertainty arising from the LLSW model parameters by varying the correction factors within the parameter uncertainties.
• The relative number of $B^0\bar{B}^0$ meson pairs compared to $B^+B^-$ pairs collected by Belle has a small uncertainty, and affects only the relative composition of cross-feed signal events from $B^+$ and $B^0$ decays.
• Charged hadron particle identification uncertainties are determined with data using D * tagged charm decays.
The uncertainties that only affect the overall normalisation are: the tracking efficiency for high-momentum tracks, the branching ratios $\mathcal{B}(D^{*+} \to D^0\pi^+)$ and $\mathcal{B}(D^0 \to K^-\pi^+)$, the total number of $\Upsilon(4S)$ events in the sample, and the $B^0$ lifetime.
VIII. RESULTS

These results are consistent with, and more precise than, those published in Refs. [4,17–19]. We find that the value of the branching fraction is insensitive to the choice of parameterisation. We also present the results for $|V_{cb}|$ from the BGL fit, where the first uncertainty is statistical and the second systematic.
These results are consistent with those based on a preliminary tagged approach by Belle [20], as used in Refs. [14,15]. Both sets of fits give acceptable $\chi^2$/ndf: the data therefore do not discriminate between the parameterisations. The result with the BGL parameterisation has a larger fit uncertainty.
We perform a lepton flavour universality (LFU) test by forming the ratio of the branching fractions of the electron and muon modes. The value of this ratio is
$\mathcal{B}(B^0 \to D^{*-}e^+\nu)\, /\, \mathcal{B}(B^0 \to D^{*-}\mu^+\nu) = 1.01 \pm 0.01 \pm 0.03$,
where the first error is statistical and the second is systematic. The systematic uncertainty is dominated by the electron and muon identification uncertainties, as all others cancel in the ratio. The result is consistent with unity and constitutes the most stringent test of LFU in $B$ decays.
IX. CONCLUSION
In this conference paper we present a new study by the Belle experiment of the decay $B \to D^*\ell\nu$. We present the most precise measurement of $|V_{cb}|$ from exclusive decays, and the first direct measurement using the BGL parameterisation. The BGL parameterisation gives a higher value for $|V_{cb}|$, which is closer to that expected from the inclusive approach [1,22–24]. We also place stringent bounds on lepton flavour universality violation, which is observed to be consistent with zero. | 2018-11-19T23:25:21.000Z | 2018-09-10T00:00:00.000 | {
"year": 2018,
"sha1": "97ea9a96d4955373e4c7813abf4c940f0b2bb3ac",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.100.052007",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "d6a28c3059f9cc7a03517a318690fd705d65fde8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3441019 | pes2o/s2orc | v3-fos-license | Patients’ Expectations and Perceptions of Service Quality in the Selected Hospitals
Background: A hospital's success depends on patients' expectations, perceptions, and judgments of the quality of the services it provides. This study was conducted to assess patients' perceptions of and expectations from the quality of inpatient health care in Vali-Asr Hospital, Ghaemshahr, and Imam Khomeini and Shafa Hospitals, Sari. Materials and Methods: This study is applied regarding its objective and, considering the research methodology, descriptive–analytical. The sample consists of 600 patients with at least 24 hours of hospitalization in the internal, surgery, women, and children wards of Vali-Asr, Imam Khomeini, and Shafa Hospitals. Using the stratified random sampling method, classes were sampled in proportion to their size. The required data were collected through the standard SERVQUAL questionnaire and analyzed using the SPSS software. Results: The overall mean value and standard deviation of expectations were 4.06 and 0.45, respectively; for perceptions, the mean value was 4.0 and the standard deviation was 0.33. In terms of patients' and visitors' priorities among the SERVQUAL dimensions, the highest priority was related to empathy, the second to physical appearance (tangibles), the third to responsiveness, the fourth to assurance, and the lowest to reliability. Examining the gap between patients' perceptions and expectations, the widest gap was observed in Shafa Hospital, with a mean and SD of (-0.92±0.39), and the lowest gap was observed in Vali-Asr Hospital (-0.39±0.44). According to the Kruskal–Wallis test, the differences observed among these three hospitals were significant. Conclusion: The results showed that patients' expectations had not been met in any of the examined dimensions and their satisfaction had not been achieved. It therefore seems necessary for managers and the relevant authorities to plan and pay special attention to this important issue.
INTRODUCTION
The main mission of hospitals is to provide quality care services for patients and to meet their needs and expectations. Fulfilling this important mission requires the institutionalization of quality in hospitals (1). Accordingly, in 1983, the American National Health Service passed a law requiring all health care centers in America to use recipients' comments in setting their plans and to consider these comments in the evaluation of training programs designed for the staff. With the increasing number of hospitals and hospital activities, improving the quality of health care services has become a priority concern for patients (2). The quality of health services has become a pressing issue in many countries, especially developing and Third World countries. In our country, patients are always looking for a hospital with better quality of health care services. Therefore, better service quality can be considered a means to achieve more support, competitive advantage, and long-term profitability (3). Today, quality is defined by customers' demands, and customers' perceptions and expectations are considered the most fundamental determinants of quality (4). Providing sufficient information on the customer's perception of service quality can help organizations identify the dimensions that affect their competitive advantage; on the other hand, it can prevent the wasting of resources (5). In order to determine the service quality gap in hospitals and health care centers, the SERVQUAL approach has been used in many studies. This conceptual model of service quality was introduced in 1985 by Parasuraman et al. The tool measures patients' perceptions and expectations of services in 5 dimensions: tangibles (physical or concrete dimensions), reliability, responsiveness, assurance, and empathy. The difference between customers' expectations and their perceptions of the service provided is called the service quality gap (6). Successful organizations try to meet environmental demands and needs, and this is not possible unless organizations understand the need to move towards being customer-centered. In fact, customer-centered organizations set their activities based on the expectations and preferences of their customers, aim to satisfy customers' needs and expectations, and consider these expectations as service quality standards (7). Hospitals are the most important element of the health care system; about half of health care costs are allocated to them, since they are the largest and most expensive operational units of health care systems, attracting a majority of capital, financial, and human resources (8)(9)(10)(11).
Considering that patients compare the quality of services in health systems with their expectations, patients' expectations and perceptions of service quality in hospitals of Mazandaran University of Medical Sciences were evaluated in 2014 in order to assess the gap between the two. By determining the quality gap of the services provided by these health centers, this study aimed to provide proper grounds for the development of programs and projects by authorities and service suppliers to increase patients' satisfaction with services.
MATERIALS AND METHODS
This study is applied regarding its objective and, considering the research methodology, it is a descriptive–analytical study. In terms of data collection, it is regarded as a survey. Two methods were used to gather information: the library method and the standard SERVQUAL questionnaire. This questionnaire consists of general questions (age, sex, marital status, education, record of hospitalization, number of hospitalizations, length of hospitalization) as well as 22 questions in the areas of tangibles (questions 1 to 4), reliability (questions 5 to 9), responsiveness (questions 10 to 13), assurance (questions 14 to 17), and empathy (questions 18 to 22), each on a 5-point Likert scale (strongly disagree, disagree, indifferent, agree and strongly agree). To assess the validity of the questionnaire, content validity was used: considering that the questionnaire is a standard one, professors in this field were also consulted, and the content validity was approved.
To test the reliability, internal consistency (Cronbach's alpha) was used; the values of Cronbach's alpha were .88 for the overall service quality dimension, .83 for tangibles, .87 for reliability, .90 for responsiveness, .91 for assurance, and .80 for empathy. The study population included patients with at least 24 hours of hospitalization in the internal, surgery, women, and children wards of Vali-Asr, Imam Khomeini, and Shafa Hospitals; according to the statistics of 2013, there were 10,000 such patients. Subjects were selected using the stratified random sampling method, in which each hospital ward is regarded as a class and the sample size in each class is proportional to the size of the class. The sample size was determined to be 622 subjects using the Cochran formula (with a Type I error of .01, an estimation error of 5%, and p = .5). In order to compare the current and desired status and to determine the priority of the services provided, the paired sample t-test and Friedman's rank test were used, respectively.
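The reported sample size can be reproduced with Cochran's formula; applying a finite-population correction for the stated population of 10,000 yields 622, which suggests (as an assumption on our part, since the paper does not state it explicitly) that this correction was used:

    import math

    def cochran_n(z, p, e, N=None):
        # Cochran sample size; with finite-population correction if N is given
        n0 = z**2 * p * (1 - p) / e**2
        if N is None:
            return n0
        return n0 / (1 + (n0 - 1) / N)

    # z = 2.576 for a Type I error of .01, p = .5, estimation error e = .05,
    # population N = 10,000 (as reported above):
    print(round(cochran_n(2.576, 0.5, 0.05, N=10_000)))  # ~622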
RESULTS
The gender distribution of respondents showed 161 men (26.8%) and 439 women (73.2%). Regarding marital status, 64 respondents were single (10.7%) and 536 were married (89.3%). Regarding age, the descriptive statistics showed that the mean, median, mode, standard deviation, minimum, and maximum were 39.94, 38, 37, 10.99, 5, and 70, respectively. With regard to patients' level of education, there were 36 illiterate persons (6%) and 79 persons with below-diploma education (13.2%). After collecting data on the study variables and running the Kolmogorov-Smirnov test, the results showed that all variables in both the perception and expectation areas are non-normally distributed. The binomial test was then used to check the status of the variables examined. Since a 5-point Likert-scaled questionnaire was used, the status of perception and its dimensions was checked by the binomial test with a cut-off point of 3: the proportion of respondents with scores less than 3 is compared with the proportion with scores greater than 3, and if the sig. value is less than .05, the hypothesis of equality of these two categories is rejected and the status is determined to be appropriate or inappropriate. Regarding the perception scores, all 600 participants obtained a score greater than 3; that is, 100 percent of the participants obtained perception scores greater than 3, which is a satisfactory result. Overall, it can be said that the perception score in the hospitals under consideration is greater than average; the scores of the perception dimensions show the same condition and are higher than average. To check the gap between perception and expectation scores, the paired Wilcoxon test was used. As observed in Table 2, the total mean value for expectation is 4.06 with a standard deviation of 0.45, while the mean and standard deviation of perception are 4.0 and 0.33, respectively. Given the Wilcoxon Z value of -19.77 and the sig. value of less than 0.05, the hypothesis of equal means for expectations and perceptions is rejected. This means that there is a significant difference between visitors' and patients' expectations and perceptions in Sari Imam Khomeini Hospital, Ghaemshahr Vali-Asr Hospital, and Sari Shafa Hospital. The mean and standard deviation of tangibles in the field of expectation are 4.62 and 0.47, respectively, and in the field of perception 4.62 and 0.47, respectively. Given the z-value of -19.45 and the sig. value of less than 0.05, the hypothesis of equal tangibles mean scores in the fields of expectation and perception is rejected. The mean and standard deviation of reliability in the field of expectation are 4.62 and 0.47, respectively, and in the field of perception 4.31 and 0.36, respectively. Given the z-value of -13.12 and the sig. value of less than 0.05, the hypothesis of equal reliability mean scores in the two fields is rejected. The mean and standard deviation of responsiveness in the field of expectation are 4.61 and 0.47, respectively, and in the field of perception 3.89 and 0.46, respectively. Given the z-value of -19.27 and the sig.
value of less than .05, the hypothesis of equal responsiveness mean scores in the fields of expectation and perception is rejected. The mean and standard deviation of assurance in the field of expectation are 4.61 and 0.46, respectively, and in the field of perception 3.82 and 0.48, respectively. Given the z-value of -26.30 and the sig. value of less than 0.05, there is a significant difference between visitors' and patients' expectations and perceptions with regard to the assurance dimension in Sari Imam Khomeini Hospital. Finally, the mean and standard deviation of empathy in the field of expectation are 4.60 and 0.47, respectively, and in the field of perception 3.97 and 0.34, respectively. Given the z-value of -38.18 and the sig. value of less than 0.05, the hypothesis of equal empathy mean scores in the two fields is rejected. As a result, there is a significant difference between visitors' and patients' expectations and their perceptions in all dimensions examined in Sari Imam Khomeini Hospital, and patients' satisfaction in these fields has not been met (Table 2: paired Wilcoxon test examining the differences between perceptions and expectations). Next, we examine in which dimension the gap between perceptions and expectations is largest; to this end, the Friedman test is used. As observed in Table 3, the greatest gap is for assurance, with a mean of -0.78, a standard deviation of 0.52, and a rank mean of 2.27; the smallest gap is related to reliability, with a mean and SD of (0.52±4.83). Given the Friedman statistic of 664.34 and the sig. value of less than 0.05, the equality hypothesis is rejected, and we conclude that the gaps in the different dimensions are unequal.
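For readers wishing to reproduce this analysis pattern, a schematic Python sketch follows; the score arrays are randomly generated placeholders (the real per-patient data are not published), so only the structure of the tests, not the numbers, matches the study:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 600  # respondents

    # Hypothetical per-dimension scores on a 5-point scale (illustration only).
    dims = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]
    expect = {d: np.clip(rng.normal(4.6, 0.4, n), 1, 5) for d in dims}
    perceive = {d: np.clip(rng.normal(3.9, 0.5, n), 1, 5) for d in dims}

    # Paired Wilcoxon test per dimension (perception vs expectation).
    for d in dims:
        stat, p = stats.wilcoxon(perceive[d], expect[d])
        print(f"{d:14s} gap={np.mean(perceive[d]-expect[d]):+.2f}  p={p:.1e}")

    # Friedman test: are the gaps ranked equally across the five dimensions?
    gaps = [perceive[d] - expect[d] for d in dims]
    print(stats.friedmanchisquare(*gaps))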
As can be observed in the above table, the results of the Kruskal-Wallis test show that the greatest gap among the three hospitals was related to Sari Shafa Hospital, with a mean of -0.92 and a standard deviation of 0.39. The lowest gap was in Vali-Asr Hospital, with a mean of -0.39 and a standard deviation of 0.44; the gap in Sari Imam Khomeini Hospital was -0.35, with a standard deviation of 0.04. According to the Kruskal-Wallis statistic and the sig. value of less than .05, the difference observed among the three hospitals was significant.
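The hospital comparison corresponds to a Kruskal-Wallis test on the per-patient gap scores of the three hospitals; the sketch below generates placeholder data with the reported means and SDs (the true group sizes are not given, so 200 per hospital is assumed):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Hypothetical per-patient quality gaps for the three hospitals.
    gap_shafa = rng.normal(-0.92, 0.39, 200)
    gap_valiasr = rng.normal(-0.39, 0.44, 200)
    gap_imam = rng.normal(-0.35, 0.04, 200)

    print(stats.kruskal(gap_shafa, gap_valiasr, gap_imam))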
DISCUSSION
The results showed that the greatest gap among the three hospitals with regard to the dimensions of tangibles, reliability, responsiveness, assurance, and empathy was related to Sari Shafa Hospital, and the lowest gap for the dimensions discussed was observed in Ghaemshahr Vali-Asr Hospital. According to the statistics and the sig. value of less than .05, a significant difference was observed among these three hospitals. Examining the gap between perceptions and expectations, the greatest gap was related to the assurance dimension and the lowest gap to reliability, and the gaps in the different areas were unequal. In none of the surveyed dimensions were patients' expectations met, and their satisfaction was not obtained. The results of a study conducted by Tabibi et al. showed that there is a significant difference between patients' perceptions and expectations regarding the 5 dimensions of service quality in the hospitals studied. Patients visiting the clinics ranked assurance, with a score of 4.41, and personnel responsiveness, with a score of 2.21, as the most important and the least important dimensions (11). The findings of Havasbeigi's study also showed significant differences between patients' perceptions and expectations regarding the five dimensions of service quality in the hospitals; patients visiting the clinics ranked tangibles, with a score of 3.47, and assurance, with a score of 2.06, as the most important and the least important dimensions (12). Hekmatpour et al. examined the quality of health care in Arak hospitals and showed that there are significant differences in all dimensions between patients' expectations and perceptions of service quality, and that patients' perception of quality was consistent with their expectations in none of the dimensions. That is, all hospitals failed to meet patients' expectations in any of the quality dimensions; moreover, the overall rate of perceived service quality did not correspond to patients' average expectations. However, in Hekmatpour's study, the greatest quality gap was related to the access-to-health-care dimension and the lowest gap to service assurance, which is not compatible with the current study (13). Also, in all service domains in Caha's study, patients' expectations of the services provided were higher than their perceptions, and the gaps between patients' perceptions and expectations were negative. The highest negative gap was in the responsiveness dimension and the lowest negative gap in the assurance dimension; the negative gaps indicate that patients' expectations of the services provided are higher than their perceptions (14). In a study by Sabahi et al. evaluating hospital service quality from the perspective of hospitalized patients, the results showed that the mean score was significant for all dimensions in the hospitals, and the highest and lowest quality scores were related to empathy and tangibles, respectively (15). The results of Abedi's study showed that, in the perception part, there was a significant difference in all groups except for responsiveness and behavior, while at the expectation level no significant difference was found in the dimensions except for access; the satisfaction status of patients in the Imam Hospital clinic was also examined (17). The results revealed that there is a gap between the expected and perceived quality of services for hospital patients.
Patients' expectations are beyond their perceptions of the current situation, and in none of the service aspects are their expectations met. However, patients' expectations in both the tangibles and responsiveness dimensions of service quality were higher than in the other dimensions, and the perceived service quality in these dimensions was lower than in other aspects; these two dimensions therefore had the greatest impact on the service quality gap. In Mahdizadeh's study, which evaluated the quality of health care and hospitalized patients' satisfaction using the newly developed SERVQUAL method regarding the physical environment and facilities, the patients' expectation score in all aspects was higher than their perception score; the largest gap was related to tangibles and the lowest gap was observed in responsiveness (18). The problem of service quality is mostly related to those organizations that do not focus on understanding and meeting customers' needs and demands. Service organizations should put themselves in their customers' shoes and base their policies on their customers' views. A lack of direct relationship with customers leads to customers' expectations not being met; as a result, there would be disagreement among customers regarding the service quality provided and its assurance (19). This study and other studies conducted in hospitals and other health care centers show that patients' expectations are not met in any of the aspects and that patients are not satisfied. The negative gap (expectations exceeding perceptions) in all dimensions of quality shows that it is necessary to improve service quality in all dimensions. In order to lessen the gap in all five dimensions of quality and provide the desired services, it is recommended that hospital managers, through planning and optimal management, take patients' needs into account (20). This requires special attention to planning from managers and the relevant authorities. In addition, the proper use of tools such as SERVQUAL seems necessary for evaluating hospital service quality and for enabling managers and experts to identify the sources of dissatisfaction. And since the difference between customers' expectations and the services they receive increases over time regardless of new approaches or actions, hospital authorities should implement expectation management, through which they become aware of the sources of their customers' expectations and ensure that their customers' needs are reasonable and within their own abilities and their organization's capabilities to meet.
• Authors' contribution: All authors were included in all phases of preparing this article, including final proofreading.
• Acknowledgment: The authors gratefully acknowledge the study team. The study was supported by a grant from the Health Sciences Research Center, Mazandaran University of Medical Sciences.
• Conflict of interest: none declared. | 2016-05-12T22:15:10.714Z | 2016-04-01T00:00:00.000 | {
"year": 2016,
"sha1": "85b873bfc512675b089427e51f3f016fa1df29d9",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4851526?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "85b873bfc512675b089427e51f3f016fa1df29d9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
210231099 | pes2o/s2orc | v3-fos-license | Validation of Combustion Models for Lifted Hydrogen Flame
Within a Reynolds Averaged Numerical Simulation (RANS) approach for turbulence modelling, a computational investigation of a turbulent lifted H2/N2 flame is presented. Various turbulent combustion models are considered, including the Eddy Dissipation Model (EDM), the Eddy Dissipation Concept (EDC), and the composition Probability Density Function transport model (PDF), in combination with different detailed and global reaction mechanisms. Turbulence is modelled using the Standard k-ε model, which has proven to offer good accuracy, based on a preceding validation study for an isothermal H2/N2 jet. Results are compared with the published measurements for a lifted H2/N2 flame, and the relative performance of the turbulent combustion models is assessed. It is observed that the prediction quality can vary largely depending on the reaction mechanism and the turbulent combustion model. The best and quite satisfactory agreement with experiments is provided by two detailed reaction mechanisms applied with the PDF model.
Introduction
Power generation by gas and steam turbines [1] depends largely on the combustion process. Parallel to the efforts for exploiting new energy sources [2] as well as recovery techniques [3], combustion will continue to play an important role in power generation. This is true also for renewables, as biomass [4] plays an important role.
Combustion of hydrogen and hydrogen-containing fuels plays an important role in clean and efficient energy supply, environmental protection and resource efficiency. Hydrogen offers an attractive alternative for storing excess energy in power generation from photovoltaics and wind energy. Furthermore, instead of combustion [5], the gasification of waste, biomass and coal [6] offers good possibilities for efficient and clean power generation. The so-called synthesis gas (syngas), which results as the product of gasification, contains, in addition to carbon monoxide and small fractions of methane, rather significant amounts of hydrogen. Additionally, there is a growing interest in nuclear-energy-based hydrogen production, i.e., using nuclear power for electrolysis, thermochemical cycles or hybrid approaches to produce hydrogen [7]. From the environmental perspective, its subsequent combustion is most welcome, since it produces no carbon dioxide.
Utilization of hydrogen or hydrogen-blend fuels in combustion systems represents a great challenge. Hydrogen is extremely reactive and, compared to other gases, has different material properties, so that even in small proportions it can alter the combustion properties of the gas mixture. In premixed combustion, a potential problem is an increased flashback propensity [8]. The counterpart of flashback is blow-off [9]. The forerunner of blow-off is lift-off, as the flame root leaves the rim. Following lift-off, with a further increase of the jet speed, a flame stabilized at a distance from the rim, i.e. a lifted flame, can be obtained [10].
Computational analysis of turbulent lifted flames is a very challenging task, due to the modelling of turbulence and its interaction with chemistry [11]. For turbulence modelling, although the Large Eddy Simulation (LES) approach [12][13][14] is being increasingly used in practical applications, its adequate use in industrial development processes is still very challenging and the RANS approach [15] is still frequently preferred to this purpose. Given this, the present work is focused on the RANS methodology, and in the following review, RANS based approaches will be considered only.
Prediction of lifted turbulent jet flames is a demanding task. A partially premixed state is reached at the flame base, which leads to complex stabilization mechanisms [16]. In configurations where the fuel jet issues into a hot coflow, like the one presently investigated, autoignition emerges as a further possible stabilization mechanism. Thus, the applied turbulent combustion model should sufficiently accommodate the mentioned effects.
As outlined above, different and quite sophisticated models have already been used to predict turbulent, lifted hydrogen flames.
However, in the authors' opinion, there is still a need for further investigation. The purpose of the present paper is to present a "coherent" validation study for a cascade of different turbulent combustion modelling strategies over a wide range. This coherent validation study, based on consistent/comparable strategies in all further aspects of the mathematical modelling, numerical methods and gridding, is believed to be of additional value to the research community, as it will provide a basis for a direct comparison of a wide range of turbulent combustion models, unmasked by other effects, for the present problem.
Problem definition
The "main" considered test case is the atmospheric, lifted flame of a turbulent H2/N2 jet in vitiated coflow, which was experimentally investigated by Cabra et al. [24]. This comprises a free jet of H2/N2 mixture (with H2 volume fraction of 25.37%) in a co-flow of exhaust gases stemming from lean hydrogen combustion (with oxygen volume fraction of 14.74%).
In comparing the predictions with the experimental results, mainly the lift-off heights of the flames for different values of the coflow temperature are monitored. For this purpose, the experimental data of Wu et al. [25] are considered, which were obtained on an experimental setup equivalent to that of Cabra et al. [24].
For selecting the turbulence model to be used, a study has been carried out in which the turbulence models are validated in isolation from the combustion model, on an isothermal, non-reacting test case that nevertheless resembles the setup of the main, combusting test case, insofar as the gas composition (containing H2), and thus the density, is variable.
As test case for this purpose, the measurements of Sautet and Stepowski [26] are considered. The test rig was an open, atmospheric one at ambient conditions. In the experiments, non-reacting turbulent jets of H2/N2 mixtures discharging into a coflow air stream were investigated.
Models
The general-purpose, finite volume based CFD code ANSYS Fluent 18.0 [27] is used in the computational analysis of the problem.
Outline
A two-dimensional, axisymmetric formulation is used. The medium is considered to be an ideal gas mixture with Newtonian behavior. Buoyancy effects are neglected, which is reasonable due to the prevailing rather high Froude numbers [28]. The radiative heat transfer [29] is also neglected and the reacting system is assumed to be adiabatic. An accurate modelling of the molecular material properties is attempted. For all species, the specific heat capacities are represented by a pair of (low and high temperature ranges) fourth order polynomials of temperature [30]. The viscosities, thermal conductivities, as well as the multi-component diffusion coefficients are calculated according to the kinetic theory [9].
The flow turbulence is described within a RANS framework [15], as already mentioned above. Among the two-equation turbulence models, the specific dissipation rate (omega, ω) based models have gained popularity [15,31]. However, since the present problem is of a completely free-shear type, only the dissipation rate (epsilon, ε) based two-equation models are considered, in particular the Standard k-ε [27,32], the RNG k-ε [27,33], and the Realizable k-ε models [27,33,34]. For scalar turbulent fluxes, the gradient-diffusion approximation is used, assuming constant turbulent Schmidt numbers: 0.85 for the energy equation and 0.7 for the further scalars.
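For reference, the eddy viscosity underlying all three k-ε variants has the same algebraic form; the sketch below shows it for the Standard k-ε model with its usual constant C_μ = 0.09 (the input values are illustrative, not taken from this study):

    def mu_t_k_epsilon(rho, k, eps, C_mu=0.09):
        # Eddy viscosity of the Standard k-epsilon model:
        # mu_t = rho * C_mu * k^2 / eps (C_mu = 0.09 is the standard constant)
        return rho * C_mu * k**2 / eps

    # Example: air-like density, k = 10 m^2/s^2, eps = 500 m^2/s^3
    print(mu_t_k_epsilon(rho=1.2, k=10.0, eps=500.0))  # ~0.0216 kg/(m s)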
The velocity-pressure coupling is treated by the SIMPLEC scheme [27]. For the discretization of the convective terms, the QUICK scheme [27] is used, which is considered to possess formal accuracy of third order. As no under-relaxation is applied to pressure, the under-relaxation factors range between 0.4-0.7 and 0.8-1.0 for the velocities and the scalar quantities, respectively.
For convergence, the threshold value for the normalized residual has been set to 10⁻⁸ for the energy equation and to 10⁻⁵ for the remaining equations.
Combustion models
As single-step global mechanisms, those of Kudriakov et al. [36] (KU) and Marinov et al. [37] (M) are considered (comprising the main species, H2, O2, H2O), where the former and latter consider an irreversible and a reversible reaction, respectively.
As detailed mechanisms, the GRI Mech 3.0 [38] (GRI) and the mechanisms of Li et al. [39] (LI), Conaire et al. [40] (CON) and Keromnes et al. [41] (KER) are considered. For purely mixing-controlled combustion, the time-averaged consumption rate is calculated, in the Eddy Dissipation Model (EDM), from the dissipation rate of turbulence eddies [42]. The chemical kinetics effects (K) are taken into account in an ad-hoc manner, calculating the rate from an Arrhenius expression neglecting fluctuations [9], comparing the two rates and taking the smaller one [27].
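A minimal sketch of the EDM+K rate logic described here is given below; A = 4.0 is the usual Magnussen constant (the ANSYS Fluent default), the product-limiter term of the full Magnussen model is omitted, and all input values are illustrative:

    def edm_rate(rho, k, eps, Y_F, Y_O, s, A=4.0):
        # Eddy Dissipation Model mixing rate (fuel consumption, kg/(m^3 s)),
        # limited by the deficient reactant; s is the stoichiometric
        # oxidiser-to-fuel mass ratio, A = 4.0 the Magnussen constant.
        return A * rho * (eps / k) * min(Y_F, Y_O / s)

    def edm_k_rate(mixing_rate, arrhenius_rate):
        # EDM with kinetic limiter (EDM+K): the smaller of the two rates governs
        return min(mixing_rate, arrhenius_rate)

    r_mix = edm_rate(rho=0.5, k=20.0, eps=3000.0, Y_F=0.02, Y_O=0.20, s=8.0)
    print(edm_k_rate(r_mix, arrhenius_rate=5.0))  # kinetics-limited here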
An improved version of the eddy dissipation idea is the Eddy Dissipation Concept (EDC), where the time-averaged conversion rate is calculated by taking the mixing and kinetics effects into account in a combined, more sophisticated manner, treating the small turbulent scales as well-stirred reactors [27,43].
In the composition PDF transport (PDF) model, the averaged values of the thermochemical variables are obtained from a single-point, joint probability density function, which is computed from its transport equation; the latter is derived from the governing equations under the application of some closure models [18,27].
Isothermal turbulent flow
The solution domain is two-dimensional axisymmetric, having a rectangular shape in the plane of the axial (x) and radial (r) coordinates.
The domain starts at the exit plane of the jets, extending 22d in the axial direction. Its radial extension is 11d. The inlet boundaries representing the central and coaxial jets are placed on the left boundary of the domain (x=0), whereas the right boundary (x=22d) is defined as a pressure boundary with a prescribed constant pressure and zero-gradient conditions for the remaining variables. The lower (r=0) boundary is the symmetry axis, whereas the upper one (r=11d) is also defined as a symmetry surface.
On the left boundary (x=0), the part that surrounds the annular jet is defined to be a pressure boundary, again, (ambient pressure) that allows an inflow, i.e. the suction of ambient air by the ejector effect. At inlet, the measured values are prescribed as the inlet boundary conditions. Boundary conditions of turbulence quantities are derived from assumed turbulence intensities and length scales.
Computational grids are generated as structured, rectangular grids, with axial and radial concentration of the nodes near the jet inlet. For determining the adequate grid resolution, a grid independence study is performed. In the grid independence study, the Standard k-ε model is used as the turbulence model. Table 1 displays the variation of the potential core length (L) with grid fineness, where N is the total number of nodes. One can see that sufficient grid independence is achieved for N > 5,000. In the further calculations for the validation of turbulence models, the finest grid is used, which had 16,200 nodes.
The predicted variations of the half value radius (δ) at the axial position of x/d=20 are compared with the experimental values in Table 2.
One can see that the predictions delivered by the Standard k-ε model agree rather well with the experiments, better than the Realizable and RNG versions, for the present, variable density H2/N2/Air jet (Table 2). Thus, the Standard k-ε model is selected.
Flame
Similar to the isothermal test case, the solution domain consists of a cylinder, the bottom of which is placed at the jet exit. The axial coordinate (x) extends along the axis, in the main flow direction, with x=0 placed at the jet exit (the jet inlet boundary is centered at the cylinder bottom). The domain size in the radial and axial directions is about 20d and 80d, respectively. The cylinder bottom is covered by two inlet boundaries, i.e. a central (jet) one and an annular (coflow) one. Both inlets are separated by a thin, ring-shaped wall boundary representing the nozzle lip. The top of the cylinder is defined as the outlet boundary, whereas the jacket of the cylinder is assumed to be an impermeable slip boundary. At the outlet boundary, a constant static pressure is prescribed, along with vanishing normal-gradient conditions for the remaining quantities. At the inlets, top-hat profiles are prescribed for all convective-diffusively transported variables, in accordance with the measured values. For the turbulence quantities, a turbulence intensity of 4% is assumed at both inlets. The jet diameter and the size of the individual holes in the outer disk are taken as the basis for assuming the length scales of the jet and the coflow, respectively. Computational grids are generated as structured, rectangular grids, with axial and radial concentration of the nodes near the inlets as well as in the central and mixing zones. The grid independence study is performed using the EDM+K turbulent combustion model in combination with the global reaction mechanism (KU). Table 3 presents the variation of the predicted centerline temperature (T) at ten diameters downstream of the jet inlet (x/d=10) for six different grids with a changing total number of nodes (N). One can see that sufficient grid independence is achieved for the finer grids. In the further calculations, the finest grid, having 16,000 cells, is used.
The temperature and oxygen mole fraction fields predicted by the EDM+K model, using the global mechanism M are presented in Figure 1 for the coflow temperature (TCO) of 1060 K.
The lifted flame can easily be recognized in the temperature field (Fig. 1a). In Fig. 1b, one can see that oxygen penetrates into the fuel jet along the lift-off distance, and, is, then, rapidly consumed by the combustion reactions starting at the flame root, causing a local oxygen depleted zone. Table 3. Centerline temperature at x/d=10 as function of total number of grid nodes. Inspecting the data of Wu et al. [6] one can deduce that the measured lift-off heights in their dependence to the temperature of the coflow-stream can be represented by the following relationship with a quite good accuracy 68.12 CO T h 81.37 d 1000 with a coefficient of determination of 0.98. Predicted lift-off heights by the single-step reaction mechanism (M), in combination with EDM, EDC models, i.e. by EDM+K-M, and EDC-M, as nondimensionalized by the central jet diameter, and the experimental results (EXP), as represented by the correlation expressed by Eq. (1), are presented in Table 4, for different coflow temperatures.
One can see that the calculations predict qualitatively the right trend, i.e. decreasing lift-off height with increasing temperature. However, this trend is strongly underpredicted by the calculations and the quantitative deviations from the measurements are quite large.
Similarly, the calculated lift-off heights by detailed reaction mechanisms (CON, LI, KER), in combination with EDC, i.e. EDC-CON, EDC-LI and EDC-KER models are compared with experiments in Table 5. Empty boxes in the table indicate that no flame could be predicted for the corresponding temperature. One can see that the detailed mechanisms with EDC predict too small lift-off heights at high temperatures, with a very sudden and rapid increase with decreasing temperature followed by blow-off, beyond a certain value. Using the reaction mechanism GRI with EDC, no flame could be predicted for the listed temperatures. Calculated lift-off heights by detailed reaction mechanisms (GRI, CON, LI, KER), in combination with PDF, i.e. PDF-GRI, PDF-CON, PDF-LI and PDF-KER models are compared with experiments in Table 6. Empty boxes in the table indicate that no flame could be predicted for the corresponding temperature. With the GRI mechanism, a flame could be predicted only for the highest temperatures, however, with an extremely large lift-off height. With the mechanism CON, a flame is predicted for all temperatures, with an overprediction of the lift-off height throughout. The mechanisms LI and KER show a very good agreement with each other (except for the lowest temperature 1010 K, where PDF-KER overpredicts PDF-LI) and a quite fair agreement with the measurements. For high temperatures, PDF-LI and PDF-KER predict very close values to the experiments, and the degree of agreement is decreases as the temperature is reduced and the lift-off height increases. Still the overall agreement of PDF-LI and PDF-KER with the experiments is much better than that of the other simulation methods considered here.
Conclusions
A computational investigation of a turbulent lifted H2/N2 flame is presented, based on a RANS turbulence modelling approach, using the Standard k-ε model. Detailed reaction mechanisms of Li et al. [39] and Keromnes et al. [41] applied with PDF are observed to deliver the best predictions of the lift-off height (h) as function of the coflow temperature (TCO).The mechanism of Conaire et al. [40] with PDF overpredicts h. These reaction mechanisms do not perform that well, when applied with EDC, underpredicting h for high TCO with an abrupt increase of h for TCO lower than a certain value, followed by a too early blow-off. The GRI Mech 3.0 [38] predicted a blow-off for all TCO when applied with EDC, and a lifted flame, with an extremely overpredicted h, for TCO > 1045K, when applied with PDF. The global mechanism of Marinov et al. [37] applied with EDM+K and EDC could deliver the trend of increasing h with decreasing TCO, whereas the EDC version delivered quantitatively better results. The global mechanism of Kudriakov et al. [41] with EDM+K predicted attached flames for all TCO. | 2019-11-14T17:05:53.079Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "8655fd0fab528a6163128335492627a4091e2f88",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2019/54/e3sconf_icchmt2019_01014.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d544abe706b68de2cbea32992041cb19f8763c05",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
6967611 | pes2o/s2orc | v3-fos-license | Rothamsted Repository Download
In any metabolomics experiment, robustness and reproducibility of data collection is of vital importance. These become more important in collaborative studies where data is to be collected on multiple instruments. With minimisation of variance in sample preparation and instrument performance it is possible to elucidate even subtle differences in metabolite fingerprints due to geno-type or biological treatment. In this paper we report on an inter laboratory comparison of plant derived samples by [ 1 H]-NMR spectroscopy across five different sites and within those sites utilising instruments with different probes and magnetic field strengths of 9.4 T (400 MHz), 11.7 T (500 MHz) and 14.1 T (600 MHz). Whilst the focus of the study is on consistent data collection across laboratories, aspects of sample stability and the require-ment for sample rotation within the NMR magnet are also discussed. Comparability of the datasets from participating laboratories was exceptionally good and the data were amenable to comparative analysis by multivariate statistics. Field strength differences can be adjusted for in the data pre-processing and multivariate analysis demonstrating that [ 1 H]-NMR fingerprinting is the ideal technique for large scale plant metabolomics data collection requiring the participation of multiple laboratories.
from many laboratories can be pooled in centralised searchable electronic resources. One of the main constraints to overcome, in working towards an international metabolomic database, is the standardisation and normalisation of data collected from different laboratories. Discussions aimed at harmonising methodologies and the setting of guidelines for the collection and reporting of metabolomics experiments have been published Fiehn et al. 2007). These discussions form a suitable framework for progression to standardised data, but the use of a number of different technologies for metabolite data collection will always require different parameters and techniques for automated data alignment and comparison. The various spectroscopic technologies utilised in metabolomics each present their own problems. The alignment of datasets from chromatography-linked mass spectroscopic methods presents the biggest problem although algorithms to align data within experiments have been developed (Lommen 2009). The technology has not yet however advanced to the extent that different laboratories can readily combine data in such a way to allow electronic searching and matching. The situation with metabolite fingerprint data, collected without chromatography is potentially less problematic and alignment tools for Nuclear Magnetic Resonance (NMR) spectra have been made available (Stoyanova et al. 2004;Dumas et al. 2006). High resolution NMR is the technique of choice for the organic chemist for metabolite structure determination, and has played a leading role in the development of metabolomics, particularly in medical and pharmaceutical sciences (Beckonert et al. 2007). [ 1 H]-NMR fingerprinting, in particular, has been applied extensively to biofluids and tissue extracts (Lindon et al. 2000) and to plant extracts (Ward et al. 2003Biais et al. 2009;Moco et al. 2008). The NMR spectra contain much information about the chemical composition of these complex sample mixtures. The spectra are normally collected with a common internal standard, or possibly with an electronic reference (Moing et al. 2004, Martínez-Bisbal et al. 2009, and the instrumentation is generally highly stable and the data quantitative, regardless of metabolite chemistry and free from the baseline drift issues that plague other analytical techniques. Despite these positive attributes in data quality, the development of electronic searching and matching of NMR spectra has been hindered by the different spectral resolutions provided by the variety of magnetic field strength instruments in use for metabolomics. A recent paper has addressed this issue, using a standard mixture and a fish tissue extract, in an inter-laboratory study (Viant et al. 2009). In this paper we also demonstrate, using plant derived samples, that inter-laboratory data collection and electronic comparison of NMR data can be achieved, even when data is collected on a variety of different field strength instruments.
Tissue processing
Harvested tissues were washed for 1 min with tap water at room temperature then wiped and used for sample preparation. For each cultivar, 36 broccoli heads were selected to make six homogeneous lots (biological replicates) of six heads each. For each biological replicate, in order to get a homogenous sampling, several florets were taken in the centre and on both sides of each broccoli head, corresponding to 130 ± 10 g FW (Fresh Weight) per head. The florets were cut into two equal parts and one half was immediately deep frozen in liquid nitrogen and stored at -80°C. The frozen pieces were milled (UMC5 grinder, STEPHAN TM , Lognes, France) for 1.5 min with liquid nitrogen in order to get a homogeneous fine powder. The resultant powdered samples were stored in 50 ml Falcon tubes at -80°C until analysis. For freeze-drying the tube tops were pierced and replaced with new screw tops after processing. Freeze-dried tissue was also stored at -80°C until NMR extracts were prepared (4 months after harvest). One biological replicate of the Monaco cultivar was randomly chosen for the selection of tissue type and sample stability test performed in one laboratory.
Solvent extraction and NMR sample preparation
Twelve individual extracts of each of the two different broccoli cultivars were prepared from a homogeneous freeze-dried batch of broccoli tissue in one laboratory utilising a standardised polar solvent extraction protocol described in Baker et al. (2006). Freeze-dried tissue (15 mg) was extracted in 80:20 D 2 O:CD 3 OD, containing 0.05% w/v TSP-d 4 (sodium salt of trimethylsilylpropionic acid) (1 ml), for 10 min at 50°C. After cooling (5 min) and centrifugation, the supernatant was transferred to a new tube and subjected to a 90°C heat shock for 2 min. After a second round of cooling (30 min) and centrifugation, 850 ll of the supernatant was transferred to the final vial for sample pooling. The 12 replicate extracts from each cultivar were combined to give a homogeneous master extract. From this pooled sample, identical aliquots (750 ll), were transferred into 10 separate glass vials which were transported at room temperature to the individual partner laboratories for initial analysis within 48 h. At the receiving laboratory, 600 ll of each sample was transferred into 5 mm NMR tubes for analysis. The NMR tubes (5 mm economy NMR tubes WG-1226) were also supplied to ensure parity across laboratories in terms of tube quality.
NMR data collection
[ 1 H]-NMR spectra were acquired at 300 K on seven different Bruker Avance Spectrometers at five separate laboratories, operating at 400, 500 and 600 MHz, equipped with either 5 or 10 mm probes as detailed in Table 1. A water suppression pulse sequence was utilised employing a presaturation pulse during the relaxation delay of 5 s. 500 and 600 MHz data were acquired using either 128 or 256 scans of 65,536 data points. 400 MHz data were acquired using either 256 or 1024 scans of 32,768 data points across a sweep width of 12 ppm. Where available, gradient shimming was utilised, otherwise conventional automated lock signal shimming was carried out. All FIDs were zero filled to double their original size, and Fourier transformed with an exponential window function (0.5 Hz). Spectra were manually phased and automatically baseline corrected using a 2nd order polynomial. 1 H chemical shifts were referenced to d 4 -TSP at d 0.00.
2.5 Schedule of data collection NMR spectra were recorded three times across a 6 day period according to the schedule outlined in Table 2, in both spinning and non-spinning mode. In each laboratory, data collection was performed on pre-defined days with the initial dataset collected within 24 h of sample receipt.
Data analysis
For multivariate analysis [ 1 H]-NMR spectra were automatically reduced, using AMIX (Analysis of MIXtures software, Bruker Biospin), to create two different ASCII files containing integrated regions or ''buckets'' of equal width (0.01 and 0.04 ppm). Spectral intensities were scaled to the d 4 -TSP region (d 0.05 to -0.05). The ASCII file was imported into Excel for the addition of sampling/treatment details. The regions for unsuppressed water (d 4.865-4.775), d 4 -MeOH (d 3.335-3.285) and d 4 -TSP (d 0.05 to -0.05) were removed prior to importing the dataset into SIMCA-P 11.0 (Umetrics, Umea, Sweden) for multivariate analysis. Principal component analysis (PCA) was carried out using mean centred data for the full NMR dataset and with unit variance scaling for models constructed from calculated metabolite concentrations.
Calculation of metabolite concentrations
Concentrations of individual metabolites present in the solvent extract were calculated by comparison to the known concentration of d 4 -TSP present in the solvent as follows: Moles metabolite per 100 ml = (area metabolite peak/area TSP peak) 9 (moles TSP per 100 ml) 9 (9/no of hydrogen atoms represented in metabolite peak). 15 mg of tissue was 123 extracted with 1 ml of solvent. Thus the concentration (ug/mg) of metabolite in the tissue was derived from (moles/ 100 ml) 9 (molecular weight of metabolite) 9 10,000/15). For absolute quantification a correction must be applied to account for differences in T1 between different NMR spectrometers.
Tissue selection and sample stability
In order to carry out the inter-laboratory comparison using NMR it was important to establish the stability of the ''test'' samples. In the proposed comparison study, whereby the technique of [ 1 H]-NMR fingerprinting was to be conducted at different European laboratories over a 7 day period it was imperative that no variance would be introduced by the deterioration of the test samples during this time period. The test samples consisted of extracts of broccoli florets and in order to ensure extract stability the chosen protocols were applied to generate NMR samples that were repeatedly analysed, at 600 MHz, on a single instrument. This initial study also included an assessment of the stability of extracts of both fresh and freeze-dried tissue. Figure 1 shows the typical NMR spectra obtained from a polar (80:20 D 2 O:CD 3 OD) solvent extraction on both fresh (Fig. 1a) and freeze-dried (Fig. 1b) broccoli tissue. As with many NMR spectra obtained from plants, the spectrum is dominated by primary metabolites such as carbohydrates, amino acids and organic acids. Although there are a few peaks present in the fresh tissue spectrum that are absent from the freeze-dried tissue spectrum, there is good comparability between the two tissue types for the majority of peaks with only small changes evident between samples (e.g. peak 13, Fig. 1b2 which corresponds to sucrose). Importantly there are no peaks in the spectra of freeze-dried tissues that may have arisen due to the freezedrying process. The analytical reproducibility is demonstrated in Fig. 2 which shows data generated from three separate tissue aliquots. Two common regions of the spectrum have been selected and include the region corresponding to the anomeric proton of a-glucose (5.25-5.15 ppm; Fig. 2a1) and secondly the region containing characteristic valine, leucine and isoleucine peaks (0.9-1.1 ppm, Fig. 2a2). For both regions there is good reproducibility for the freeze-dried samples but it is evident that there was more variability in the peak intensity of samples derived from fresh material, particularly for the glucose region which is known to be problematic in plant derived samples due not only to its proximity to the water suppression region of the spectrum but also due to (Baker et al. 2006). What was clear, however, was that there was little drift in the recorded chemical shifts and that, as expected, the stability of the instrumentation was excellent across all samples studied. PCA was carried out on the full NMR datasets obtained from fresh and freeze-dried samples (Fig. 2b). Clearly sample types separated in the direction of PC1 which described 72% of the variance, and a tighter clustering was obtained with the freeze-dried samples. From these data it was concluded that for the inter-lab comparison, samples would be generated from aliquots of freeze-dried tissue. In order to assess the stability of the solvent extracts over time, the same NMR samples were reanalysed 6 days later under identical NMR instrument conditions. Data from this comparison is shown in Fig. 3 and demonstrates that there has been no deterioration in sample quality and no qualitative or quantitative differences in peaks were observed. The datasets overlay perfectly when visualised together demonstrating that samples could be kept and re-analysed without the risk of deterioration during extended studies. 
Most importantly, with confidence in sample stability, any variation introduced during the inter-laboratory comparison could be ascribed to differences in magnetic field strength, instrument set-up and configuration or physical location differences.
Inter-laboratory comparison-instrument set-up and experimental schedule
Prior to the inter-laboratory comparison, details of the NMR parameter set were sent to each laboratory in order that contributing partners could set up their instruments to collect data in as near identical conditions as possible. Details of instrument configurations, probe specification and the final data collection parameters used by each laboratory are shown in Table 1. The inter-laboratory comparison comprised Bruker Avance NMR instruments of different magnetic field strength (400, 500 & 600 MHz), utilising a range of probe types including selective inverse, broadband observe and inverse, multinuclear and even a cryoprobe. In general the size of the probes used in this experiment were 5 mm although one 500 MHz instrument was configured with a 10 mm probe. Data was collected using a water suppression pulse sequence across an identical sweep width of 12 ppm in all cases. The number of scans utilised varied depending on the magnetic field strength and probe type. To avoid potential problems associated with temperature variation in different NMR laboratories, instruments were all set to 300 K, using a calibration sample of d 4 methanol (Findeisen et al. 2007). In addition to instrument set-up instructions, a schedule for data collection was given ( Table 2). The schedule allowed collection of three datasets over 6 days for each of the two biological samples in the experiment. On each occasion data was collected with and without rotation of the sample. Two different broccoli cultivars, Iron Man and Monaco, were included in the experiment. On each occasion participants were asked to collect data in spinning and non spinning mode to assess the effect of sample rotation on the quality of the final dataset.
NMR stability-individual laboratories
Data from each laboratory was visually inspected by superimposing the three replicate datasets obtained from each biological sample. Figure 4 shows an expansion of the region between 5.22 and 5.19 ppm containing the anomeric hydrogen signal of a-glucose as an example. The data overlays exceptionally well in this region in terms of both chemical shift and intensity, demonstrating that within one laboratory there is very little, if any, variation in the NMR spectrum when the same sample is analysed over 6 days. Data from each laboratory and each separate NMR machine behaved in a similar manner (not shown). This result was entirely expected since it has previously been reported that drift in chemical shift in samples from green plant tissue is minimal (Ward and Beale 2006) and that with careful sample preparation, control of pH and temperature during data acquisition, signals arising from the same metabolite should occur at exactly the same chemical shift from sample to sample (Krishnan et al. 2005).
Spinning versus non spinning
Spinning an NMR sample within the magnet has the advantage of averaging field inhomogeneity and thus increases the resolution of the peaks observed in the final spectrum. Under some conditions however, spinning the sample produces spurious signals or ''sidebands'' that are a consequence of modulation of the magnetic field at spinning frequency. In instruments of lower magnetic field strength (e.g. 400 MHz) it may be necessary to rotate the sample in order to achieve as good resolution as possible but at higher magnetic field (e.g. 600 MHz) rotation of the sample is seen to be undesirable due to the increased risk of these spinning sidebands. The problem in a metabolomics experiment is, that when analysing a complex mixture, such as un-fractionated plant extracts, the final spectrum will contain a large number of overlapping peaks and spinning sidebands of large peaks could possibly interfere and thus skew the final dataset. In a metabolomics experiment, where the experimentalist is trying to determine differences between the datasets, these peaks should actually cause very little problem as although undesirable, they have a fixed intensity (usually 1-2%), proportional to that of the genuine peak and thus would not vary across the dataset. In order to test this theory and the consequences of sample rotation, partners in the inter-laboratory comparison study were requested to collect datasets in both spinning and non-spinning mode. To further standardise the experiment, partners were supplied with identical NMR tubes to use of a set grade to minimise any variation due to cylindrical symmetry or glass quality of the NMR tube itself.
Sample spectral data (from the characteristic aliphatic valine, isoleucine and leucine region (0.9-1.1 ppm)) obtained in spinning and non spinning mode is shown in Fig. 5 and includes that from 600 MHz (Fig. 5a) and 400 MHz (Fig. 5b) instruments. Resolution was found to be slightly decreased in the non-spinning samples at both 400 and 600 MHz (e.g. mean line width of TSP peak at half height in 600 MHz spinning samples was 0.90 Hz compared to 1.19 Hz for non-spinning samples). This is accompanied with an apparent increase in signal intensity in the non-spinning samples over those which had been rotated (due to similar resolution differences in TSP signal and subsequent scaling thereof). This is offset by a lower Fig. 4 Assessment of instrument stability. Results obtained from three successive runs of the same NMR sample at days 1, 3 and 6. a a-Glucose anomeric proton region (d5. 22-5.19) from three superimposed 600 MHz NMR spectra collected at one site. b Spectral region as a with x-axis offset, demonstrating intensity reproducibility. Red Day1, Green Day3, Black Day6. c Spectral region as a with yaxis offset, demonstrating chemical shift reproducibility. Red Day1, Green Day3, Black Day6 resolution of the individual peaks within a particular signal and thus represents a broadening of the resonance due to errors in low order X/Y shims. At 600 MHz, the broadening of the peaks due to not rotating the sample is evident but in the case of most metabolomics studies where NMR data is employed, a bucketing or binning routine is employed to segment the dataset into equal size buckets and thus any small changes in peak width or slightly poorer resolution would be dealt with in post processing of the data.
Multivariate analysis of combined inter-laboratory dataset
Datasets from individual laboratories were processed using identical parameters by the lead laboratory (Table 1). A visual comparison of data obtained from instruments at comparable field strength showed little variation in chemical shift, resolution or intensity of the peaks. This was the case for data collected at both 400 and 600 MHz. When comparisons were made across the entire experiment, however, it was clear to see that the spectral quality, in terms of resolution, increased with field strength. This was not unexpected and is the reason why higher strength instruments, albeit that they are more expensive, are used where possible, not only to shorten data acquisition times but also to improve separating power and resolution, especially when the need is to resolve overlapping peaks in a complex mixture. Multivariate statistical approaches are often employed in metabolomics experiments to discriminate between samples of interest and to discern the metabolites responsible for the separation. Data for this inter-laboratory comparison study has been similarly modelled (Fig. 6) utilising PCA. Here the data from each partner laboratory, was reduced to bins of equal width (0.01 ppm) across the full spectral width and modelled together to try and establish if the two broccoli cultivars present in the study could be separated irrespective of the field strength of the instrument or location of the laboratory and associated instrument set-up. Figure 6a shows six clusters within the PCA scores plot and a separation in the direction of PC1, accounting for 61% of the total variance, according to magnetic field strength and thus resolution of the final spectra. Pleasingly, data from the instruments of the same magnetic field strength cluster very tightly irrespective of where they are located, whether the sample was rotated or not, and which probe was utilised for data collection. In the same PCA model, PC2, accounting for 35% of the variance, was able to separate the two broccoli cultivars. Analysis of the loadings plots (Fig. 6b) for this statistical model demonstrated this further showing that PC2 gives the true information on genotypic difference (carbohydrates, amino acids etc.) whilst that for PC1 contains a series of positive and negative peaks which resemble ''noise''. This effect is due to the fact that bucketing or binning to 0.01 ppm has divided the broader peaks obtained at 400 MHz and to a lesser extent, 500 MHz whereas the selection of 0.01 ppm binning width is entirely suitable for the more resolved 600 MHz data. Widening the bucket width to 0.04 ppm and re-building the PCA model (Fig. 6c) now shows the same six clusters as previously but the orientation has now changed such that PC1, accounting for 80% of the variance, now describes the difference between the two broccoli cultivars rather than the magnetic field strength which is now, at this higher bucket width, explained by PC2 (accounting for 18% of the variance). Thus the expansion of the width of the bin utilised in post acquisition processing has effectively reduced the resolution of a 600 MHz spectrum to that resembling data collection at 400 MHz. This is highlighted in Fig. 6d which represents the loadings plot of the second PCA model. Fig. 5 Comparison of a broccoli extract run with and without rotation during spectra acquisition. a 600 MHz spectra region from 0.9 to 1.1 ppm. b 400 MHz spectra region from 0.9 to 1.1 ppm. Spectra have been scaled to the signal height of TSP. 
As a consequence, signals with linewidth broader that of the d 4 -TSP peak show higher amplitude in the non-spinning case than in the spinning case. Since quantification evaluates signal area rather than signal height, determination of concentration values is not affected Comparison of the two loadings plots (Fig. 6b, d) clearly demonstrate the loss of resolution by choosing a higher bucket width but nonetheless show that metabolites responsible for the biological difference between samples are the same. Therefore when instruments of different field strength are used in the same study, selection of a larger bucketing width can ''reduce'' the contribution of technological variability to the first Principal Component in a PCA model.
Statistical comparison of quantitative data
The advantage of [ 1 H] NMR is that if certain NMR acquisition conditions are fulfilled (e.g. delay between scans (D1) [ five times the relaxation time (T1)) it is a quantitative technique and with the addition of an internal standard, one can calculate concentrations of individual metabolites in the solvent extract if they display a clear non-overlapping peak. To illustrate the robustness of NMR and its quantitative ability, a selection of characteristic chemical shift regions which related to key metabolites found in typical broccoli polar solvent extracts were selected across the spectral width. These represent a range of metabolites of different concentrations from high (sucrose) to low (valine). Concentrations of these individual metabolites are given as histogram plots in Fig. 7. As can clearly be seen, irrespective of magnetic field strength, the concentrations of metabolites derived from the [ 1 H] NMR data are very reproducible, agree with expected literature values (Gomes and Rosa 2000) and demonstrate which metabolites contribute to the biological difference between the two broccoli cultivars.
Finally, by taking the matrix of selected metabolite concentrations derived, via integration of characteristic metabolite peaks, from the NMR spectra of participating laboratories, and modelling these using PCA, one can see that the subsequent scores plot (Fig. 8) now contains only two clusters which simply relate to the biological difference between the two samples. All variance due to field strength and set-up parameters has been minimised. It can be seen that there is more variance between the higher field instruments in this analysis, and this can be attributed to a greater sensitivity of these instruments to changes in ambient temperature. Obviously the approach we have taken would clearly be suitable for a multi-laboratory metabolomics study where all the project data may need to be modelled together or against other Y datasets, such as transcriptomics or proteomics results.
Concluding remarks
In this study we have demonstrated that, with attention to experimental design and careful set-up, data collection for large-scale plant metabolite fingerprinting using [ 1 H]-NMR can be carried out as a dispersed activity across laboratories using different NMR instruments. Data from the two cultivars provided a means to examine different data processing techniques to reveal the biochemical differences whilst minimising the effect of the different spectral resolutions. Thus, for plant [ 1 H]-NMR fingerprinting, multi-laboratory experiments with pooling of data are now perfectly feasible and the way forward for co-ordinated multi-national screens and databasing of large collections of genotypes is now open. Although we have concentrated here on testing the ability to collect data from different instruments and batch process them together, as we envisage would happen in large multi-national screens where raw data from different instrumentation is deposited centrally, we realise that dispersal of the sample preparation to the different laboratories is another potential source of data variation. This has not been tested in this work. However, the experience of the Rothamsted laboratory, where more than 20,000 plant samples (yielding [ 60,000 NMR) samples have been processed, in the past few years, indicates that this source of operator/wet laboratory variation can be eliminated with strict adherence to the standard operating procedure that was also used in the sample preparation in this work.
Standardisation in methodology and reporting of results is a focus of concern across all the 'omics' technologies Fiehn et al. 2007;Hardy and Taylor 2007) and other inter-laboratory studies have been reported. An inter-laboratory plant metabolomics study by GC-MS across three research groups has recently been reported ), This study, which utilised identical GC-MS instrumentation and samples prepared in a single laboratory, but derivatised locally, concluded that major metabolite features could be consistently measured across the partners, but that automated peak retrieval, mode of injection, chromatographic performance and data processing all contributed to variation and that more standardisation would be necessary for large-scale dispersed data collection by this technique. In contrast we have shown that NMR profiling is a technique that is much more robust and free from such machine variation. | 2017-08-02T23:56:11.852Z | 0001-01-01T00:00:00.000 | {
"year": 2010,
"sha1": "b7637db22be4046d167c0525474f43e2eae65527",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11306-010-0200-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b7637db22be4046d167c0525474f43e2eae65527",
"s2fieldsofstudy": [
"Environmental Science",
"Chemistry"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
227744966 | pes2o/s2orc | v3-fos-license | Alfvenic Perturbations in a Sunspot Chromosphere Linked to Fractionated Plasma in the Corona
In this study, we investigate the spatial distribution of highly varying plasma composition around one of the largest sunspots of solar cycle 24. Observations of the photosphere, chromosphere, and corona are brought together with magnetic field modelling of the sunspot in order to probe the conditions which regulate the degree of plasma fractionation within loop populations of differing connectivities. We find that in the coronal magnetic field above the sunspot umbra, the plasma has photospheric composition. Coronal loops rooted in the penumbra contain fractionated plasma, with the highest levels observed in the loops that connect within the active region. Tracing field lines from regions of fractionated plasma in the corona to locations of Alfvenic fluctuations detected in the chromosphere shows that they are magnetically linked. These results indicate a connection between sunspot chromospheric activity and observable changes in coronal plasma composition.
INTRODUCTION
Early observations of elemental abundance variations on the Sun showed systematic differences between the composition of the corona and that of the photosphere (e.g. Widing & Feldman 1989Sheeley 1995Sheeley , 1996. In the closed-loop solar corona, in the slow solar wind, and in solar energetic particles (SEPs), elements with low first ionization potential (FIP <10 eV) are more abundant by a factor of 2-4 compared to the photosphere (e.g. Meyer 1985a,b;Gloeckler & Geiss 1989;Feldman & Widing 2003;Brooks et al. 2015), whereas, high FIP elements (FIP > 10 eV) retain their photospheric elemental distribution. Plasma composition in the open magnetic field of coronal holes remains relatively unfractionated when it is observed in the corona (e.g. Feldman & Widing 1993;Feldman et al. 1998;Doschek et al. 1998;Brooks & Warren 2011). Abundance variations are typically characterized by FIP bias which is the ratio of an element's abundance in the solar atmosphere to its abundance in the photosphere. FIP bias of ∼1 indicates unfractionated photospheric plasma composition and >1.5 is fractionated plasma of coronal composition. Feldman et al. (1990) provided one of the few early studies of plasma composition around a sunspot based on spatially unresolved, slit observations obtained from a rocket flight of the High Resolution Telescope and Spectrograph (HRTS). The authors determined that in the atmosphere above a sunspot, the elemental abundances had a photospheric distribution compared to the plasma of the nearby plage region, which was highly enriched in low FIP elements. Similarly, Sheeley (1995) et al.
noted in a Skylab slitless spectrogram that plasma above a sunspot umbra was enriched in high-FIP Ne VI whereas in the adjacent penumbra, the plasma was enriched in low-FIP Mg VI. Ne VI rich plasma occurred only in areas of flux emergence in two nearby active regions.
Subsequent studies of high FIP bias plasma in active regions refer to features such as Mg IX sprays (Sheeley 1996), spikes at the edges of active regions , fan loops (Warren et al. 2016), and upflow/outflow regions (e.g. Brooks & Warren 2011). Typically, the magnetic field associated with these features is the decaying or dispersed unipolar areas of strong magnetic field at the periphery of active regions. Strong plasma fractionation is observed at the footpoints of loops rooted in the unipolar regions where FIP bias levels are 3-4 (Brooks & Warren 2011;Baker et al. 2013;Brooks et al. 2015). High FIP bias of ∼3 is also observed in the cores of quiescent active regions (Del Zanna & Mason 2014).
According to the plasma fractionation model of Laming (2015), a compelling explanation for the separation of ions from neutrals is the ponderomotive force arising from the reflection or refraction of Alfvén waves in the chromosphere. The Alfvén waves act only on the ions while leaving neutral elements unaffected. Though the fractionation is influenced by the origin and flux of the Alfvén waves as well as the wave-wave interactions in the chromosphere, in general, the time averaged ponderomotive force is directed upward, giving rise to the enrichment of easy-to-ionize low FIP elements in the corona (Laming 2015(Laming , 2017. The direction and ultimately the resonance of the Alfvén waves are all-important to the degree of fractionation observed in the corona (Laming et al. 2019). These features are set by where the Alfvén waves are generated. In open field regions, typical waves with 3 and 5 min periods (e.g. Khomenko & Collados 2015, and references therein) generated from below the photosphere propagate upward at the base of the field and either continue along the open field or are reflected back down; there is little resonance, therefore, little fractionation. Upward propagating waves with such long periods do not resonate with the closed loop corona so like with open field regions, the waves are reflected back down at the loop footpoints resulting in little or no fractionation. Conversely, Alfvén waves generated in the corona due to magnetic reconnection are directed downward to loop footpoints at the top of the chromosphere and then are reflected back upward at the steep density gradient located there. Laming (2017) proposed that resonant waves are excited within the coronal loop itself as a result of nanoflare reconnection in the corona thereby creating enhanced fractionation at magnetically connected loop footpoints (Baker et al. 2013;Dahlburg et al. 2016;Laming 2017;Laming et al. 2019). It is not observationally clear if these oscillations generated by reconnection in the corona are linked to enhanced fractionation, however.
In this regard, the quest for magnetic fluctuations associated with magneto-hydrodynamic waves (MHD) in solar magnetic structures assumes a particular importance. Observationally, MHD waves in solar magnetic structures are generally detected as intensity and velocity oscillations (Bogdan 2000;Centeno et al. 2006;Chorley et al. 2010;Morton et al. 2011;Stangalini et al. 2012;Grant et al. 2015;Jafarzadeh et al. 2017;Jess et al. 2017), although simultaneous magnetic fluctuations are also expected from theory for different types of MHD modes (Edwin & Roberts 1983;Roberts 1983). The required magnetic oscillations for ponderomotive fractionation to take place can therefore be associated with a number of different wave modes (Roberts 1983;Khomenko et al. 2003;Goossens et al. 2009;Morton et al. 2015) e.g. locally excited waves (Alfvén, magnetoacoustic fast mode in high-β regimes), or global eigenmodes of the magnetic structure (e.g. sausage mode, torsional Alfvén mode,...). We will refer to magnetic fluctuations associated with any wave mode as Alfvénic waves, to distinguish from a purely Alfvén mode. The detection of magnetic oscillations associated with MHD modes is a difficult task as opacity effects or instrumental crosstalk with other physical quantities can easily mimic the effect of these oscillations (e.g. Khomenko & Collados 2015;Joshi & de la Cruz Rodríguez 2018, and references therein). One way to disentangle intrinsic magnetic oscillations in the solar atmosphere is through the investigation of the phase lag between the polarization signals associated with magnetic field disturbances and other physical quantities such as intensity and Doppler velocity (Stangalini et al. 2018(Stangalini et al. , 2020. Oscillating physical quantities associated with MHD waves may have different phase relations depending on the MHD mode and the propagation state of the wave (Fujimura & Tsuneta 2009;Hinode Review Team et al. 2019), thus the analysis of the phase relations between them can be exploited for their identification (Stangalini et al. 2018), and the identification of the specific mode producing them.
In this paper, we show a detailed, spatially resolved coronal composition map of a strong and coherent leading sunspot in AR 12546. We find that the elemental abundance variation in the corona above the sunspot is highly structured with extremes in the level of fraction-ation among the distinct loop populations. The distribution of the highly fractionated plasma appears correlated with the spatial locations at which intrinsic magnetic oscillations are identified in nearly simultaneous high spatial resolution spectropolarimetric observations of the solar chromosphere (Stangalini et al. 2020). Magnetic field modeling is used to investigate the connectivities of the loop populations within the sunspot to seek an understanding of the distribution of plasma composition observed there. We interpret our findings in the wider context of coronal heating and the ponderomotive force model of elemental fractionation (Laming 2015). AR 12546, one of the largest ARs of the last 20 years, was a relatively simple, bipolar region composed of a strong, coherent leading positive polarity sunspot and a dispersed following field of negative polarity. Asymmetric flux concentrations are typical of bipolar regions, however, both the extent of the dispersion of the following polarity and the coherency of the leading spot are extreme in this case. At the time of its CM crossing on May 20, half of the total unsigned magnetic flux of the active region was 4.1×10 22 Mx and the magnetic field strength was exceptionally high, exceeding 4,000 G in the center of the sunspot umbra in the photosphere (Stangalini et al. 2018). There was no significant evolution of the large-scale field during the two days prior to the EIS and IBIS observations; the sunspot was globally stable. Small-scale evolution was limited to moving magnetic features streaming radially from the positivepolarity sunspot and ongoing fragmentation and disper-sal of the negative field of the following polarity in the decaying active region (see also Murabito et al. 2019).
During the period of May 18-20, there were no flares > B-class or coronal mass ejections (CMEs) attributable to AR 12546. The stability of the loops rooted in the sunspot umbra and penumbra and the lack of activity reflect the absence of consequential evolution in the magnetic field (see the included animation of Figure 1).
IBIS Observations and Methods
IBIS full Stokes spectropolarimetric scans were used to identify possible signatures of magnetic field oscillations in circular polarization (CP ) measurements in the umbra. A full account of the observations and data reduction techniques is provided in Stangalini et al. (2018); Murabito et al. (2019); Houston et al. (2020), andStangalini et al. (2020). Here we precis the aspects which are relevant to this analysis. The data set consists of a time series of Ca II 8542Å scans beginning at 13:39 UT on May 20 and continuing for 184 minutes at a cadence of 48 sec. The Ca II is a chromospheric magnetically sensitive line and therefore suitable for detecting magnetic field oscillations at chromospheric heights. Figure 2 shows an IBIS intensity image in the photospheric Fe I 6173Å line for context (a) and an intensity image in the Ca II 8542Å line (b). The IBIS FOV of 28 ×70 encompasses the umbra in the X-direction and a significant portion of the penumbra in the Y-direction. The CP measurements were obtained from the amplitude of the Stokes-V profile. CP is defined as follows: where V max is the maximum amplitude of the Stokes V spectral profile and I cont is the local continuum intensity (Stangalini et al. 2018).
To identify possible intrinsic magnetic field oscillations in the same IBIS data set, Stangalini et al. (2020) performed a specific phase lag analysis, between the CP and the core intensity of the Ca II. The Figure 2 (c) shows the Stokes V max /I cont CP map saturated at 0.3. The authors selected the CP instead of Doppler velocity of the Ca II line for the phase lag analysis due to the presence of shocks that can turn the Ca II line from absorption to emission, rendering the line Doppler velocity undefined. The CP is directly related to line-of-sight magnetic field but its variation can be caused by either intrinsic magnetic oscillations or opacity effects. In the case of opacity effects, the intensity is expected to be in/out of phase due to the fluctuation of the line formation height. This does not mean that real magnetic waves with the same phase relations (i.e. ±π) do not exist, but the phase lag analysis provides a high level of confidence we are not observing a mix of real and unreal oscillations. A strong coherence is needed to ensure that unreliable phase measurements are excluded from the analysis. In this work we make use of the results of Stangalini et al. (2020) and investigate the spatial distribution of the CP oscillations with the aim of assessing their role in the distribution of the observed FIP bias in and around the sunspot.
Hinode/EIS Observations and Methods
Hinode EIS spectral data were used for the plasma composition analysis of the sunspot within AR 12546. A FOV of 120 ×160 was created using the 2 slit in 2 steps, taking 60 sec exposures at each slit position. The single scan began at 07:24 UT on May 20 and finished two hours later. Study #404 (Atlas 60) is a full spectral atlas of both CCDs therefore it contains the diagnostic spectral lines required for constructing a spatially resolved composition map.
Data reduction was carried out using the eis prep routine that is available in Solar SoftWare (Freeland & Handy 1998). The CCD signal in each pixel was converted into calibrated intensity units of erg cm −2 s −1 sr −1Å−1 and pixels affected by cosmic ray hits, dust, and electric charge were removed/replaced. All data were corrected for instrumental effects of orbital spectrum drift (Kamio et al. 2010), CCD spatial offsets, and the grating tilt.
To construct the composition map, spectral lines from consecutive ionization stages of Fe VIII-Fe XVII and the low FIP Si X (FIP = 8.15 eV) and high FIP S X (FIP = 10.36 eV) were fit with single Gaussian functions except where the lines are blended in which case the line was fit with multiple Guassian functions. The Si X/S X line ratio was used to determine FIP bias and the density was measured with the Fe XIII 202.04 A/203.83Å line ratio. The specific emission lines are given in Table 1. The CHIANTI Atomic Database, Version 8.0 (Dere et al. 1997;Del Zanna et al. 2015) was used to carry out the contribution function calculations, applying the photospheric abundances of Grevesse et al. (2007) for all of the spectral lines while assuming the measured Fe XIII densities. The Markov-Chain Monte Carlo (MCMC) algorithm contained within the PINTo-fALE software package (Kashyap & Drake 2000) was used to compute the emission measure (EM) distribution for the Fe lines. The EM distribution was then convolved with the contribution functions and fit to the observed intensities of the low-FIP Fe spectral lines. Si is also a low-FIP element therefore the EM derived from the Fe lines was scaled to reproduce the intensity of the Si X line. Finally, the FIP bias was determined to be the ratio of the predicted to observed intensity for the high FIP S X line. The estimated uncertainty of the FIP bias ratio is 0.30 assuming an intensity error of 20%. A full account of the method is available in Brooks et al. (2015) and Baker et al. (2018). Fe XII 195.12Å relative Doppler velocities were measured versus a reference wavelength defined by averaging the centroid wavelengths of all pixels within the data array. This method was adopted as EIS does not have an absolute wavelength calibration. Excess broadening in the deblended Fe XII 195.12Å emission line spectra was calculated from where δλ is the observed line width, λ 0 is the line centroid, k B is Boltzmann's constant, T i is the ion temperature, m is the mass, ξ is the nonthermal velocity, and σ I is the instrumental width (e.g. Brooks & Warren 2016). Figure 3 shows the Hinode/EIS Fe XII intensity, Doppler and nonthermal velocity maps, Si X/S X composition map, and Fe XIII density map at 07:24 UT on 2016 May 20. SDO/HMI continuum contours have been overplotted on the EIS maps to mark the umbral and penumbral boundaries of the sunspot. Above the umbra, the plasma is blue-shifted with upflow speeds of 10-20 km s −1 and nonthermal velocities range from ∼15 km s −1 above the center to ∼30 km s −1 toward the umbra/penumbra boundary. Plasma density is ∼5×10 8 cm −3 . Plasma composition above the umbra is photospheric with a FIP bias of 1 above its core; to-et al. ward the boundary, the composition becomes more fractionated with FIP bias ∼1.5-2, especially on the eastern edge.
Plasma parameters are more extreme above the penumbra where the loops surrounding the sunspot are rooted. Upflows transition to downflows in the loops at the outer boundary. Nonthermal velocities are 30-50 km s −1 in loops located to the east and south of the penumbra; they are ∼30-40 km s −1 in the west (in between the dashed lines in the EIS maps). Plasma density increases by an order of magnitude above the penumbra compared with the umbra. FIP bias exceeds 3 + above the eastern penumbra and reaches 4 + at the boundary in the east and to the south. The strongest fractionation is located in the southern region in the vicinity of the highest nonthermal velocities of 45-50 km s −1 .
CORONAL LOOP CONNECTIVITIES
A Potential Field Source Surface (PFSS) extrapolation was computed to model the coronal loop system surrounding the sunspot. The rationale behind the use of the PFSS extrapolation is that this model captures the global field of the sunspot for comparison with the SDO/AIA coronal images of Figure 1. The PFSSPY package (Yeates 2018; Stansby 2019) was employed to extrapolate the coronal field from the HMI synoptic radial field map of CR2177. Figure 4 shows the extrapolation with selected field lines. A green contour represents 500 G in the positive polarity sunspot. In general, there is good qualitative correspondence of the loops in the SDO/AIA images at the times of the EIS and IBIS observations with field lines in the extrapolation ( Figure 4).
The coronal loop configuration of AR 12546 is characteristic of a bipolar region with distinct loop popu- Figure 4. Selected field lines of the PFSS extrapolation of AR 12546 based on an SDO/HMI synoptic radial field magnetogram at 13:00 UT on 2016 May 20. Field lines are colorcoded as closed within the same active region (red), closed to a neighboring active region (orange), or open (yellow); the radial magnetic field saturated at 100 G is represented in greyscale, and the green contour on the sunspot represents its 500 G isocontour.
lations. Yellow field lines on the western side of the sunspot are long, extended loops that reach the source surface of the PFSS model (=2.5 R sun ) therefore these field lines are considered to be open. To the north, the orange, long loops are connected with the negative polarity of the active region located to the north-east of AR 12546. The plasma density in these regions is ∼5-6×10 8 cm −3 (see the mean densities within the boxes of the density map in Figure 3) and plasma composition is partially fractionated with FIP bias of 1.5-2. In contrast, the red loops on the east and south of the sunspot are compact loops that connect mainly with the opposite polarity within the active region. The density of the short, closed loops is an order of magnitude higher and the plasma is highly fractionated with FIP bias of 3-4.
ALFVÉNIC PERTURBATIONS IN THE
SUNSPOT CHROMOSPHERE Stangalini et al. (2020) found the presence of Alfvénic perturbations in the sunspot chromosphere in the 3minute band of the IBIS Ca II time series (see Section 2.2). The frequency band corresponding to this period, which is also the dominant period in the solar chromosphere (e.g. Khomenko & Collados 2015, and references therein), is smaller than the ion-cyclotron frequency, thus in the regime appropriate for the model of Laming (2015). A phase lag analysis between CP and intensity ruled out the possibility of opacity and other spurious effects by identifying a specific phase lag of the order of −35 degrees between the two quantities, which is not consistent with the phase values expected from cross-talk or radiative opacity effects. Due to the vertical gradient of the magnetic field, any plasma density perturbation can induce a height variation of the response function of the spectral line, thus resulting in an observed spurious magnetic oscillation which is merely a consequence of an opacity change and has nothing to do with a real magnetic wave. In this regard, the study of the phase relation between different diagnostics is helpful in the identification of real magnetic oscillations. Indeed, by collecting phase measurements corresponding to high coherence it was possible to discriminate between different effects and identify real magnetic oscillations in the sunspot chromosphere. Coherence is independent of the wave amplitude, thus this technique is able to detect correlations between two signals even if their amplitudes are small. It is worth noting that Alfvénic shocks were independently detected by Houston et al. (2020) at the same spatial locations, thereby confirming the interpretation of these disturbances in terms of real magnetic fluctuations. The locations of the Alfvénic perturbations are indicated by the blue dots overplotted on the Hinode/EIS FIP bias map of Figure 5(a) and on the CP map in Figure 2(d). The dots are aligned in a distinct C-shaped structure running from the north to the south along the eastern edge of the sunspot umbra.
Within the same EIS FOV in Figure 5(a), coronal loops containing highly fractionated plasma are present with particularly high values both to the east and to the south-west of the sunspot. The projected spatial proximity of the C-shaped structure and high FIP bias values poses the question whether the loops of highly fractionated plasma observed in the corona are magnetically connected to the specific locations of Alfvénic perturbations in the chromosphere identified by Stangalini et al. (2020). The context to this question is provided by theoretical models based on the hypothesis that the fractionation process producing the FIP-effect is powered by the conversion of magnetic waves at chromospheric heights (e.g. Schwadron et al. 1999;Laming 2015).
In order to answer this question, a magnetic model of the sunspot area is needed that is representative of the sunspot field so that we can determine if the field lines threading regions of high FIP bias values are rooted in the blue dots of Figure 5. The main difficulty is that the FIP bias map is the result of a pixel-dependent, line-integrated emission of coronal lines to which it is challenging to attribute a height, and even more so a single height for the entire map. In addition, the paths followed by the modelled field lines depend on the properties of the chosen magnetic field model. Finally, there are more than eight hours between the beginning of the EIS observation, which was used to compute the FIP bias map, and the end of the IBIS observation, which was used to identify the Alfvénic perturbations. This complicates the alignment of the different observations, especially considering that both EIS and IBIS lack absolute pointing information. Therefore, the choice of the time of the magnetic field observations that are used to build the magnetic field model is also a factor of uncertainty. Given these difficulties, we adopted a heuristic approach by testing whether a combination of magnetic model and height of the FIP bias map exists where field lines starting from areas of high FIP bias values are rooted in the proximity of the blue dots.
A magnetic model for the sunspot can be obtained using a force-free extrapolation of photospheric measurements (see, e.g. Wiegelmann & Sakurai 2012). The photospheric magnetogram used as input for the extrapolation is the SDO/HMI SHARP magnetogram taken at 13:00 UT, in between the end of EIS rastering and the start of the IBIS observation. SHARP data provide vector magnetograms of regions of the Sun in a Cylindrical Equal Area (CEA) projection, where the spatial dimensions are given in CEA degrees with 0.03 CEAdeg 0.5 365 km at the center of the disk. We use this approximate conversion factor in the scale estimations given below. The full SHARP field presents a significant flux imbalance (about 15%), therefore we use the linear force-free extrapolation method of Seehafer (1978) which does not require strict flux balance to be enforced. The entire FOV of the SHARP magnetogram covering an area of 22.89×12.99 CEA-deg was used to compute the Fourier coefficient of Seehafer's solution. However, in order to reduce the computing time required by the parametric study described below, the extrapolated magnetic field was computed in a smaller et al. The ratio of the vertical current density to the vertical magnetic field, α N L = J z /B z , represents the local torsion of field lines and is constant along individual field lines in force-free extrapolations. Within the linear approximation, this function is a free, constant parameter that is limited in magnitude by the inverse of the linear dimension of the extrapolated magnetogram. For Seehafer's method applied to the entire FOV of the SHARP magnetogram, α max = 0.27 CEA-deg −1 (corresponding to 0.022 Mm −1 ). Different magnetic field models are then obtained for different values of the constant α (|α| < α max ) and the same magnetogram at the bottom boundary.
The alignment problem between the magnetic model based on the reprojected SDO/SHARP magnetogram on the one hand, and the plane-of-the-sky SDO/AIA, IBIS, and EIS observations on the other was treated as follows. First, the locations of the Alfvénic perturbations were derotated from 13:39 UT (starting time of the IBIS measurements) to the starting time of the EIS raster. Second, since both EIS and IBIS lack absolute pointing information, the EIS raster and derotated locations of Alfvénic perturbations were aligned to multiple images of SDO at the time of the EIS raster. In particu-lar, this operation produces a co-alignment between the (derotated) location of the Alfvénic perturbations and the HMI line of sight magnetogram at 07:24 UT. Finally, a 500 G isocountour of the latter was used to match a similar contour in the SHARP vertical magnetic field (at 13:00 UT) to co-align the location of the Alfvénic perturbations and FIP bias map with the extrapolated field of the coronal models.
In summary, the heuristic method that we adopted consists of the following steps: 1. producing magnetic field models for different (constant) α values between −α max and +α max ; 2. for each field model, placing the co-aligned FIP bias map at different heights; 3. for each height of the FIP bias map, tracing the field lines in the given model starting from FIP bias values above 2.7 in the entire EIS FOV. The value 2.7 is chosen as the lowest possible value consistent with capturing most of the yellow area in the Figure 5(a), but reducing the number of pixels at the edge of the FIP bias map. This is done to avoid that the rectangular shape of the EIS FOV produces a misleading boundary of the field line distribution in the following steps. The filtering of field line seeds results in selecting relatively high-FIP bias field lines. As an example, Figure 5(b) shows the 3D rendering of the spatial arrangement of Alfvénic oscillations (blue dots), a few selected field lines (orange), and the EIS map for the parameters of Figure 6(g); 4. flagging the footpoints of the field lines with high FIP bias at chromospheric heights (1 ); 5. comparing the location of the footpoints of field lines with high FIP bias with the location of the co-aligned Alfvénic oscillations; 6. finally, verifying if combinations of α and FIP bias map height exist that produce a distribution of high FIP bias field line footpoints similar to the distribution of blue dots in Figure 5(a).
The above procedure yielded the maps in Figure 6 where, in addition to the blue dots representing the location of Alfvénic oscillations, the footpoints of the high FIP bias field lines are shown as orange dots. These maps answer our initial question: there are indeed combinations of model parameters for which the orange and blue dots are closely located. In other words, our parametric, heuristic study supports the existence of a magnetic link between high FIP bias values and the locations of Alfvénic perturbations in the chromosphere identified by Stangalini et al. (2020). One can further try to deduce which is the value of α that results in the best match between the orange and blue dots in Figure 6(b-h). This is inevitably very subjective, as it strongly depends on which subset of dots is given priority in the match. If, for instance, only the number of orange dots overlapping the blue structure is considered, one would likely choose the potential case (α = 0 in panel e) or even a slightly positive value of α as the best matching case. This criterion discards the more isolated dots in the center and upper parts of the sunspot as not significant. On the other hand, if the shape of the distribution is chosen as the primary matching criteria, then one may recognize how, similarly to the blue dots, the orange dots in Figure 6(b-h) are arranged in a pattern roughly shaped as a C with straight arms (except for few orange dots in the center of the sunspot). The red arrows on panels (e) and (g) of Figure 6 indicate how the arms of the C-shape of the orange dots in two cases are identified. For instance, as the annotations in panel (e) indicate, in the potential case both the vertical and upper orange arms are at an angle with the corresponding blue arms, whereas we deem the overlap of the whole structures to be better in panel g. By matching the blue and orange C-shapes as a whole, we would then identify the α = −0.2 CEA-deg −1 as best matching case.
Hence, adopting different matching criteria results in different values for the best matching α, pointing at the et al.
limitations that a constant-α model of the magnetic field has in this particular application. Also, we stress again that to treat the FIP bias map as a flat horizontal plane at a given height is a very crude approximation. On the other hand, the range of heights between 40 and 55 Mm of the FIP bias maps identified in Figure 6(b-h) yields coronal electron temperatures in the range of ≈ [0.9, 1.2] MK, assuming hydrostatic equilibrium (Aschwanden & Schrijver 2002). This is consistent with the temperature range for which the Si X-S X FIP bias line-pair is an effective diagnostic of coronal plasma composition (Feldman et al. 2009;Brooks & Warren 2011) The space-dependent distribution of α in the sunspot area can be computed as J z /B z using the photospheric observation of the SHARP magnetogram. This is plotted in Figure 6a, with the blue contour identifying the distribution of the blue dots, showing that, while still predominantly positive, α values of both signs are present. It is interesting to note that a C-shaped concentration of negative α values is present right to the east of the C-shaped contour of the blue dots. The blue contour and the negative α values are not exactly overlapping, but their appearance is very similar in shape with just a slight shift (of about 0.3 CEA-deg) that is comparable to the alignment accuracy among the different instruments/maps. The possible spatial correlation between Alfvénic oscillations and negative values of α suggested by Figure 6(a) is interesting because it may give some clues about the background field in which the oscillations took place. However, given the limitations of alignment and modeling discussed so far, we do not pursue this point any further, and leave the investigation of this aspect to future studies on the nature of the Alfvénic oscillations (e.g. Section 6).
In summary, the heuristic method defined two parameters (the magnetic twist and the height of the FIP bias map) that resulted in a positive spatial correlation between high values of FIP bias and the Alfvénic 3-minutes magnetic oscillations detected in the IBIS Ca II time series by Stangalini et al. (2020). This is indicative of a magnetic connection between chromospheric magnetic oscillation regions and high values of FIP bias observed in the corona above the sunspot. Given the intrinsic limitations of observations and magnetic modeling discussed above, these findings are nevertheless consistent with the theory of FIP fractionation outlined in Laming (2015).
DISCUSSION
In this work, we have investigated the spatial distribution of coronal plasma composition in the vicinity of the large sunspot of AR 12546. Hinode/EIS observations re-vealed that compositional variation ranged from little or no fractionation in the corona above the core of the umbra to partially or highly fractionated plasma in groups of coronal loops rooted in the surrounding penumbra.
Using a PFSS extrapolation of the large scale magnetic field of the positive-polarity sunspot, we identified distinct loop populations based on their connectivities: open field or long loops connected externally to the negative polarity of an active region ∼ 570 Mm to the northeast and short, dense loops internally connected to the negative polarity of the active region.
Plasma composition is relatively uniform within each loop population but varies across populations. FIP bias was 1.5-2 in the open field or long loops connecting externally to a another active region whereas it was 3-4 in the loops connecting to the opposite polarity within AR 12546 (Figure 3). Loops within a particular group are likely to share similar global properties and evolve together. In AR 12546, the very simple bipolar topology coupled with little evolution of the large-scale magnetic field prior to the EIS and IBIS observations, suggest that there was limited mixing of plasma compositions via magnetic reconnection of different loop populations.
Among the global properties shared by coronal loops within the same population, loop length and resonance frequencies are at least theoretically related to the observed plasma fractionation distribution in and around the sunspot. The Laming ponderomotive force fractionation model predicts that resonant Alfvénic waves, generated by nanoflaring reconnection in the corona, increase plasma fractionation in the vicinity of footpoints of resonant closed loops whereas there is little resonance and fractionation along open field. In support of the Laming model, numerical simulations of Dahlburg et al. (2016) showed that ponderomotive acceleration occurs at loop footpoints as a consequence of MHD waves generated by magnetic reconnection in the corona and that the FIP effect is a natural outcome of coronal heating. Significantly, the ponderomotive acceleration increases with increasing temperature and with decreasing length in closed coronal loops in the simulations. In the corona above the umbra of the large sunspot, the field had photospheric plasma composition. At photospheric and chromospheric heights, sunspot umbrae are regions of temperature minima so that elements are mainly neutral or singly ionized (e.g. Loukitcheva et al. 2014;Lodders 2019). As a consequence, local temperatures are not high enough to create a sufficient reservoir of ionized elements in the chromosphere, inhibiting plasma fractionation. Plasma transported from the chromosphere to the coronal field above the umbra is therefore likely to be unfractionated photospheric plasma.
On the western side of the sunspot, which is magnetically connected to another active region or has open field lines, FIP bias values were somewhat higher, between 1.5 and 2. The highest FIP bias of 3-4 was found on the eastern/southern sides of the spot in high temperature active region core loops. These loops are rooted in the penumbra and are subject to convection-driven footpoint motions leading to higher frequency heating than in the loops rooted in the umbra (see Del Zanna & Mason 2018, and references therein).
Our results are consistent with the predictions of the Laming model and the output from the simulations of Dahlburg et al. (2016) in that high FIP bias plasma occurs in high temperature, short loops where nanoflaring in the model generates high Alfvénic wave flux which is amplified by resonance. In the other sets of loops with different connectivities e.g., open field lines, lower wave flux is expected as there is no repeated reflection of any potential Alfvénic waves created there, and consequently, these loops are likely to contain plasma with lower FIP bias. In fact, these findings do not exclude other theoretical models based on either wave interactions with chromospheric ions e.g. the ion cyclotron wave heating model of Schwadron et al. (1999) or on processes linked to coronal heating e.g. heat conduction of the thermoelectric driving model of Antiochos (1994).
In the wider context, we still fundamentally lack an understanding of what is happening in the chromosphere when we see activity in the corona, and a key goal generally is to elucidate the connection between activity in the low atmosphere and observable changes higher up. We have found that the internally connecting core loops with the highest FIP bias are rooted in areas where the Alfvénic perturbations were found in the chromosphere. This is the first observational evidence of detectable Alfvénic perturbations in the chromosphere being linked to coronal loops containing highly fractionated plasma. Whether this is the result of a response to coronal heating, as in the ponderomotive force model, or further evidence of heating at coronal heights being driven from below, remains an open question.
CONCLUDING REMARKS
This work represents a first attempt to investigate the role of magnetic fluctuations in plasma fractionation, made possible thanks to nearly simultaneous observations at chromospheric and coronal heights by IBIS and EIS, respectively. Our results demonstrate a possible link between magnetic perturbations observed at chromospheric heights as small fluctuations of the spectropolarimetric quantities, and the locations of high FIP bias observed in the corona. They therefore observationally support a role for MHD waves in the generation of the FIP effect and wave-based theoretical models.
Some questions still remain open and identify possible future research directions. As already mentioned, magnetic fluctuations are expected for a number of different MHD wave modes. These can be magneto-acoustic modes which are locally excited within the umbra by residual convection and/or p-mode absorption, or globally excited eigenmodes of the sunspot. In this study, the exact identification of the wave mode responsible for the observed magnetic fluctuations was not possible with the available data. Indeed, different modes can coexist in the same structure hampering the identification process. However, as noted in Stangalini et al. (2020), the locations where the Alfvénic waves are observed correspond to a narrow range of magnetic field inclinations, suggesting a possible role of the magnetic field geometry. In this regard, it is worth recalling that MHD waves in magnetic structure can undergo a mode conversion at the Alfvén-acoustic equipartition layer, with part of the energy contained in the acoustic-like components (fast MHD mode in the plasma-β > 1 regime) being converted to a combination of fast magneto-acoustic (in the plasma-β < 1 regime) and magnetic-like waves. This physical mechanism is dependent on the attack angle between the wavevector and the field lines (see for instance Gary 2001;Newington & Cally 2010;Cally 2011;Hansen et al. 2016;Cally 2011), thus Stangalini et al. (2020) speculated on the possible role of the mode conversion and magnetic field geometry in the appearance of magnetic waves at chromospheric heights. In support of this scenario, Grant et al. (2018) uncovered evidence for Alfvénic waves, with the observed signatures being consistent with induced ponderomotive forces at the umbra/penumbra boundary of a sunspot chromosphere, suggesting that such wave-coupling effects may be linked to the increasing attack angles found in these locations.
Nevertheless, the Alfvénic waves identified by Stangalini et al. (2020) might be associated with different wave modes not all necessarily producing the fractionation, however, they could be considered a proxy to identify the spatial locations where, given a magnetic field geometry, the conversion of acoustic-like to magneticlike waves is particularly efficient.
In addition to the unambiguous identification of the wave process responsible for the magnetic waves, another important aspect is the propagation direction of the MHD waves, which plays a significant role in the FIP and I-FIP model of Laming (2015). Both the equipartition layer and the transition region can represent a reflective mirror for different types of waves (e.g. Hansen et al. et al. 2016). For this reason, MHD waves can undergo several reflections/conversions, thus leaving open both the possibilities of waves coming from below or above (i.e. due to nanoflares).
In our view, these remain important aspects to be further addressed in the future, and they may provide useful information to constrain and validate existing theoretical models. The heuristic method employed in this study combines the linear extrapolation of photospheric magnetic field and the FIP bias map to investigate the possible connectivity of the Alfvénic chromospheric perturbations identified by Stangalini et al. (2020). However, further studies that involve the full inversion of the chromospheric spectropolarimetric signals at the location of the perturbations are required to properly identify the properties of the background and transient magnetic field.
ACKNOWLEDGMENTS
Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, and NASA and STFC (UK) as international partners. Scientific operation of Hinode is performed by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (UK), NASA, ESA, and NSC (Norway). SDO data were obtained courtesy of NASA/SDO and the AIA and | 2020-12-09T02:41:23.459Z | 2020-11-15T00:00:00.000 | {
"year": 2021,
"sha1": "ca42b34abaa9c955ca2e8b5b0798a62c9e1e4bc1",
"oa_license": null,
"oa_url": "https://discovery.ucl.ac.uk/10120146/1/Baker_2021_ApJ_907_16.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ca42b34abaa9c955ca2e8b5b0798a62c9e1e4bc1",
"s2fieldsofstudy": [
"Physics",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
125296546 | pes2o/s2orc | v3-fos-license | Search for MSSM Higgs with the CMS detector at LHC
In the Minimal Supersymmetric extension of the Standard Model (MSSM), the Higgs sector contains two Higgs boson doublets, including, after electroweak symmetry breaking, the CP-odd neutral scalar A, the two charged scalars H±, and the two CP-even neutral scalars h and H. The results in the search for neutral and charged Higgs bosons with the CMS detector at LHC are presented, based on the data samples collected at √ s = 7 and 8 TeV. The neutral Higgs boson is searched in the bb, ττ and μμ final states, whereas the charged Higgs state is searched in top quark decays with at least one τ in the final state. Presented at LHC on the March LHC on the March Search for the MSSM Higgs boson with the CMS detector at LHC. Federica Primavera on behalf of the CMS Collaboration∗† Univerisity of Bologna and INFN, Bologna, Italy E-mail: federica.primavera@cern.ch In the Minimal Supersymmetric extension of the Standard Model (MSSM), the Higgs sector contains two Higgs boson doublets, including, after electroweak symmetry breaking, the CP-odd neutral scalar A0, the two charged scalars H±, and the two CP-even neutral scalars h and H0. The results in the search for neutral and charged Higgs bosons with the CMS detector at LHC are presented, based on the data samples collected at √ s = 7 and 8 TeV. The neutral Higgs boson is searched in the bb, ττ and μμ final states, whereas the charged Higgs state is searched in top quark decays with at least one τ in the final state. LHC on the March IHEP-LHC, 20-22 November 2012 Institute for High Energy Physics, Protvino,Moscow region, Russia ∗Speaker. †Presented at the LHC on the March 2012 Conference. c © Copyright owned by the author(s) under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Licence. http://pos.sissa.it/ Short Title for header Federica Primavera on behalf of the CMS Collaboration
Introduction
The electroweak symmetry breaking mechanism of the Standard Model (SM) predicts the existence of a neutral scalar boson, the Higgs particle. While a boson consistent so far with its expected properties has been recently observed at a mass of about 125 GeV/c 2 [1,2], its exact properties and the detailed structure of the Higgs sector still need further investigation.
However, the SM Higgs boson suffers from quadratically divergent self-energy corrections at high energy. Numerous extensions to the SM have been proposed to address these divergencies.
In the model of supersymmetry (SUSY), a symmetry between fundamental bosons and fermions, a cancelation of these divergencies occurs. In the MSSM the Higgs sector contains two Higgs boson doublets [3,4]. One doublet couples to the up-type and one to the down-type fermions. After electroweak symmetry breaking, five Higgs bosons remain: the CP-odd neutral scalar A 0 , the two charged scalars H ± and the two CP-even neutral scalars h and H 0 . The model is described by a large number of parameters, but, by constraining a lot of them in the most conservative way, it can be described in terms of just two free parameters: m A 0 , the mass of the neutral scalar A 0 , and tan β , the ratio between the vacuum expectation values of the two doublets. For this scenario, the most conservative m max h , the masses of the other four Higgs bosons can be expressed as: where m W and m Z are the masses of the W ± and Z 0 bosons. In this lecture the results concerning both searches for charged and neutral MSSM Higgs bosons in the m max h scenario are presented: the charged Higgs has been studied in the low mass hypothesis, in the dominant τν decay channel; the neutral Higgs have been studied in the bb, τ + τ − and µ + µ − channels.
Search for charged Higgs
The searches for charged Higgs performed at CMS [?] concern just a light mass hypothesis [5].
The assumption that the charged Higgs mass is smaller than the difference between the masses of the top and the bottom quarks is applied. If m H + < m t − m b , the Higgs can be produced in the top quark decays t → H + b. For values of tan β > 5, the charged Higgs boson preferentially decays to a τ lepton and a neutrino, therefore, deriving the experimental limits, we assume that the branching fraction B(H + → τ + ν τ ) is equal to 1.
The dominant top quarks production process at LHC is pp → tt + X via gluon fusion. The possible decays of the top pairs are tt → H ± W ∓ bb and tt → H ± H ∓ bb, where each charged Higgs boson decays into a τ lepton and a neutrino. Depending on the τ decay, three not-overlapped final states are searched for, all requiring missing transverse energy and multiple jets coming from the hadronization of b-quarks: fully hadronic, semi-leptonic and leptonic channel. After a first common pre-selection, each final state is independently studied: more specific selection cuts are applied, systematic errors and limits are calculated, and finally the results are combined between them.
Depending on the final states, we have different background sources. Several processes affect all the three categories like SM decays of top pairs; multi-jets events with large E miss T , where the jets are misidentified as τ h or b-jets; and W + jet events. Processes as Drell-Yan affect only the semileptonic and the leptonic channels. Therefore a different background estimation is performed for each category: for the hadronic and semi-leptonic channels both data and simulation are used, while for the leptonic channel only the simulation is exploited. The results are obtained for an amount of
Search for neutral Higgs
The MSSM neutral Higgs production pp → Φ 0 +X at the LHC is dominated by two processes: bb-associated production, where Φ 0 is produced together with a bb pair, and the gluon-gluon (gg) fusion process. For relatively large values of tan β , the Higgs couplings to u-type particles are suppressed while the couplings to d-type particles are enhanced by a factor tanβ , relative to the SM. Therefore, in the MSSM, the combined cross section of Higgs boson production in association with b quarks is enhanced by a factor 2 tan 2 β .
For the same reason the Higgs decay into b quarks has a very high branching fraction (90%), even at large values of the Higgs mass, with the disadvantage to be difficult to separate from the very large QCD background. Despite their low branching ratios, the Φ 0 → τ + τ − and the Φ 0 → µ + µ − decay channels provide higher sensitivity than Φ 0 → bb. While the first process has a branching ratio larger by a factor (m τ /m µ ) 2 and provides better sensitivity in terms of limits calculation, the Φ 0 → µ + µ − has a cleaner experimental signature, due to the full reconstruction of the final state. The analyses performed so far at CMS concern these three final states.
In case of bb final state, in order to discriminate the signal, only the associated production is taken into account. There are two analyses for this channel: one using a full hadronic trigger based on an high p T threshold for jets and on-line b-tagging; the other exploiting dedicated triggers based on the detection of moderately high transverse momentum p T , non-isolated muons and two jets with on-line b-tagging. The use of a muon-and jet-triggered dataset allows to tag the semimuonic decay of one of the b-quarks, and also to tolerate lower energy thresholds on jets in the trigger, improving the overall sensitivity, especially in the low mass region. The main background arises from the heavy flavor multi-jet QCD. Both the analyses adopt a data-driven approach for the background estimation.
The first analysis [6], done on 4.0 fb −1 of data taken in pp collision at √ s = 7 TeV, reconstructs the invariant mass of the two b-tagged leading jets, for events containing at least three leading jets that are also coming from b-quark.
Even the second analysis [7], done on 4.8 fb −1 of data taken in same condition of the previous one, reconstructs the invariant mass of the two b-tagged leading jets, but for events with three b-tagged leading jets of which one of them contains the not isolated muon selected by the trigger. There is no evidence of signal in data, and the upper limit is set on the σ (pp → bΦ) × B(Φ → bb), and also projected in the (m A 0 ,tan β ) plane, by combining the two analyses ( fig. 3).
The searches for neutral Higgs in leptonic decays, are instead sensitive to both production mechanisms, bb-associated and gluon-gluon fusion production, that dominates at low tan β values.
The search for Φ 0 → τ + τ − is done on data collected during 2011 and 2012, that correspond to an integrated luminosity of 4.9 fb −1 in pp collision at 7 TeV and 17 fb −1 at 8 TeV [8]. Four different τ + τ − final states are studied where one or two taus decay leptonically eτ h , µτ h , eµ, µ µ, where τ h denotes a hadronic decay of τ.
In all the cases two high p T -isolated leptons, E miss T in a compatible direction with visible τdecay, and two high p T jets are required. In order to maximize the sensitivity, the selected events are split in two categories: one with at least one jet coming from b-quark (associated production) and the other with the rest of events (gluon fusion). After all the cuts the main backgrounds come from Drell-Yan processes, QCD multijets where one jet is misidentified as an isolated electron or muon, and W+jets events where one jet is misidentified as a τ h . All these contribution are estimated from data, while other less relevant processes are evaluated on simulation. The τ-pair mass is reconstructed using a maximum likelihood technique. The algorithm computes the τ-pair mass that is most compatible with the observed momenta of visible τ decay products and the missing transverse energy reconstructed in the event. The algorithm gives a τ-pair mass distribution ( fig. 4) consistent with the true value and a width of 15-20%. The signal for Φ 0 → µ + µ − is characterized by the presence of two oppositely charged muon tracks with high p T , isolated from the other particles and jets in the event. Such events also have a rather small missing transverse energy E miss T . In order to have the best significance the events are divided in three non-overlapping categories: events with at least one jet tagged as a b-jet candidate, events without b-jet but with an additional third muon, and all other events that do not belong to previous categories.
The analysis [9] is performed on 4.96 fb −1 collected in pp collision at √ s = 7 TeV. As can be seen from figure 6 the main remaining sources of background, after the overall selection, are the Drell-Yan events, in particular the bb-Z 0 process is an irreducible background for bb-associated production, and the decays from tt. The background is estimated by fitting the data and Monte Carlo simulation is used only to compute the expected signal efficiency. No evidence of the MSSM m max h scenario Higgs boson production is found within the sensitivity of each category. Upper limits are calculated in the (m A 0 ,tan β ) plane, excluding, for each m A 0 point, all the values above the first one that excludes the signal at 95% CL. Using this limit on the ratio σ MSSM /σ SM and the knowledge of the MSSM cross section, also the limit on the σ MSSM × B.R. is obtained ( fig. 7).
Conclusions
In this letter MSSM Higgs boson searches are presented, for both the charged and neutral higgs physical states. The results are obtained with the full statistic collected at CMS during the 2011 at 7 TeV, except for the Φ 0 → τ + τ − that exploits also a part of 2012 data at 8 TeV.
No signal was observed, providing 95% CL exclusion limits for cross section times B.R. of each process, and their projection in the MSSM (m H , tan β ) parameter space, where m H denotes a generic charged or neutral Higgs boson. These limits are expected to improve with the overall statistics of the 2012. | 2019-04-22T13:11:03.242Z | 2013-07-25T00:00:00.000 | {
"year": 2013,
"sha1": "bf9eb618225c5240581812956c0884a50324c348",
"oa_license": "CCBY",
"oa_url": "http://cds.cern.ch/record/1545359/files/CR2013_101.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "0b03a55ccd01f37284372ba00eabbcb2223317c5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
6350960 | pes2o/s2orc | v3-fos-license | Block Copolymers in Electric Fields: A Comparison of Single-Mode and Self-Consistent Field Approximations
We compare two theoretical approaches to dielectric diblock copolymer melts in an external electric field. The first is a relatively simple analytic expansion in the relative copolymer concentration, and includes the full electrostatic contribution consistent with that expansion. It is valid close to the order-disorder transition point, the weak segregation limit. The second employs self-consistent field (SCF) theory and includes the full electrostatic contribution to the free energy at any copolymer segregation. It is more accurate but computationally more intensive. Motivated by recent experiments, we explore a section of the phase diagram in the three-dimensional parameter space of the block architecture, the interaction parameter and the external electric field. The relative stability of the lamellar, hexagonal and distorted body-centered-cubic (bcc) phases is compared within the two models. As function of an increasing electric field, the distorted bcc region in the phase diagram shrinks and disappears above a triple point, at which the lamellar, hexagonal and distorted bcc phases coexist. We examine the deformation of the bcc phase under the influence of the external field. While the elongation of the spheres is larger in the one-mode expansion than that predicted by the full SCF theory, the general features of the schemes are in satisfactory agreement. This indicates the general utility of the simple theory for exploratory calculations.
I. INTRODUCTION
Block copolymers (BCP) consist of several chemically distinct sub-chains. They are not only interesting as a model system for self-assembly, but also for their chemical versatility and affordability which have enabled their use in applications such as photonic waveguides [1], tough plastics [2], ordered arrays of nano-wires [3], etc. At a given chemical architecture and temperature, there is one thermodynamically stable meso-phase, with typical length-scales comparable to the chain size (∼ 10-500 nm). However, the material is rarely perfectly ordered, but rather is composed of many randomly oriented grains of size ∼ 1 µm. This has an adverse effect on nanotechnological applications.
A useful way to achieve improved long-range order is to subject the BCP sample above its glass transition to an external electric field E 0 . Due to the coupling between the field and the spatially-varying dielectric constant κ(r), there is a preferred orientation of the grains with respect to the field [4,5,6,7,8,9,10,11,12,13]. It has been shown by Amundson et al. [4,5] that the electrostatic free-energy penalty associated with dielectric interfaces which are not parallel to the electric field direction is the driving force for structures to reorient so that their interfaces are parallel to the field (∇κ(r) perpendicular to E 0 ). While the free energy penalty can be eliminated by this reorientation of lamellae and cylinders, it cannot be eliminated in the body-centered-cubic (bcc) phase, but only reduced by distorting the bcc spheres. Thus, the free energy of this distorted bcc phase, whose symmetry is reduced to R3m, increases with respect to the full disor-dered liquid (dis), lamellar (lam) and hexagonal (hex) phases [11], a circumstance which can bring about a phase transition. The effect of the electric field on the BCP morphology has been substantially accounted for recently [14] by incorporating the electrostatic Maxwell equations in the full set of selfconsistent field (SCF) equations, which permits calculation of the phase diagram at arbitrary degrees of segregation.
In this paper we compare two theoretical approaches to such a system; the aforementioned SCF study and a simple analytical approximation consisting of a Ginzburg-Landau expansion of the free-energy [15], valid only close to the orderdisorder temperature (ODT). The paper is organized as follows. In Section 2 we present the free-energy model which includes the electrostatic energy of the BCP in the field. In Section 3 we calculate the way in which an initial mesophile deforms under the influence of the field, and also find the relative stability of the competing phases. A comparison is made with the results of the SCF model. Section 4 contains a brief conclusion.
II. MODEL
Although the effect we consider here is generic to any multi-block BCP melts, we will restrict the discussion in this paper to the simplest A/B di-block copolymer, where a spatial variation of the relative A/B monomer concentration yields a spatial dependence of the dielectric constant and, hence, of the response to an external electric field.
We also assume for simplicity that the A monomeric vol-ume is equal to the B one. Then the volume fraction of the A monomers f , (0 ≤ f ≤ 1), is equal to its molar fraction. The order parameter φ (r) is defined as the local deviation of the A-monomer concentration φ A (r) from its average value: φ (r) = φ A (r) − f . From an incompressibility condition of the melt we also have at each point r, φ B (r) = 1−φ A (r). In the absence of any external electric fields, the bulk BCP free-energy per polymer chain, F b , in units of k B T , can be written as a functional of the order parameter, φ (r). One way of generating a simple analytical expansion in the order parameter relies on a Ginzburg-Landau-like free energy, which can be justified close to the order-disorder point (ODT) [15,16,17] and is repeated here without further justifications: where Ω is the system volume, and b is the Kuhn length, R g the radius of gyration, χ is the Flory parameter, N = N A + N B the total chain polymerization index, Nχ s the spinodal value of χN [15], c is a constant of order 1, and λ and u are functions of f as in refs. [15,16,17]. The phase diagram in the ( f ,χN) plane, as derived from the freeenergy, eq 1, is symmetric with respect to exchange of f and 1 − f . For small values of χ ∼ 1/T , the melt is disordered: φ (r) = f is constant. For χN larger than the ODT value of ≃ 10.5 and for nearly symmetric BCP composition ( f ≈ 1 2 ), the lamellar phase is the most stable. As | f − 1 2 | increases, the stable phases are doubly-connected gyroid, hexagonal and bcc phases [15,16,18].
Let us now consider a BCP slab placed in an external electric field, E 0 . The free-energy per polymer chain, again in units of k B T , is F tot = F b + F es , where the electrostatic energy contribution F es is given by the integral Here ε 0 is the vacuum permittivity, κ(r) is the local dielectric constant, v p is the volume per chain, and ψ is the electrostatic potential obeying the proper boundary conditions on the electrodes. The local field is E(r) = −∇ψ. We note that the variation of F es with respect to ψ yields which is the usual Maxwell equation ∇·D = 0 for the displacement field D = ε 0 κE. We consider a simple geometry of a BCP slab filling the gap between two parallel and flat electrodes separated by a distance d and potential difference V . Even when a non-homogeneous dielectric material like a BCP fills the gap between the two electrodes, the spatially averaged electric field in between the electrodes E is constrained to be E 0 = V /d. The local field E(r) differs from its average due to the nonuniformity of the dielectric constant, since κ = κ(φ ) depends on the local concentration φ (r) through a constitutive equation. In this paper we assume for simplicity a linear constitutive relation, where throughout this paper we use κ A = 6.0 and κ B = 2.5, thus modelling an A/B diblock copolymer where the A block is polymethylmethacrylate (PMMA) and the B block is polystyrene (PS), as is used in many experiments. Other constitutive relations can be considered [19]. When a field is applied on a melt in the lamellar or hexagonal phases, it exerts torque which causes sample rotation. The torque is zero, and the energy lowest, when the lamellae or cylinders are oriented parallel to the field. In such states, as well as the disordered phase, the electrostatic energy, eq 3, of the system is equal to a reference energy, given in eq 6. The bcc array of spheres, on the other hand, always has dielectric interfaces that are not parallel to the field, and its electrostatic energy is higher than the reference value. Hence, the spheres elongate in the applied field direction, to an extent which is a balance between electrostatic and elastic forces, as calculated below.
The reference energy per polymer chain, in units of k B T , is simply − 1 2 κ Ê 2 0 , whereÊ 0 is the applied field measured in the natural unit (k B T /ε 0 v p ) 1/2 , Let us estimate the value of the actual applied field corresponding toÊ 0 = 1. At 100 • C and using typical polymer volume per chain in the range v p ≃ 50 − 250 nm 3 , we find E 0 ≃ 47 − 107 V/µm. This is a relatively large field that can cause dielectric breakdown in some BCP films. Therefore, the experimentally interesting regime is usuallyÊ 0 < ∼ 1. The free-energy F tot as formulated above is valid close to the ODT point (weak segregation limit), where the concentration variations are small, φ (r) ≪ 1, and therefore the analysis can be carried out within the so-called one-mode approximation. Motivated by recent experiments [11,13], we concentrate on the transition from distorted spheres to cylinders or disordered melt in presence of an applied electric field. Taking E 0 to be in the (1,1,1) direction, we write the order parameter φ as a linear superposition of six components where The q's and k's are wave-vectors given by and all have the same magnitude q 0 . The three linearly dependent q i are orthogonal to the (1,1,1) direction and describe a hexagonal phase with axis along that direction. The three k i have equal and non-zero projections on the (1,1,1) axis. The six wavevectors transform into one another under the symmetry operations of the bcc phase. In the absence of an external field, each of these wavevectors would contribute equally in the order parameter expansion, so that g and w would be equal. These wavevectors characterize the first mode in such an expansion. Hence the name of the approximation. The amplitudes w(E 0 ) and g(E 0 ) depend on the magnitude of the average external field E 0 . Depending on the values of the two amplitudes, g and w, we can represent the order parameter of all phases of interest in the form of eq 7: w = g = 0 represents an undistorted bcc, while an R3m (distorted bcc) phase oriented along the (1,1,1) direction is represented by two non-zero amplitudes w = g. A hexagonal phase of cylinders whose long axis is in the (1, 1, 1) direction has w = 0 and g = 0. And finally, g = w = 0 represents the disordered melt. As was mentioned above, the spatially averaged electric field is simply the magnitude of the external field, E 0 . However, local changes in φ (r) give rise to local changes in κ(r). Consequently, the electric field can be written as follows: where δ E is the deviation from the average. The symmetry of the electrostatic potential follows from the Maxwell equation, eq. 4, the constitutive equation, eq. 5, relating the dielectric constant to the order parameter, and the R3m symmetry of the distorted bcc phase. From the symmetry of the potential, one finds that δ E can be decomposed in terms of the same bases of k's and q's given above: wherek i = k i /|k i | andq i = q i /|q i | are unit vectors. Within the one-mode approximation, the local field at each point is given in terms of two amplitudes, α and β . They determine the projection of the deviation field δ E onto the E 1 or E 2 directions.
In order to proceed we insert the E-field expressions, eq 11, into the electrostatic free-energy, eq 3. Using the definitions of eqs 7-9 and the properties: and we can perform the rather straightforward spatial averages of the various terms in the free energy, eq 3, and obtain: The last term is simply the reference energy which is common to all phases. For a given state of φ (a given BCP morphology), which is determined by a given value of w and g, the values of α(w, g) and β (w, g) are determined by the Maxwell equation, eq 4. This is equivalent to obtaining them by varying F es with respect to α and β . The procedure yields α = 0 and β = − 2/3∆κg/( κ + 1 2 ∆κw). Thus, F es is given by It is instructive to compare this result with the perturbation expression used by Amundson, Helfand and co-workers [4,5], a result obtained from eq 15 in the limit ∆κ/ κ → 0. In particular, the result of eq 16 yields a free energy which is symmetric under the interchange of monomers A and B, while the result of eq 15 does not. Such a symmetry is not expected in general: a system of ultra-high κ spheres (metallic limit) embedded in an insulator matrix has a different energy than the system of insulating spheres embedded in a ultra-high κ matrix, even when the average dielectric constant is the same κ .
Employing the single-mode Ansatz φ = wφ 1 + gφ 2 in eq 1, we finally obtain for the total free-energy per polymer chain in units of k B T In the next section we minimize this energy with respect to w and g at a given dimensionless external fieldÊ 0 and polymer architecture f , calculate the elongation the spheres of the bcc phase, and obtain the phase diagram. We compare the one-mode calculation (solid line) as obtained from minimization of eq 17 with a SCF calculation (dashed line). The R3m phase in the SCF calculation has a lower free energy that the solid line (one mode), and crosses the hex energy at higher value ofÊ 0 of about 0.49, while the one-mode approximation crosses atÊ 0 ≈ 0.43 (both marked with arrows). In this figure and following ones we used κ A = 6 and κ B = 2.5, modelling a PMMA-PS copolymer.
III. RESULTS
As noted above, the functional form φ = wφ 1 + gφ 2 allows us to distort smoothly a bcc array of spheres (having nonzero w = g) via a distorted bcc phase (w = g), and into a hexagonal array of cylinders (non-zero w but with g = 0). The disordered phase is given by w = g = 0. One is thus able to obtain the full phase diagram by minimizing eq 17 with respect to the amplitudes w and g.
Before presenting the phase diagram, let us consider a point in the ( f , Nχ) plane for which the stable phase at zero E field has a bcc symmetry. For presentation purposes, in figure 1 we have subtracted from the free energy the reference electrostatic energy, − κ Ê 2 0 /2, common to all phases, also subtracted the total free energy of the bcc phase in zero field, F bcc tot (0), and normalized the resulting free energy by that of the hex phase in zero field; that is we have plotted In the figure we show how the free energy f n changes witĥ E 0 for f = 0.3 and Nχ = 14.4. AtÊ 0 = 0 the bcc is the stable phase, and its free energy increases with increasing field E 0 , until it equals the free energy of the hex phase at a transition fieldÊ 0 ≃ 0.43. At larger fields the stable structure is a hex phase of cylinders oriented along the external field E 0 . The solid line in figure 1 is the result obtained from the one-mode approximation given above, while the dashed line is a obtained from the SCF theory, (as in Ref [14]). It has a lower free-energy. Consequently, the transition field in the SCF framework is higher and occurs at aboutÊ 0 ≃ 0.49. Figure 2 is a plot of the amplitudes w(Ê 0 ) and g(Ê 0 ), normalized by their zero-field value w(Ê 0 = 0) = g(Ê 0 = 0). Both amplitudes start at their common value in the undistorted bcc phase. As the field increases, w increases while g decreases. The spheres elongate in the direction of the field as a result of competition between electrostatic and elastic forces. At the transition field, there is a sharp, discontinuous, transition in the order parameter. Above this field, w attains a fixed value while g drops abruptly to zero. In this state the BCP morphology is that of cylinders oriented parallel to the external field. The dashed lines correspond to the values obtained from the SCF theory. Clearly, in the one-mode approximation, the spheres' deformation and eccentricity are larger than in the SCF theory.
The above calculation can be repeated for any ( f , Nχ) andÊ 0 field values and allows the construction of the full three-dimensional phase diagram in the ( f , Nχ,Ê 0 ) parameter space. In figure 3 we present a cut of the phase diagram at fixed f = 0.3. The region of a stable R3m phase (distorted bcc) is bound by two lines of phase transitions: one between this phase and the disordered phase, and the other between it and the hex phase. These two lines meet at the triple point (χ t , E t ). In figure 3, the different triple point values obtained from the two calculations are used to rescale both axes: χ/χ t andÊ 0 /E t . At fields larger than E t the R3m is not stable at any value of χ. For an additional comparison between the theories, we have examined the case in which, at a fixed value of f = 0.3, the dielectric constants of the majority and minority components are interchanged (i.e. κ A = 6.0 ↔ κ B = 2.5, hence, ∆κ → −∆κ). In both theories we find an increase in the value of the external field needed to bring about a transition from the distorted bcc phase to the hex phase. Thus, this subtle effect, which is not captured by the perturbation result of eq 16, is obtained in the simple one-mode approximation, eq 15.
IV. CONCLUSIONS
A simple theory for a non-homogeneous diblock copolymer (BCP) melt in an external electric field is presented, and compared with a more accurate, but more computationally intensive self-consistent field (SCF) one. The differences be-tween the two theories in zero external field are well known. In particular, the accuracy of the phase boundaries produced by the one-mode approximation deteriorates outside the vicinity of the ODT point (weak segregation) as compared to the SCF theory [18]. However, as in the zero field case, the qualitative behavior of the system in the presence of a field is described surprisingly well. The full electrostatic free-energy contribution is used, consistent within the one-mode approximation, eq 15. This was not accounted for in previous analytical studies [4,5,11,13], where only quadratic terms in the electrostatic potential were retained. The simple one-mode approximation captures the elongation of the spheres of a bcc phase when placed under an external field. The elongation is in the direction of the applied E 0 field. The two amplitudes describing this elongation, w and g, as shown in figure 2. At a threshold value of the electric field, a first-order transition to a hexagonal phase occurs and the amplitudes jump discontinuously.
As shown in figure 3, the simple, analytic, one-mode approximation also captures the essence of the phase diagram; the reduction in the phase space occupied by the distorted bcc phase as the field increases, and its eventual disappearance at a triple point.
Lastly the one-mode theory also captures the subtle interplay between structure and electrostatic response as evidenced by its prediction of a different critical field for phase transitions when the dielectric constants of the constituents are interchanged, a prediction in agreement with the more accurate theory [14].
Given its ability to capture all of the above effects, and given its extreme simplicity, such a theory could serve for useful exploratory studies in other problems concerning the effect of electric fields on block copolymers. | 2016-10-26T03:31:20.546Z | 2005-08-07T00:00:00.000 | {
"year": 2005,
"sha1": "997b69aef5535d336f00f3320282971e3709b8cd",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0508179",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a594106fef7f8a327fdf84218cfab39d6f726d77",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
} |
129945435 | pes2o/s2orc | v3-fos-license | Differentiation between non-hypervascular pancreatic neuroendocrine tumour and pancreatic ductal adenocarcinoma on dynamic computed tomography and non-enhanced magnetic resonance imaging
Purpose To determine the differentiating features between non-hypervascular pancreatic neuroendocrine tumour (PNET) and pancreatic ductal adenocarcinoma (PDAC) on dynamic computed tomography (CT) and non-enhanced magnetic resonance imaging (MRI). Material and methods We enrolled 102 patients with non-hypervascular PNET (n = 15) or PDAC (n = 87), who had undergone dynamic CT and non-enhanced MRI. One radiologist evaluated all images, and the results were subjected to univariate and multivariate analyses. To investigate reproducibility, a second radiologist re-evaluated features that were significantly different between PNET and PDAC on multivariate analysis. Results Tumour margin (well-defined or ill-defined) and enhancement ratio of tumour (ERT) showed significant differences in univariate and multivariate analyses. Multivariate analysis revealed a predominance of well-defined tumour margins in non-hypervascular PNET, with an odds ratio of 168.86 (95% confidence interval [CI]: 10.62-2685.29; p < 0.001). Furthermore, ERT was significantly lower in non-hypervascular PNET than in PDAC, with an odds ratio of 85.80 (95% CI: 2.57-2860.95; p = 0.01). Sensitivity, specificity, and accuracy were 86.7%, 96.6%, and 95.1%, respectively, when the tumour margin was used as the criteria. The values for ERT were 66.7%, 98.9%, and 94.1%, respectively. In reproducibility tests, both tumour margin and ERT showed substantial agreement (margin of tumour, κ = 0.6356; ERT, intraclass correlation coefficients (ICC) = 0.6155). Conclusions Non-hypervascular PNET showed well-defined margins and lower ERT compared to PDAC, with significant differences. Our results showed that non-hypervascular PNET can be differentiated from PDAC via dynamic CT and non-enhanced MRI.
Introduction
Pancreatic ductal adenocarcinoma (PDAC) is one of the most aggressive cancers. It has a poor prognosis, and the five-year survival rate is less than 4% [1][2][3]. This high mortality rate is due to the cancer's biological aggressiveness and advanced state at the time of diagnosis [4]. In recent years, the incidence of PDAC has been increasing, and it is projected to be the second most common cause of cancer-related death in the United States in 2020 [5].
Surgery is the only curative treatment for patients with PNET or PDAC. Patients with PDAC need a more radical surgery that includes lymphadenectomy [4,9,10], while those with PNET, which is non-invasive and smaller in size (< 20 mm), may only require a limited resection without lymphadenectomy, such as tumour enucleation, central pancreatectomy, or laparoscopic surgery [10]. Therefore, it is important to differentiate between PNET and PDAC preoperatively to estimate the prognosis and plan a surgical strategy.
PNET usually shows arterial enhancement with progressively decreased enhancement [2,3,6,11]. Although clinicians can easily diagnose PNET on preoperative imaging because, other than PNET, the number of hypervascular pancreatic tumours is very small [11], up to 48.6% of PNET does not show arterial enhancement, as is common in most PDAC [9,12,13]. Therefore, non-hypervascular PNET can be a differential diagnosis during assessment for PDAC via imaging.
Only a few studies have focused on the differences between non-hypervascular PNET and PDAC found on diagnostic images. The purpose of this study was to evaluate the findings on dynamic computed tomography (CT) and non-enhanced magnetic resonance imaging (MRI) for differentiation between non-hypervascular PNET and PDAC.
Patient selection
This retrospective cohort study was approved by the review board of our institution, and the need for informed consent was waived. The records of the patients with pathologically proven PNET or PDAC were reviewed when clinical information was available. All patients underwent surgery, including pancreatoduodenectomy, distal pancreatectomy, central pancreatectomy, and tumour enucleation, between April 2011 and June 2017. According to the European Society for Medical Oncology (ESMO) Clinical Practice Guidelines, dynamic CT plays a central role in the evaluation of pancreatic tumour, especially when pancreatic cancer is suspected, and should be the first choice of imaging investigation [14]. Furthermore, MRI could be useful as a supplementary imaging modality to evaluate vessel involvement and biliary anatomy, detect liver metastasis not detected by CT, and differentiate cystic lesions [14]. Based on the literature, we evaluated pancreatic tumours with dynamic CT and non-enhanced MRI. The inclusion criteria were as follows: (a) patients who underwent dynamic CT and non-enhanced MRI according to our institutional routine protocol within six months before surgery and (b) patients who had a detectable tumour, and the region of interest (ROI) could be set via CT and MRI. Meanwhile, the exclusion criteria were patients who had tumours with visually higher enhancement than pancreatic parenchyma on arterial phase.
CT scanning protocol
All CT images were obtained using either a 64-channel scanner (SOMATOM Perspective, Siemens, n = 34) or a 128-channel scanner (SOMATOM Definition Flash, n = 32 or SOMATOM Definition AS+, Siemens, n = 36). The scanning protocol consisted of non-enhanced and biphasic contrast-enhanced scans. The non-enhanced phase was obtained through the upper abdomen, including the entire liver and pancreas. Then, a bolus of 600 mgI/kg of iodine contrast medium was administered using an automatic power injector at a rate of 2.0-3.0 ml/s for 33 seconds. The contrast media used included iohexol (Omnipaque 350 Injection, Daiichi Sankyo, n = 34 or IOVERIN 350, Teva Takeda Pharma, n = 25), iopamidol (Iopamiron 370 Inj., Bayer Yakuhin, n = 20 or Oypalomin 370 injection, Fuji Pharma, n = 18), or iomeprol (Iomeron 350, Eisai, n = 5). Arterial phase was obtained 10 seconds after reaching 80 Hounsfield units with the ROI placed on the aorta at the level of the celiac artery, while portal venous phase was obtained 90 seconds after triggering. The arterial and portal venous phases were obtained through the upper abdomen and through the whole abdomen, respectively.
MRI scanning protocol
All MRI images were acquired using a 1.5-T whole-body MRI system with a six-channel phased array as the receiver coil (MAGNETOM Avanto, Siemens, n = 43 or MAGNETOM Symphony, A Tim System, Siemens, n = 59). The following sequences were analysed in the routine abdominal MRI protocol: transverse T1-weighted imaging (T1WI) using a fat-saturated 2D or 3D gradient echo (2D fast low-angle shot or 3D volumetric interpolated breath-hold sequence); transverse T2-weighted imaging (T2WI) with fast spin-echo; and transverse diffusion-weighted imaging (DWI) performed as a single-shot echo-planar imaging pulse sequence with b-values of 50 and 800 s/mm² using respiratory triggering. The apparent diffusion coefficient (ADC) was calculated from the b-values of 50 and 800 s/mm². The MRI pulse sequence parameters are summarised in Table 1.
Imaging analysis
An abdominal radiologist with 11 years of experience, who was blinded to pathological diagnosis and clinical information, retrospectively reviewed anonymised dynamic CT and non-enhanced MRI images on a Picture Archiving and Communication Systems workstation monitor.
For qualitative analysis, the following imaging parameters were evaluated: (a) tumour margin (well-defined or ill-defined), (b) cystic change or necrosis (present or absent), (c) calcification (present or absent), (d) upstream pancreatitis (present or absent), and (e) dilated main pancreatic duct (present or absent). Upstream pancreatitis was defined as pancreatic parenchyma showing high intensity on DWI and low intensity on fat-saturated T1WI. The dilated main pancreatic duct was defined as positive when its diameter was more than 3 mm. Pancreatic parenchyma was defined as a presumed non-pathological pancreatic region in which focal abnormalities such as pancreatitis and/or cystic lesions were not included.
For quantitative analysis, the following parameters were evaluated: (a) tumour size, (b) CT attenuation and MRI signal intensity of tumour and pancreatic parenchyma, (c) ADC values of tumour, (d) CT attenuation and MRI signal intensity ratios of tumour to pancreatic parenchyma, and (e) enhancement ratio of tumour (ERT). The ROI was set on the largest solid portion of the tumour, avoiding the cystic component, on CT images of the non-enhanced, arterial, and portal venous phases, and on MRI images of fat-saturated T1WI, T2WI, DWI, and the ADC map. We measured the density and signal intensity three times per ROI, and the average value was calculated. The CT attenuation and MRI signal intensity ratios of tumour to pancreatic parenchyma were calculated as: mean CT attenuation or MRI signal intensity of tumour / mean CT attenuation or MRI signal intensity of pancreatic parenchyma. The ERT was calculated as (Tp − Ta) / (Ta − Tn), where Tn, Ta, and Tp are the attenuation of the tumour in Hounsfield units measured during the non-enhanced, arterial, and portal venous phases, respectively.
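As a concrete illustration of these formulas, the following minimal sketch computes the tumour-to-parenchyma ratio and the ERT; the function names and HU values are ours for illustration, not measurements from the study.

def attenuation_ratio(tumour_hu: float, parenchyma_hu: float) -> float:
    """Tumour-to-parenchyma CT attenuation ratio for a given phase."""
    return tumour_hu / parenchyma_hu

def enhancement_ratio(tn: float, ta: float, tp: float) -> float:
    """ERT = (Tp - Ta) / (Ta - Tn), where Tn, Ta, Tp are tumour
    attenuations (HU) on the non-enhanced, arterial and portal venous phases."""
    return (tp - ta) / (ta - tn)

# Hypothetical tumour: 35 HU unenhanced, 60 HU arterial, 75 HU portal venous.
print(attenuation_ratio(60.0, 100.0))        # arterial-phase ratio -> 0.6
print(enhancement_ratio(35.0, 60.0, 75.0))   # ERT = 15 / 25 = 0.6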
To assess the reproducibility of significant variables, a second radiologist with three years of experience in abdominal imaging evaluated variables that showed significant differences on multivariate analyses, while being blinded to the pathological and clinical information.
Statistical analysis
Fisher's exact test was used for qualitative variables, and the Mann-Whitney U test was used for quantitative analysis. Multivariate logistic regression analysis was performed using variables estimated to be related to outcomes based on knowledge and clinical judgment from previous reports [6,10,11,13].
The diagnostic performance of each quantitative variable was estimated via receiver operating characteristic (ROC) analysis. The optimal thresholds for differentiating between the PNET group and the PDAC group were chosen at the highest possible sensitivity and specificity on the ROC curves. Variables set with optimal thresholds were fit to the multivariate logistic regression analysis. Statistical analysis was executed using Ekuseru-Toukei 2015 (SSRI, Tokyo, Japan) and R (The R Project for Statistical Computing, version 3.3.0). For all tests, a p-value of less than 0.05 was considered statistically significant. Inter-observer agreement of findings was evaluated by calculating κ values for dichotomous variables or intraclass correlation coefficients (ICC) for continuous variables.
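The criterion of "the highest possible sensitivity and specificity" is commonly operationalised as maximising Youden's J (sensitivity + specificity − 1); the sketch below makes that assumption and uses scikit-learn rather than the software named in the text, with made-up labels and scores.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# y: 1 = PDAC, 0 = non-hypervascular PNET (label coding is our assumption);
# score: a quantitative variable such as ERT (values are placeholders).
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])
score = np.array([0.2, 0.4, 0.5, 0.9, 1.1, 1.3, 0.8, 1.5])

fpr, tpr, thresholds = roc_curve(y, score)
j = tpr - fpr                       # Youden's J at each candidate cut-off
best = thresholds[np.argmax(j)]     # threshold maximising sensitivity + specificity
print(f"AUC = {roc_auc_score(y, score):.2f}, optimal threshold = {best}")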
Results
A total of 102 patients (15 patients with non-hypervascular PNET and 87 patients with PDAC) met the inclusion criteria; their characteristics are summarised in Table 2. The mean ROI of the tumours was 121.5 ± 88.5 mm². Age and tumour size showed significant differences: PNET patients were younger (p < 0.001), and their tumours were smaller than those of PDAC patients (p = 0.04).
The results of quantitative and qualitative assessments are presented in Table 3. The frequency of a well-defined margin differed significantly between non-hypervascular PNET and PDAC (p < 0.001). Only two PNETs (13.3%) showed an ill-defined margin; these two tumours were poorly differentiated neuroendocrine carcinomas classified as histopathological G3, while the remaining 13 tumours were G1 or G2. The absence of both upstream pancreatitis and a dilated main pancreatic duct also differed significantly between non-hypervascular PNET and PDAC (p < 0.001) (Figures 2-4). On univariate analysis, the CT attenuation ratio of tumour to pancreatic parenchyma on the arterial phase (p < 0.001) and the ERT (p < 0.001) showed significant differences, but the CT attenuation ratio of tumour to pancreatic parenchyma on the portal venous phase did not (p = 0.05). The MRI signal intensity ratios of tumour to pancreatic parenchyma on all sequences and the ADC also showed no significant differences on univariate analysis.
The results of reproducibility tests of significant variables showed that the κ value for tumour margin was 0.64.
Discussion
We evaluated the findings of dynamic CT and non-enhanced MRI that contributed to differentiation between non-hypervascular PNET and PDAC, and we conducted blinded observer tests to assess the reproducibility of these imaging findings. Our results showed a significant difference in well-defined margins between non-hypervascular PNET and PDAC (p < 0.001). These results were similar to those of previous studies reporting the same morphological tendency of hypervascular PNET [6,10,11]. The presence of a well-defined margin was thus a useful feature for differentiating between PDAC and PNET, even in the non-hypervascular type, when dynamic contrast enhancement was inconclusive. Additionally, in our study, tumours with ill-defined margins were either PDAC or non-hypervascular G3 PNET. Although G3 PNET was identified in only two cases, these results suggest that radical surgery should be considered when a patient presents with a pancreatic tumour with an ill-defined margin on preoperative imaging, because such a margin raises the possibility of non-hypervascular G3 PNET or PDAC.
Our results show that the ERT of non-hypervascular PNET is significantly lower than that of PDAC. ERT is considered to be affected by the degree of tumour fibrosis, because abundant fibrosis reduces blood inflow to the tumour [15]. Histologically, PNET shows a lower degree of fibrosis, while PDAC shows abundant internal fibrosis [12,16]. As such, it is plausible that non-hypervascular PNET shows a lower ERT than PDAC. ERT is not affected by attenuation changes of the surrounding pancreatic parenchyma caused by tumour-induced pancreatitis, and it can be calculated using only the attenuation of the tumour; the use of ERT is therefore valuable in differentiating between PNET and PDAC. Jeon et al. [13] reported that hyper- or iso-enhancement in the portal venous phase is useful for differentiating between non-hypervascular PNET and PDAC. They also reported a line chart analysis of the temporal contrast-to-noise ratio (CNR), which is the enhancement ratio of the tumour adjusted by the paraspinal muscle for standardisation [13].
The line chart analysis of the temporal CNR of PDAC showed a higher rate of contrast enhancement over time than that of non-hypervascular PNET. This was similar to our finding of a lower ERT in non-hypervascular PNET and a higher ERT in PDAC, and our ERT results support their line chart analysis of temporal CNR as a dynamic curve. However, our results cannot be compared directly with theirs, because our CT acquisition times for the arterial and portal venous phases differed from those used in their protocol, and we used ERT as a single variable rather than a dynamic curve. The CT attenuation ratio of tumour to pancreatic parenchyma on the arterial phase showed relatively good discriminative performance on multivariate analysis, although it was not significant (p = 0.21). This could indicate that non-hypervascular PNET shows a substantially higher degree of enhancement than PDAC in the arterial phase on quantitative analysis, even though both non-hypervascular PNET and PDAC showed similar internal enhancement in the arterial phase on visual assessment. This may reflect the inherent histopathological property of the rich capillary network of PNET, which is associated with vascularity in the arterial phase [6].
A previous study by d'Assignies et al. showed that the blood flow of PNET is related to its grade, and that blood flow was significantly higher in the group of benign tumours [17]. In our study, 86.7% (13/15) of tumours were low-grade tumours classified as G1 or G2. Although they were excluded from this study, the hypervascular PNETs were likewise predominantly low grade [13]. Although non-hypervascular PNET is more likely than hypervascular PNET to include high-grade tumours, the majority of non-hypervascular PNETs are low-grade tumours that require less extensive surgery; thus, it is clinically important to differentiate non-hypervascular PNET from PDAC.
Little is known about the added value of non-enhanced MRI for differentiation between PNET and PDAC. In our study, the signal intensity of MRI including DWI and ADC showed no significant difference between PNET and PDAC. Such a result was similar to previous reports [11,18]. Concerning PNET, some studies showed that DWI and ADC have predictive value for tumour grading. This is particularly useful to differentiate between G1-2 and G3 tumours [19][20][21].
The limitations of our study are its single-site retrospective design and the imbalance between the numbers of PDAC and non-hypervascular PNET patients. Non-hypervascular PNET is the less frequent type of PNET, which is itself a rare tumour, and only 15 cases of non-hypervascular PNET were seen in our study; future studies with larger sample sizes are required to confirm our findings. Furthermore, our study included all grades of PNET. Some previous studies reported that DWI is useful for the differentiation between G1-2 and G3 tumours [19][20][21]; thus, if cases were subdivided into different grades of PNET (G1-3) and PDAC before evaluation, some imaging features might differentiate these tumours. Additionally, our arterial phase protocol was conducted earlier than recommended in the European Society for Medical Oncology (ESMO) Clinical Practice Guidelines [4,14]. In our CT imaging protocol, the arterial phase was acquired 10 seconds after reaching 80 Hounsfield units with the ROI placed on the aorta, which was approximately 30 seconds after administration of the contrast agent and earlier than the 40 seconds recommended in the ESMO Clinical Practice Guidelines. The pancreatic arterial phase should also be acquired to allow better comparison of CT imaging findings with previous reports.
In conclusion, a well-defined margin and a lower ERT of non-hypervascular PNET contributed to differentiation between non-hypervascular PNET and PDAC. Consequently, by interpreting the images correctly, unnecessary extensive surgery can be avoided.
Conflict of interest
The authors report no conflict of interest. | 2019-04-26T13:08:00.511Z | 2019-03-13T00:00:00.000 | {
"year": 2019,
"sha1": "bd63d1beb049448f73c99126b5a8b649aedaa763",
"oa_license": "CCBYNCND",
"oa_url": "https://www.termedia.pl/Journal/-126/pdf-36279-10?filename=Differentiation%20between.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bd63d1beb049448f73c99126b5a8b649aedaa763",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
35101612 | pes2o/s2orc | v3-fos-license | Gold medals, vitamin V and miscreant sports
It's all but become an unofficial Olympic sport: garnering gold by eluding doping authorities. Ever-craftier miscreants dabble in performance-boosting pharmaceuticals one step ahead of testing capabilities, leaving viewers to marvel at freakish athletes whose necks are wider than their heads and thighs are wider than their waists. Some sports, such as weightlifting, seem little more than a competition between drug designers.
It's a far cry from the horror that ensued when drug testing was first introduced for the 1968 Winter and Summer Olympic Games and an athlete was busted for, of all things, drinking beer. Now, citizens cross their fingers that the urine samples of their athletic favourites will pass muster and only wish that the embarrassment was as tame as that suffered by the Swedes when modern pentathlete Hans-Gunnar Liljenwall was stripped of a bronze medal for dipping into the local cerveza at the Mexico City Summer Games.
Subsequent developments over the ensuing 4 decades have forced the science of detecting performance-enhancing drugs to evolve drastically in the cat-and-mouse game between those who develop new screening tests and those who constantly seek new ways to artificially boost performance while evading the law.
The latest advances on both sides will be on display at the upcoming Beijing Olympics, where officials expect to administer a record 4500 tests between Aug. 8 and 24 in the hunt for minute traces of any of the hundreds of substances banned by the World Anti-Doping Agency.
Much of the effort goes toward uncovering new and illicit uses for otherwise everyday pills and medicines that were "created for good reasons but are abused by athletes," says David Howman, director general of the Montréal-based agency.
In that battle, Viagra (sildenafil citrate) has surfaced as the latest everyday-product-turned-performance-booster. World Anti-Doping Agency officials confirmed in June that they are studying the drug for potential inclusion on the list of banned substances. Athletes are said to be popping the anti-impotence pill because it increases blood flow to the lungs, thereby boosting cardiovascular capacity. One study found that in some cases the pill improved cyclists' times on a 6-kilometre course by up to 15% (J Applied Physiology 2006;100:2031-40).

The pill also made headlines for turning up in the lockers of professional baseball players, becoming the latest chapter in the sport's growing drug scandal. A cyclist at a race in Italy in May was busted when 82 Viagra pills, and syringes hidden inside tubes of toothpaste, were found inside his father's car.

Compounding the lure for athletes in Beijing (once known as Peking) is the widespread belief that the little blue pill's cardio boost will offset the effects of smog in the capital of the People's Republic of China. So concerned about pollution are Olympic authorities that they are allowing asthmatic athletes to use inhalers to stave off the air's effects. In the face of such concerns, it's expected Viagra use will soar in Beijing, particularly as its use will be permitted. Results of the Viagra tests announced by the World Anti-Doping Agency are not expected until sometime in 2009, meaning 2010 is the earliest a ban could be put in place.

Yet the environmental conditions in Beijing are hardly the entire explanation for the increasing number of instances in which Viagra is surfacing in sport. Either it's a new-found performance-booster or there's an eyebrow-raising prevalence of impotence amongst the world's top athletic performers. Doping authorities appear entirely skeptical of the latter hypothesis. Yet stickhandling between those taking the substance for legitimate cause and those who are looking for a boost creates a logistical nightmare.

"How on earth are they going to ban one of the most frequently taken medications in the world?" asks Dr. Christiane Ayotte, director of the doping control laboratory at the l'Institut national de la recherche scientifique in Laval, Quebec, while discussing the prevalence of Viagra and Cialis use amongst athletes.
Fuwa, the official mascot of the Beijing 2008 Olympics, is depicted here in a lantern featuring the 5 Olympic rings, which represent the 5 major regions of the world: Africa, the Americas, Asia, Europe and Oceania. At least 1 of the 5 colours of the rings -blue, yellow (appears orange in photo), black, green and red -can be found in every national flag of the world.
Stringer Shanghai/Reuters
Authorities trumpet the progress that has been made in catching those who cheat at the Olympics. But there have been few successes in addressing the systemic use that occurs between Games.
"We know we are not addressing the issue properly," particularly in the early stages of an athlete's career, Ayotte says. "The Olympics are unique but this is not the real life. The athletes do not start doping on the eve of their first Tour de France or their first Olympics." Canada has often led efforts to curb doping, and has often been asked to introduce and monitor programs around the globe. It's a regime that in many respects is the product of national shock and embarrassment that ensued when sprinter Ben Johnson was stripped of a gold medal at the 1988 Olympics when his urine sample was found to contain the testosterone-based anabolic steroid Stanozolol.
Yet, the Seoul Olympics also heralded a decade in which anti-doping efforts sagged, fewer athletes were getting caught and as a result, public faith in the integrity of the system suffered.
Until then, there had been fairly steady progress, with most Summer Olympics producing anywhere between 6 and 12 doping busts (Box 1). But the numbers dropped conspicuously in the 1990s, bottoming out at 2 doping infractions at the 1996 Olympics.
Out of that came the World Anti-Doping Agency, tasked with regaining some of the lost credibility.
Before the World Anti-Doping Agency was formed in 1999 following a Tour de France doping scandal that rocked the cycling world, debates about the unexpected benefits of everyday products like caffeine (initially banned but then reinstated) and Sudafed were few and far between. Efforts were largely focused on specialized substances like steroids, and those efforts were inconsistent. But the doping landscape has been greatly professionalized since then.

Sprinter Ben Johnson rocked Canada by testing positive for the testosterone-based anabolic steroid Stanozolol after winning the gold medal in the 100-metre dash at the 1988 Seoul Olympics. Runner-up American Carl Lewis promptly expressed shock and outrage. Johnson was stripped of his medal and it was awarded to Lewis. Roughly 15 years later, it was revealed that Lewis, who won 10 Olympic medals in his career, had tested positive for banned substances 3 times in the months before the 1988 Olympics, but the US Olympic Committee covered up the findings and cleared him to compete, as they did for hundreds of other US athletes who'd been dipping into the steroid box. The parade of disgraced athletes has essentially been unabated since.

Among the athletes caught doping, 7 were cross-country skiers and 4 were hockey players.
The increased number of positive tests is in part a function of the increased number of tests administered at each Games. At the 2000 Olympics, about 2000 doping tests were administered. That number grew to 3700 by the 2004 Olympics. Officials expect as many as 5000 tests will be conducted at the 2012 Olympics in London, England.
The increasing number is largely a result of the expansion of the rules governing who gets tested. In the past, the top 4 finalists in an event and 1 other athlete chosen randomly were subjected to tests.
But in Beijing, the top 5 athletes will be tested in addition to 2 chosen at random in each final. As well, random tests will be conducted throughout earlier stages of competition.
Beginning in 2000, Olympic athletes were also subject to pre-Olympic, out-of-competition testing to detect substances consumed prior to competition that wouldn't later appear on a test.
The sophistication of the tests has changed as well. Blood testing was introduced on a limited basis at the 1994 Winter Olympics and at the 2000 Summer Olympics. Urine remains the test subject of choice, but some drugs such as erythropoietin and human growth hormone are better detected in blood tests.

"We will be testing for [human growth hormone] in Beijing, using blood not urine," says Howman. "That's an advance."

It is the International Olympic Committee itself which will administer and monitor athlete testing in Beijing, not the World Anti-Doping Agency. The latter focuses on policies, regulations and monitoring the 33 facilities worldwide that have been approved for testing athletes' samples. An agency-approved lab in Beijing, though, will handle the bulk of actual lab work. The agency will have 12 people in Beijing on an independent monitoring team, auditing the International Olympic Committee and its anti-doping efforts. Another dozen staff will work in the Olympic Village, distributing information to the athletes on anti-doping efforts and reminding them of what is and is not allowed.

At its annual summit late last year in Montréal, the World Anti-Doping Agency added several substances to its list, such as selective androgen receptor modulators, a form of non-steroidal molecules similar to anabolic steroids in their effect. It also added any agents modifying myostatin functions, particularly myostatin inhibitors.

Selective androgen receptor modulators are typical of the sort of drugs now preferred by athletes. They are normally used to treat men who lack adequate amounts of the male sex hormones. But for athletes, they have the potential to deliver the desirable effect of bolstering testosterone levels, which builds strength and bone density, without the dangerous side effects typical of steroid use.

Myostatin inhibitors, similarly, are normally prescribed to patients suffering from muscle atrophy. But for athletes, the inhibitors remove the muscles' normal size regulators, thereby allowing the muscles to grow unencumbered. Neither of the groups has yet surfaced amongst athletes in competition testing. But the World Anti-Doping Agency added them to the list in a proactive move.

A ruling was also made on intravenous infusion. The agency prohibited such treatments except in the case of acute medical conditions. "There has not been much change since 2006 on banned substances," Howman says. "It is more tidying up the list so that it is both scientifically and legally proofed to prevent challenges to the list." A complete list of prohibited substances is available at www.wada-ama.org.
Authorities also say they are on particularly high alert because the Games are being held in Beijing. Many black market drugs originate from China and having the athletes in such close proximity to manufacturers risks making the drugs more readily available.
"There have been concerns in China," Howman says. "The greater and cardiovascular health of team members. That's because all the healthy hamstrings and strong muscles in the world will be of little use if an athlete is left unable to breathe comfortably.
In fact, concerns over air quality in Beijing have so worried Olympic officials that the International Olympic Committee ruled asthmatic athletes will be permitted to use inhalers in Beijing, under the therapeutic-use exemption. W hen Canadian athletes arrive in Beijing to compete in the upcoming Summer Olympics, it will not necessarily be pulled hamstrings, broken bones or other traditional ailments that keep Team Canada's medical staff up at night.
Instead, their biggest challenge will be preparing elite athletes to perform in a steamy, smog-filled climate that could severely affect the respiratory | 2018-01-05T18:55:39.347Z | 2008-07-29T00:00:00.000 | {
"year": 2008,
"sha1": "343f57da7b8d654d0bdcb5fd782536896faee0fa",
"oa_license": null,
"oa_url": "http://www.cmaj.ca/content/cmaj/179/3/219.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "343f57da7b8d654d0bdcb5fd782536896faee0fa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
198194426 | pes2o/s2orc | v3-fos-license | Clinical characteristics and treatment of thallium poisoning in patients with delayed admission in China
Supplemental Digital Content is available in the text
Introduction
Thallium is an odorless, colorless, and rare heavy metal that occurs in nature, primarily in the form of oxides, halides, sulphates, carbonates, and acetate compounds. [1] Thallium compounds have been widely used for rodenticides; however, due to cases of thallium poisoning, use of these compounds has been banned in many countries. [2][3][4] Despite this, thallium poisoning persists in developing countries, including China, primarily as a result of criminal actions. [5][6][7] Thallium salts are extensively absorbed through almost all routes of exposure. Oral bioavailability of hydrophilic thallium salts approaches 90% to 100%. [1] Since thallium salts can be distributed in many organs of the human body, the symptoms of thallium poisoning are diverse and non-specific. Therefore, individuals who experience thallium poisoning are prone to a delayed admission. There is also a lack of evidence for effective treatment methods in patients whose admission was delayed. Thus, this study aimed to summarize the clinical features and treatment experience of patients with thallium poisoning and delayed admission.
Study design and setting
This retrospective, descriptive, single-center study was conducted at the Affiliated Hospital Academy of Military Medical Sciences (the Poisoning Treatment Center of the Army and National Key Clinical Specialties) and analyzed data of patients who were diagnosed with thallium poisoning between 2008 and 2018. Blood and urine thallium tests were performed by the poisondetection laboratory of our hospital and measured by atomic absorption spectrometry. All study participants provided informed consent, and the retrospective study design was approved by the appropriate ethics review board of our hospital.
Patient selection and grouping
Eligible participants included patients diagnosed with thallium poisoning in our department. Patients with chronic thallium poisoning and patients with other poisoning were excluded. To distinguish the clinical features of patients with different admission times and according to the stages of thallium poisoning, [8] we divided patients into 3 groups according to the time from symptom onset to admission. Patients admitted to the hospital within 7 days were considered to have immediate admission or a mild delay in admission, and we classified this group of patients as early admission. The patients who presented between 7 and 14 days were categorized as moderate delay in admission, and the patients admitted after 14 days from symptom onset were categorized as severe delay in admission.
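The grouping reduces to binning the onset-to-admission interval at 7 and 14 days. Below is a minimal sketch with pandas; the column names and example values are hypothetical.

import pandas as pd

df = pd.DataFrame({"days_to_admission": [3, 7, 10, 14, 25]})

# Bins follow the text: <= 7 days = early; > 7 to 14 = moderate delay; > 14 = severe delay.
df["admission_group"] = pd.cut(
    df["days_to_admission"],
    bins=[0, 7, 14, float("inf")],
    labels=["early", "moderate delay", "severe delay"],
)
print(df)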
Data collection and variables
Data were obtained through the hospital case management office and included all patients with thallium poisoning within the specified research period. Variables were determined by QZW, WYA, and LGD; 2 doctors, BLL and LYQ, carefully reviewed the cases and collected case data. The follow-up data were obtained by telephone. Collected variables included the following:

1. descriptive analyses of basic patient characteristics, including sex, age, the etiologies of poisoning, modes of exposure, time from onset to presentation, symptoms at admission, organ damage and blood and urine thallium concentrations at admission, laboratory findings, and EMG (electromyogram), MRI (magnetic resonance imaging), and EEG (electroencephalogram) findings;
2. treatment methods employed, including detailed descriptions of the Prussian blue (PB) and blood purification methods;
3. prognoses of patients, including changes in blood and urine thallium concentrations and symptoms at discharge;
4. follow-up data.
Statistical methods
Data analysis was performed using SPSS Statistics Software (version 20; IBM Corp., Armonk, NY). The Shapiro-Wilk test was used to test the normal distribution of numerical variables. Continuous variables were expressed as means with standard deviations or, if the assumption of a normal distribution was violated, as medians with the interquartile range (IQR). Categorical variables were given as numbers and percentages. Either one-way ANOVA (with the post-hoc least square differences method) or the Kruskal-Wallis test (with the post-hoc Dunn multiple comparison test) was used for 3-group comparisons of continuous variables. The Fisher exact test (with a Bonferroni correction for multiple comparisons) was used to analyze contingency tables with small sample sizes. A 2-sided P value of < .05 was considered significant.
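A minimal sketch of this workflow using SciPy instead of SPSS; the group sizes mirror the study (8/9/17), but the concentration values and the 2×2 table below are simulated placeholders, not study data.

import numpy as np
from scipy import stats

# Simulated placeholder values standing in for admission thallium
# concentrations (ng/ml) in the three admission groups.
rng = np.random.default_rng(0)
early = rng.lognormal(6.5, 0.8, 8)
moderate = rng.lognormal(6.2, 0.8, 9)
severe = rng.lognormal(5.0, 0.8, 17)

# Shapiro-Wilk normality check for each group.
for name, g in [("early", early), ("moderate", moderate), ("severe", severe)]:
    print(name, stats.shapiro(g).pvalue)

# Skewed data -> Kruskal-Wallis test across the three groups.
print(stats.kruskal(early, moderate, severe))

# Fisher's exact test on a 2x2 table (e.g. a symptom present/absent in two
# groups), with a Bonferroni-adjusted alpha for the three pairwise contrasts.
table = [[6, 2], [4, 13]]
odds, p = stats.fisher_exact(table)
print(p, p < 0.05 / 3)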
Baseline characteristics
A total of 34 patients with thallium poisoning were included in this study, and all patients were diagnosed in our department.
Detailed patient information is summarized in Table 1. The mean patient age was 39.9 ± 13.2 years; 3 patients were under the age of 18. This study included 19 men (55.9%). Among the cohort, 8 patients (23.5%) were admitted early, 9 (26.5%) experienced a moderate delay in admission, and 17 (50%) experienced a severe delay in admission.
The time from symptom onset to admission to our department was 13 days (IQR, 7.5-26). Of the 34 patients, 22 were poisoned through criminal activities, 11 had unexplained poisoning, and 1 was suicidal. All patients were poisoned by ingestion. Three patients were admitted directly to our hospital, and 18 had been referred from a number of hospitals (none were clearly diagnosed, and all had been given symptomatic supportive treatment), with a median referral frequency of 2.5 (IQR, 1-3) times. Additionally, 13 patients were misdiagnosed with other diseases, including Guillain-Barre syndrome (50%), gastritis (41.7%), rheumatic immune disease (8.3%), skin disease (8.3%), and mental illness (8.3%). The initial symptom of 18 (52.9%) patients was abdominal pain, 12 (35.3%) patients had pain in the extremities, and 4 (11.8%) patients had abdominal distension. In addition, due to the concealment of thallium poisoning, most patients were unable to determine the exact time of poisoning or the poisoning dose. As shown in Table 1, patients with severely delayed admission had significantly lower blood and urinary thallium concentrations at admission compared to those with early admission. The proportion of patients with abdominal pain at admission was significantly higher among those admitted early, whereas the proportion with hair loss was significantly greater among those with severely delayed admission. In terms of neurological symptoms, with the exception of 2 children and a patient with a severe coma, all patients developed pain in the extremities. Of all the patients, 2 developed difficulty urinating (catheterization was performed); 2 developed vulvar pain (excluding vulvitis); and 3 developed significant central nervous system damage on admission (one patient had memory loss, one had confusion, and one was in a deep coma; all 3 experienced severely delayed admission).
Clinical examinations.
There was no difference between patients with severely delayed admission and those with early admission in terms of any organ injury; however, liver injury was the most common (50%) of the organ injuries. In addition, at admission, the majority of patients (26 patients, 76.5%) had a normal white blood cell count, while 8 patients (23.5%) had elevated white blood cell levels (10.12-22.24 × 10⁹ cells/L). The results of the nervous system examinations were as follows. Eleven patients (32.4%) (2 early admission patients, 3 moderate delay admission patients, and 6 severe delay admission patients) underwent cranial MRI. All were examined by MRI after admission, and the median time to MRI was 14 (IQR, 10-44) days from onset. The results showed that 3 patients had obviously abnormal MRIs, all of whom had experienced a severely delayed admission; 2 of them had central nervous system symptoms on admission (confusion and deep coma), and the other was found to have a personality change during follow-up.
Five patients underwent EEG examination. One of them, who had experienced a moderate delay in admission, showed a slightly abnormal EEG.
Treatment and patient outcomes.
After admission, the treatments received by patients included stomach protection, acid suppression, circulation improvement, nerve nutrition, protection of vital organs, pain relief, and PB. The method of treatment and patient outcomes are presented in Table 2. The median blood thallium concentration of all patients treated with blood purification treatment was 696 (IQR, 293.5-1098.3) ng/ml. Among the early admission patients, seven were given a blood purification treatment based on PB and symptomatic supportive therapy, and the blood thallium concentration of these patients at admission was 1009 (IQR, 649-1981) ng/ml. Among the patients who had a moderate delay in admission, 6 were given blood purification treatments based on PB and symptomatic supportive therapy, and the average blood thallium concentration of these patients at admission was 696 (IQR, 250.5-902.5) ng/ml. Among those who had a severe delay in admission, 5 patients were treated with blood purification based on PB and symptomatic supportive therapy, and 12 patients were treated with Prussian blue alone; the median blood thallium concentration of these patients at admission was 300 (IQR, 289-762.3) ng/ml and 19.7 (IQR, 2.6-108.3) ng/ml respectively. Details on the PB treatment method and blood purification are summarized in Supplemental File 2, http://links.lww.com/MD/D126. None of the patients in this study died after treatment. The total length of hospital stay was 24.7 ± 12.3 days. The blood and urine thallium concentrations of all patients significantly reduced after both treatments, as shown in Table 2. In addition, all patients experienced improvement in symptoms and organ function following treatment.
Follow-up.
Overall, 31 patients (91.2%) were followed up; the median follow-up time was 41 (IQR, 21-96) months. A total of 26 patients (83.9%) recovered well without any sequelae, and 5 of the patients had significant sequelae. Of these 5 patients, 2 who had a moderate delay in admission still had mild pain and numbness in the lower extremities at the time of follow-up. Among the other 3 patients, 1 person developed a personality change, and the patient who was in a deep coma could communicate normally after being treated in our hospital; however, that patient could not stand independently at the time of follow-up. The patient with confusion, due to early discharge, eventually suffered serious sequelae including ataxia, optic nerve damage, and respiratory muscle weakness. All 31 patients grew new hair 1 to 2 months after discharge.
Discussion
Thallium compounds can accumulate in the human body and be rapidly distributed throughout it, including in the skin and hair. [1,9] The common symptoms of thallium poisoning include gastroenteritis, peripheral neuropathy, and hair loss. [2,[9][10][11] Due to complex multi-organ involvement, the symptoms of thallium poisoning are diverse and non-specific; therefore, early diagnosis is difficult, and delays in admission are common. Similar to previous research, [3,4] most of the patients in this study had a delayed admission; specifically, the proportion of those with a severe delay (>14 days) in admission was the largest (median: 25 days), indicating that delayed admission after thallium poisoning remains a serious issue. In contrast to previous studies, we grouped patients according to time from onset of symptoms to hospital admission; a previous study classified thallium poisoning into immediate phase (within hours), intermediate phase (from hours to days), and late phase/residual phase (after 2 weeks). [8] Grouping helped us to better understand the clinical characteristics of patients at different stages, especially those who had a severe delay in admission.
According to the literature, [1,10,11] gastrointestinal symptoms such as abdominal pain occur soon after poisoning, while hair loss might occur 2 to 3 weeks after poisoning. [1,7,12] In this study, those who were admitted early had significantly more symptoms associated with abdominal pain and less hair loss than those who had a delay in admission, indicating that most of the early admission patients were still in the early stage of poisoning; these differences in symptoms might help to judge a patient's time of poisoning. In our study, those who had a delay in admission had lower blood thallium concentrations, and this might be due to the fact that the thallium ions absorbed into the body had been redistributed from the blood circulation to various tissues due to the longer poisoning time. Thallium poisoning can cause damage to multiple organs, especially to the liver, kidneys, and heart. [1,10] Liver, kidney, and heart damage all occurred in patients in this study, with the largest proportion (50%) experiencing liver damage. However, the damage to patients' organs in all cases was not serious, and quickly alleviated after protective organ treatment. In addition, most patients in the present study had normal levels of white blood cells, with only 8 patients showing an increase. Progressive peripheral neuropathies develop into severe painful sensations 2 to 5 days after exposure [1] ; in our study, pain in the extremities occurred in almost all patients. However, because of the lack of systematic pain grading, it was not possible to understand the severity of peripheral neurological symptoms in patients in different groups. Analysis of 16 patients who had a moderate or severe delay in admission and who underwent EMG testing found that most patients with delayed admission had neurogenic damage, indicating that a delay in admission might be associated with serious peripheral nervous system damage. In addition, 3 of the patients who had a severe delay in admission developed central nervous system damage (experiencing symptoms such as memory loss, confusion, and even a deep coma); MRI results and EEGs indicated that injury occurred only in those who had a severe delay in admission. Furthermore, the follow-up results showed that there were no sequelae in those who had an early admission, while some patients who had a delay in admission suffered obvious nervous system sequelae. These results suggest that delayed admission is associated with a high probability of central nervous system injury.
Treatment for thallium poisoning consists of removal from exposure, supportive care, and enhanced elimination. [10] For years, PB has been the most commonly prescribed antidote to treat this poisoning because it interrupts re-adsorption of thallium in the intestine and increases its elimination from the body. [1,10,13] However, its availability is limited in many regions; in China, only a few research institutions have PB reserves. In addition to PB treatment, many studies have reported that blood purification is also an important treatment option. [3,4,6,10,11,14,15] Although studies have shown that blood purification has a low clearance rate of thallium salt, it is still superior to other current removal methods and is therefore recommended for the treatment of thallium poisoning, especially severe thallium poisoning. [10] Many experts have reached a consensus that the earlier blood purification begins, the better the patient outcomes will be. It is best to start within 24 to 48 hours, and blood purification is recommended when the blood thallium concentration is >0.4 mg/L. [10] Some scholars believe that even patients admitted to hospital later than 48 hours should be given blood purification treatment, but no consensus has been reached. [10] A study showed that PB combined with hemodialysis is helpful in treating patients with thallium poisoning and delayed admission. [4] One thallium poisoning case report showed that the condition of a patient who had been poisoned for 3 weeks improved after hemodialysis treatment. [15] Our research supports the existing evidence for the efficacy of blood purification in treating those with delayed admission for thallium poisoning. In this study, all patients with relatively high blood thallium concentrations after admission (696; IQR, 293.5-1098.3 ng/ml) were given a blood purification treatment based on PB treatment. This treatment was given even to those who had a severe delay in hospital admission; the median blood thallium concentration of those patients was 300 (IQR, 289-762.3) ng/ml. After treatment, the condition of all patients greatly improved, and blood thallium concentrations gradually decreased, demonstrating that the treatments used were effective. Further research is needed on the criteria for blood purification, especially for those with a delay in hospital admission. The 3 children in this study were treated with PB alone, due to their age. Their blood and urine thallium concentrations decreased significantly, and the symptoms were generally relieved. However, this is still insufficient evidence for blood purification efficacy, as we did not have a parallel control patient group.
There are some limitations of this study. First, this was a retrospective descriptive analysis of a small patient sample, and the level of argumentation was low. Second, since most patients had delayed diagnoses, it was impossible to assess the clinical features of the early stage of thallium poisoning, and we were unable to collect the dose-response data. Third, there was no control group; therefore, the efficacy of blood purification therapy could not be determined.
In conclusion, the clinical manifestations of thallium poisoning are diverse and non-specific. In addition, this condition is prone to misdiagnosis and delayed treatment. Patients who experience a delay in admission are more prone to serious peripheral nervous system and central nervous system injury. In this study, PB combined with blood purification treatment was associated with the improvement of all patients' condition, even those who experienced a severe delay in admission. | 2019-07-25T13:03:51.629Z | 2019-07-01T00:00:00.000 | {
"year": 2019,
"sha1": "a5893c17758bf48d511a83706f8ba888b3843bb3",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1097/md.0000000000016471",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "16ff500949a18913a5b9ed93f43cb3dd26be97f4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56341178 | pes2o/s2orc | v3-fos-license | Variation in growth, photosynthesis and water-soluble polysaccharide of Cyclocarya paliurus under different light regimes
Wanxia Yang, Yang Liu, Shengzuo Fang, Haifeng Ding, Mingming Zhou, Xulan Shang - As a highly valued, multiple-function tree species, Cyclocarya paliurus is planted and managed for timber production and medical use. The responses of growth, photosynthesis and phytochemical accumulation to the light environment provide useful information for determining suitable habitat conditions for the cultivation of C. paliurus. A split-plot design with five light quality and three light intensity levels was adopted to compare the variation in plant growth, photosynthesis and water-soluble polysaccharide yield in C. paliurus leaves. Both light intensity and quality treatments significantly affected total biomass, photosynthetic rate and water-soluble polysaccharide yield in C. paliurus leaves. Treatments under red light and blue light at 1000 μmol m⁻² s⁻¹ achieved the highest values of biomass growth, photosynthetic rate, specific dry leaf mass per area and accumulation of water-soluble polysaccharide. These results indicate that red light and blue light at the higher light intensity level were effective for increasing plant growth, photosynthesis and production of water-soluble polysaccharide in C. paliurus leaves. Manipulating light conditions might be an effective means to improve biomass and achieve higher water-soluble polysaccharide yields in C. paliurus plantations.
Introduction
Cyclocarya paliurus (Batal) Iljinskaja belongs to the Juglandaceae family and is widely distributed in mountainous regions of sub-tropical China (Fang et al. 2006). Leaves of this plant are traditionally used in China as a medicine or nutraceutical tea because of their unique taste (Birari & Bhutani 2007, Fang et al. 2011). Many studies have demonstrated that C. paliurus has a variety of bioactivities, including hypoglycemic activity (Kurihara et al. 2003), antihypertensive activity (Xie et al. 2006), anti-HIV-1 activity (Zhang et al. 2010), antioxidant activity (Xie et al. 2010), and anticancer activity (Xie et al. 2013). However, most studies on C. paliurus have focused on plant compounds (including triterpenoids, flavonoids, steroids and saponins) and the activities of extracts, whereas less attention has been paid to the silvics of the species (Deng et al. 2012, 2015).

Light intensity and quality are important environmental factors for plant growth and development (Yu et al. 2016). Specifically, changes in light quality strongly affect plant morphological, physiological, and biochemical parameters due to the spectral properties of tissue pigments (Fan et al. 2013). However, the responses of plants to light quality are species specific (Cope & Bugbee 2013). For example, Ouyang et al. (2003) reported that Cistanche deserticola cultured under blue light achieved higher biomass than under red light. Yan et al. (2004) demonstrated that red light improved salidroside production and root growth of Rhodiola sachalinensis. However, Johkan et al. (2012) reported that green light was effective in promoting photosynthesis and plant growth of Lactuca sativa. Thus, it is necessary to determine the optimum light conditions for better growth of C. paliurus.

Polysaccharides are very common natural polymers in plants, animals and microorganisms (Xiao et al. 2011, Cui et al. 2013). Recently, polysaccharides from plants have attracted more and more attention due to their extensive biological activities, such as hypoglycemic activity (Wang et al. 2001), free-radical-scavenging activity (Cui et al. 2013), anticancer activity (Xie et al. 2013), and improvement of immunomodulation activity (Huang & Ning 2010). Due to these biological activities, polysaccharide from C. paliurus leaves has become a focal point for research and development. The structure and antioxidant activities of polysaccharide, as well as of sulfated polysaccharides, from C. paliurus leaves have been investigated by Xie et al. (2010, 2015). Fu et al. (2015) also reported seasonal and genotypic variation of leaf polysaccharide accumulation in C. paliurus, whereas knowledge of environmental effects on polysaccharide accumulation in C. paliurus is limited.

The aims of this study were to quantify and compare the influences of varying light quality under different light intensities on plant growth, photosynthetic capacity, and water-soluble polysaccharide accumulation in C. paliurus leaves. Findings from the study are needed to better understand the responses of C. paliurus growth to different light environments, and to provide a theoretical basis for the standardized cultivation of C. paliurus plants.
Plant material and growth conditions
Seeds of C. paliurus were collected from Tonggu (30° 73′ N, 116° 47′ E), Jiangxi province, China in late October 2014 and were subjected to chemical scarification, exogenous gibberellin A3 (GA3) treatment, and stratification in early January 2015, according to the method proposed by Fang et al. (2006). After a 3-month stratification treatment, the germinated seeds were transplanted into plastic pots (8.5 cm inner diameter, 10 cm height, with holes in the bottom, one seedling per pot) filled with a substrate mixture of perlite: fowl manure: peat: soil (2: 2: 4: 2, v/v/v/v). The substrate was a loam with pH 6.44, organic matter content of 73.3 g kg⁻¹, total N content of 72.35 g kg⁻¹, total P content of 2.19 g kg⁻¹, and total K content of 9.55 g kg⁻¹. Eight weeks later, plants were moved into a climate chamber and then exposed to LED lamps (Guangdong Philips Lamp Co., China).

A split-plot randomized design was used to establish three light intensity levels and five light quality treatments. The three light intensity treatments were subjected to three intensity regimes: L1 (500 ± 30 μmol m⁻² s⁻¹), L2 (750 ± 30 μmol m⁻² s⁻¹), and L3 (1000 ± 30 μmol m⁻² s⁻¹). The light intensity of the LED lamps in each treatment was measured with a LI-6400® system (Li-Cor, Lincoln, NE, USA). The five light quality treatments were WL (white light), BL (blue light), RL (red light), GL (green light) and PL (purple light). Spectral features of the LED lamps were recorded by means of a NIR-VIS spectrometer (Ocean Optics, USA) and are reported in Fig. 1. Each treatment contained 5 replications and 8 plants per replication (plastic pot). All treatments were kept at 25 ± 2 °C and 60% relative humidity (RH) during the day and 22 ± 2 °C and 70% RH at night, with a 12 h dark/light photoperiod. The plants were watered every two days until the end of the experiment.
Growth and biomass assessment
After 5 months of growth in the chamber, growth and biomass assessments of the plants were conducted on October 20, 2015. Intact C. paliurus seedlings in each treatment (5 seedlings) were harvested and separated into shoots and roots for biomass and water-soluble polysaccharide analysis. The leaf area (LA, cm²) of the third and fourth fully-expanded leaves from the top of the shoots was measured at the same time with an area meter (Li-Cor Model 3100®). Biomass samples were dried (70 °C, 48 h) to constant weight and weighed. The total dry mass of each seedling was calculated as the sum of leaf, stem, and root dry weights. The specific leaf mass (SLM) was calculated by dividing dry leaf weight by the corresponding leaf area (Tang et al. 2015), as illustrated in the sketch below.
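Because leaf mass is weighed in grams while LA is measured in cm² and SLM is reported in g m⁻², a unit conversion is required. A minimal sketch (function names and example values are ours, not from the study):

def specific_leaf_mass(leaf_dry_mass_g: float, leaf_area_cm2: float) -> float:
    """SLM (g m^-2) = dry leaf mass / leaf area, converting cm^2 to m^2."""
    return leaf_dry_mass_g / (leaf_area_cm2 * 1e-4)

def total_dry_mass(leaf_g: float, stem_g: float, root_g: float) -> float:
    """Total biomass per seedling as the sum of component dry weights."""
    return leaf_g + stem_g + root_g

# Hypothetical seedling: 0.12 g of leaf dry mass over 35 cm^2 of leaf area.
print(specific_leaf_mass(0.12, 35.0))  # ~34.3 g m^-2, within the range reported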
Measurement of photosynthetic parameters
Fully developed leaves from the top of the shoots were randomly selected for gas exchange measurements, using a LI-6400® portable photosynthesis system (Li-Cor Inc., USA) with a standard leaf chamber equipped with a 6400-02B LED light source. Measurements were conducted at 21% O₂, 380 μmol mol⁻¹ CO₂, 1000 μmol m⁻² s⁻¹ photosynthetically active radiation (PAR), 50% relative humidity and a temperature of 25 ± 2 °C. Photosynthetic rate (Pn) and stomatal conductance (gs) were recorded.
Extraction and measurement of watersoluble polysaccharide
Extraction of polysaccharide from C. paliurus leaves was carried out as described previously by Fu et al. (2015) with slight modifications. Each leaf sample (0.5 g) was extracted with 30 ml of 70% ethanol at 70 °C for 60 min to remove most pigments, small-molecule sugars and impurities. The insoluble residues were separated, dried and then extracted twice with 20 ml distilled water at 100 °C for 75 min. The extracts were filtered, and the filtrate was centrifuged at 5000 × g for 15 min. Finally, the supernatants were combined for measurement.

Water-soluble polysaccharide content was measured using the phenol-sulphuric acid colorimetric method (Dubois et al. 1956), with glucose as a standard and absorbance measured at 490 nm. The concentration of water-soluble polysaccharide was quantitatively determined from the calibration curve. Water-soluble polysaccharide yield per plant was calculated by multiplying the water-soluble polysaccharide content by the leaf biomass per plant.
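A minimal sketch of the calibration and yield arithmetic described above; the standard-curve readings, sample absorbance and per-plant leaf mass below are hypothetical placeholders, not data from the study.

import numpy as np

# Hypothetical glucose standard curve: absorbance at 490 nm vs. mg ml^-1.
glucose = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
a490 = np.array([0.00, 0.14, 0.27, 0.41, 0.55, 0.68])
slope, intercept = np.polyfit(glucose, a490, 1)  # linear calibration fit

def polysaccharide_content(sample_a490, extract_volume_ml, sample_mass_g):
    """Water-soluble polysaccharide content (mg g^-1 dry leaf), illustrative."""
    conc = (sample_a490 - intercept) / slope       # mg ml^-1 in the extract
    return conc * extract_volume_ml / sample_mass_g

# 0.5 g sample extracted into ~40 ml of combined aqueous extract, as in the
# protocol above; the absorbance reading here is made up.
content = polysaccharide_content(0.35, extract_volume_ml=40.0, sample_mass_g=0.5)
yield_per_plant = content * 1.8  # x a hypothetical 1.8 g leaf dry mass per plant
print(round(content, 1), round(yield_per_plant, 1))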
Statistical analysis
Data are reported as the mean ± standard deviation (SD), and all tests were performed using the SPSS® 16.0 statistical software package (SPSS, Chicago, IL, USA). A two-way ANOVA model with light quality and light intensity as the main fixed factors plus a light quality × light intensity interaction term, followed by Tukey's multiple-range test, was performed for biomass accumulation, photosynthesis parameters, and leaf characteristics, as well as the water-soluble polysaccharide yields. The data were tested for normality (Shapiro-Wilk normality test) before the analysis of variance. All statistical analyses were performed at a 95% confidence level.
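A minimal sketch of this two-way ANOVA and post-hoc workflow, using Python's statsmodels in place of SPSS with synthetic placeholder data; the paper's Tukey multiple-range test is approximated here by Tukey's HSD.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic stand-in for the measured data: one row per seedling,
# 5 replications per quality x intensity cell, as in the design above.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "quality": np.repeat(["WL", "BL", "RL", "GL", "PL"], 15),
    "intensity": np.tile(np.repeat(["L1", "L2", "L3"], 5), 5),
})
df["biomass"] = rng.normal(5, 1, len(df))  # placeholder response values

# Two-way ANOVA with a quality x intensity interaction term (type II SS).
model = ols("biomass ~ C(quality) * C(intensity)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc pairwise comparisons among light quality levels (Tukey HSD).
print(pairwise_tukeyhsd(df["biomass"], df["quality"], alpha=0.05))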
Variation in growth and biomass production
Two-way ANOVA showed that both light quality and light intensity treatments, as well as the interaction between them, significantly affected the biomass production and allocation of C. paliurus (Tab. 1). The total biomass per seedling varied among light intensity treatments in the order L3 > L2 > L1, and this trend persisted across the growth of leaves, stems, and roots (Tab. 2). Across the three light intensity levels, the highest values of total biomass were achieved in the BL, RL and WL treatments (Tab. 2). However, the highest value of leaf biomass was achieved in the WL treatment: compared to WL, leaf biomass under RL, GL, BL, and PL decreased by 40.1%, 60.8%, 24.5%, and 54.7%, respectively. The five light quality treatments also produced different biomass allocation among the seedling parts. The highest ratios of leaf to total biomass were observed in treatments PL (48.7%) and WL (41.6%), whereas the greatest combined ratios of root and stem were achieved in treatments RL (76.2%), GL (75.9%) and BL (70.9%), respectively (Tab. 2).

Tab. 1 - Summary of significance levels (two-way ANOVA) for the effects of light quality, light intensity and their interaction on biomass production, water-soluble polysaccharide content and water-soluble polysaccharide yield in Cyclocarya paliurus leaves.
Variation in photosynthesis and leaf characteristics
The seedlings grown under the blue light and red light treatments had significantly higher photosynthetic rate (Pn) and stomatal conductance (gs) values than those under the other light quality treatments (Fig. 2A, Fig. 2B). Across the 15 treatments, the highest Pn values were detected in treatments R3 (7.09 μmol m⁻² s⁻¹) and B3 (6.89 μmol m⁻² s⁻¹), whereas the lowest value was observed in treatment P1 (0.79 μmol m⁻² s⁻¹). Two-way ANOVA showed that both light quality and light intensity treatments, as well as the interaction between them, significantly affected the Pn and gs of C. paliurus (Tab. 3). Meanwhile, there was a significant decrease in Pn and gs under all light qualities over the range of light intensities from 1000 μmol m⁻² s⁻¹ (L3) to 500 μmol m⁻² s⁻¹ (L1; Fig. 2A, Fig. 2B).

Leaf area (LA) and specific dry leaf mass per area (SLM) of C. paliurus were found to differ significantly under the various light quality and intensity treatments (Fig. 2C, Fig. 2D). Moreover, a significant interaction of light quality and intensity was observed for the LA and SLM of C. paliurus (Tab. 3). A light intensity of 500 μmol m⁻² s⁻¹ (L1) resulted in the highest LA, and LA was significantly higher in the PL treatment than in the other light quality treatments (Fig. 2C). The variation in SLM of C. paliurus was consistent with that of Pn in leaves. Across the 15 treatments, the highest SLM values were detected in treatments B3 (35.69 g m⁻²) and R3 (34.51 g m⁻²), whereas the lowest was observed in treatment P1 (3.53 g m⁻²; Fig. 2D).

Tab. 3 - Summary of significance levels (two-way ANOVA) for the effects of light quality, light intensity and their interaction on photosynthetic rate (Pn), stomatal conductance (gs), leaf area (LA), and specific leaf mass per area (SLM) in Cyclocarya paliurus.
Variation in water-soluble polysaccharide content and yield per plant
The highest water-soluble polysaccharide contents were observed in the P3 (44.58 mg g⁻¹) and R3 (43.69 mg g⁻¹) treatments, whereas the lowest contents were found in the P1 (23.31 mg g⁻¹) and W1 (19.09 mg g⁻¹) treatments (Fig. 3). Two-way ANOVA showed that both light quality and light intensity treatments, as well as the interaction between them, significantly affected the water-soluble polysaccharide content of C. paliurus leaves (Tab. 1). The water-soluble polysaccharide content in leaves varied among light intensity treatments in the order L3 > L2 > L1 (Tab. 2).
Based on the leaf biomass and water-soluble polysaccharide content, the integrated effect of light quality and light intensity on the accumulation of water-soluble polysaccharide in leaves per plant was significant (p < 0.05 - Fig. 4). The greatest accumulation of water-soluble polysaccharide in the leaves per plant was achieved in treatment B3 (77.86 mg plant⁻¹), followed by treatment R3 (70.95 mg plant⁻¹), whereas the lowest was found in treatment P1 (11.18 mg plant⁻¹). Compared to treatment B3, the water-soluble polysaccharide accumulation in the other treatments was decreased by 8.9-85.6%. Moreover, two-way ANOVA showed that the light intensity and light quality treatments, as well as their interaction, significantly affected the water-soluble polysaccharide accumulation of C. paliurus (Tab. 1).
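The reported 8.9-85.6% range can be checked directly from the per-plant yields quoted above; a short worked verification, using only the values given in this paragraph:

# Worked check of the reported 8.9-85.6 % decrease relative to treatment B3,
# using the per-plant polysaccharide yields quoted in the text (mg per plant).
b3, r3, p1 = 77.86, 70.95, 11.18

def pct_decrease(x):
    return (b3 - x) / b3 * 100

print(f"R3 vs B3: {pct_decrease(r3):.1f} % decrease")  # ~8.9 %
print(f"P1 vs B3: {pct_decrease(p1):.1f} % decrease")  # ~85.6 %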
Variation in plant growth and photosynthesis
It is generally recognized that light intensity and light quality play an important role in plant growth, photosynthetic capacity, and various aspects of physiology (Müller et al. 2013, Liu et al. 2015). Typically, optimal light irradiance is central to the productivity of plants, as excessively high or low light intensity often impairs photosynthesis and thereby severely limits plant growth (Ma et al. 2015). The present study demonstrated that biomass production of C. paliurus was much lower at light intensity levels L1 (500 μmol m⁻² s⁻¹) and L2 (750 μmol m⁻² s⁻¹) than at L3 (1000 μmol m⁻² s⁻¹ - Tab. 2), indicating that C. paliurus is a heliophyte. The observed growth response of C. paliurus to light intensity was similar to that of many tree species, such as Rauvolfia species and Camptotheca acuminata, which were reported to grow better under higher light irradiance (Cai et al. 2009, Ma et al. 2015).
In higher plants, the perception and regulation of light changes are controlled by a system of photoreceptors, including cryptochromes (blue/UV-A light receptors, 340-520 nm), phytochromes (red/far-red receptors, >520 nm) and phototropins (phot1 and phot2 - Liu et al. 2015). Thus, varying light wavelength produces different growth responses in plants. The percentage absorption of red or blue light by plant leaves is about 90% (Terashima et al. 2009); consequently, plant development is strongly influenced by red or blue light (McNellis & Deng 1995). This was supported by the data in our study, i.e., there were significantly higher values of total biomass under the BL and RL treatments. Similar results were observed in studies of other trees such as C. deserticola (Ouyang et al. 2003) and C. acuminata (Liu et al. 2015).
Photosynthesis, one of the most important chemical processes in higher plants, is directly linked to the production of plant biomass; however, photosynthesis is very sensitive to light conditions (Ma et al. 2015). In this study, the variation in biomass accumulation in C. paliurus grown under different light quality and intensity treatments was closely linked with the photosynthetic rate (Pn). Red light and blue light at 1000 μmol m⁻² s⁻¹ achieved significantly higher Pn and gs values (Fig. 2), suggesting that the photosynthetic rate of C. paliurus plants increased under red and blue light, which was consistent with previous reports in other plants such as C. acuminata and cucumber (Liu et al. 2015, Hernández & Kubota 2016). It has been reported that red light is associated with highly effective light absorption through chlorophyll accumulation for photosynthesis (Evans 1987), while blue light may promote leaf stomatal opening by activating phototropin (Inoue et al. 2010). However, the chlorophyll contents and stomatal opening of C. paliurus leaves under varying light qualities need to be studied further, as we did not measure them in this study.
The plasticity of leaf morphological and physiological characteristics may be crucial to a plant's success in establishing itself in a new environment. Low light intensity may lead to increases in leaf area and seedling height. These changes may maximize the capture of available light to meet the demand for leaf photosynthesis (Steinger et al. 2003). This was supported by the changes of LA in the different light quality and intensity treatments, as we observed the highest LA values at a light intensity of 500 μmol m⁻² s⁻¹, especially under white light and purple light (Fig. 2C). Meanwhile, a higher SLM is often considered an index related to higher leaf photosynthetic capacity and chemical defense (Pearcy & Sims 1994). Accordingly, we suggest that the higher SLM may protect C. paliurus leaves against photoinhibition under the blue and red light treatments.
Variation in water-soluble polysaccharide accumulation
The content of phytochemicals is often induced by environmental factors, including light quality and intensity. For example, leaf camptothecin concentrations in C. acuminata displayed a significant increase under blue light and 50% shading treatments (Liu et al. 2015, Hu et al. 2016). Visible light has been reported to induce proanthocyanidin biosynthesis and affect its composition, whereas UV light specifically induced the biosynthesis of flavonols (Koyama et al. 2012). In our previous studies, flavonoid production in C. paliurus plantations was demonstrated to correlate significantly and positively with total solar radiation (Liu et al. 2015). In the present study, the water-soluble polysaccharide content in C. paliurus leaves also followed the order L3 (1000 μmol m⁻² s⁻¹) > L2 (750 μmol m⁻² s⁻¹) > L1 (500 μmol m⁻² s⁻¹ - Tab. 2). These results support the carbon/nutrient balance theory, i.e., if light becomes limiting, the decline in photosynthesis may limit plant growth and the accumulation of carbon-based phytochemicals (Deng et al. 2012).
The effects of light quality on phytochemical accumulation are more complex and often reported with mixed results (Giliberto et al. 2005, Ohashi-Kaneko et al. 2007).
In the present study, the highest water-soluble polysaccharide contents were observed in the RL and BL treatments across the three light intensity levels, which may be due to the higher percentage absorption of red or blue light by the leaves of C. paliurus and a higher photosynthetic rate (Fig. 2A) related to carbohydrate accumulation (Evans 1987). The goal of silvicultural practices is to obtain a higher water-soluble polysaccharide yield (equal to the water-soluble polysaccharide content multiplied by the leaf biomass). In the present study, treatment under RL and BL at 1000 μmol m⁻² s⁻¹ was the most effective way to induce the accumulation of water-soluble polysaccharide because it resulted in the highest leaf biomass over time (Tab. 2, Fig. 4). Overall, in order to achieve the highest water-soluble polysaccharide yield per area in C. paliurus plantations, it is important to manipulate growing conditions such as light intensity and light quality. However, high-yield production of water-soluble polysaccharide in C. paliurus through manipulating light conditions needs to be further confirmed with well-designed large-scale field tests.
In conclusion, blue light and red light at 1000 μmol m⁻² s⁻¹ achieved the highest total biomass, photosynthetic rate and specific leaf dry mass per area in C. paliurus. Meanwhile, the treatments under blue light and red light at 1000 μmol m⁻² s⁻¹ achieved the highest water-soluble polysaccharide yield per plant, due to the higher polysaccharide content and leaf biomass. These results indicate that manipulating light intensity and quality might be an effective means to obtain higher biomass and water-soluble polysaccharide yield in C. paliurus plantations. | 2018-12-15T10:23:29.190Z | 2017-04-30T00:00:00.000 | {
"year": 2017,
"sha1": "ae7c1d2e18ef2c07b9e4db09bdee888fc6e27d29",
"oa_license": "CCBYNC",
"oa_url": "http://www.sisef.it/iforest/pdf/?id=ifor2185-010",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ae7c1d2e18ef2c07b9e4db09bdee888fc6e27d29",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
237269591 | pes2o/s2orc | v3-fos-license | CAD-CAM complete denture resins: an evaluation of biocompatibility, mechanical properties, and surface characteristics
Objectives: This study evaluated the biocompatibility, mechanical properties, and surface roughness of CAD-CAM milled and rapidly-prototyped/3D-printed resins used for manufacturing complete dentures. Methods: Six groups of resin specimens were prepared: milled-base (MB), milled-tooth shade (MT), printed-tooth shade (PT), printed-base with a manufacturer-recommended 3D-printer (PB1), printed-base with a third-party 3D-printer (PB2), and printed-base in a vertical orientation (PB2V). Human epithelial (A-431) and gingival (HGF-1) cells were cultured and tested for biocompatibility using Resazurin assays. Three-point bending and nanoindentation tests measured the mechanical properties of the resin groups. Surface roughness was evaluated using a high-resolution laser profilometer. ANOVA and post-hoc tests were used for statistical analyses (α = 0.05). Results: There were no significant differences in biocompatibility between any of the investigated groups. MB revealed a higher ultimate strength (p = 0.008), elastic modulus (p = 0.002), and toughness (p = 0.014) than PB1. MT had a significantly higher elastic modulus than PT (p < 0.001). Rapidly-prototyped resin samples with a manufacturer-recommended 3D-printer (PB1) demonstrated higher ultimate strength (p = 0.008), elastic modulus (p < 0.001), and hardness (p < 0.001), and a reduced surface roughness (p < 0.05), when compared with rapidly-prototyped groups using a third-party 3D-printer (PB2). Rapidly-prototyped samples manufactured with a vertical printing orientation (PB2V) revealed a significantly lower elastic modulus than sample groups manufactured using a horizontal printing orientation (PB2) (p = 0.011). Conclusions: Within the limits of this present study, CAD-CAM milled and rapidly-prototyped complete denture resins performed similarly in terms of biocompatibility and surface roughness. However, the milled denture resins were superior to the rapidly-prototyped denture resins with regard to their mechanical properties. Printing orientation and type of 3D-printer can affect the resin strength and surface roughness.
Introduction
For over half a century, the conventional flask-pack-press or compression molding method has been used to fabricate removable complete dentures (CDs). Traditionally, CDs have been processed using polymethylmethacrylate (PMMA) resin with heat polymerization [1]. This method has evolved over the years in response to advancements in the PMMA resin's properties and the associated processing protocols, including the use of auto-polymerizing, microwave-processing, or injection-molding techniques [2]. The principle has remained essentially unchanged: PMMA resin is shaped in the desired mold under pressure and polymerized. However, this protocol has undergone a remarkable transformation in recent years following the introduction of computer-aided design and computer-aided manufacturing (CAD-CAM) procedures for CDs.
The fabrication of CDs using CAD-CAM techniques was first published in the 1990s [3] and has increased dramatically over the last decade [4]. This technology-driven manufacturing process owes its rapid development to several critical factors, including, but not limited to, shifts in clinicians' and dental technicians' behaviors and the use of improved materials, along with a possible decrease in clinical chairside time, total patient visits, and dental laboratory costs [5][6][7]. Evidence in the literature reveals a clear preference for CAD-CAM fabricated CDs by both patients and clinicians [8,9]. Furthermore, evidence in the literature also suggests that, in terms of the trueness of the intaglio surfaces, CAD-CAM fabricated CDs are not inferior to conventionally manufactured CDs [10][11][12][13].
Two manufacturing processes exist for fabricating CDs: a subtractive and an additive technique. The subtractive or CAD-CAM milling process entails milling the CD out of a commercially manufactured, pre-polymerized PMMA disc. Because this disc is manufactured under high pressure and well-controlled conditions, many studies have demonstrated that milled resins show superior mechanical and surface properties [14][15][16], comparable color stability [17][18][19], reduced microbial colonization [20,21] and a lower leach rate of residual monomer [22] compared to compression molding resins. On the other hand, the additive manufacturing process, also known as rapid-prototyping or 3D-printing, involves serial apposition of the liquid resin material on a support structure followed by curing with visible light, ultraviolet light, heat, or laser [23]. This layering and curing process is repeated until the CD form specified in the CAD is achieved.
The CAD-CAM rapid-prototyping process is being increasingly used in the dental field for manufacturing fixed prostheses, surgical templates, occlusal splints, and even CDs [23,24]. However, there are only a few publications that report on the mechanical properties and surface characteristics of CAD-CAM rapidly-prototyped resins used for fabricating CDs [25,26]. Furthermore, there is a paucity of studies comparing the differences in biocompatibility between CAD-CAM milled and rapidly-prototyped CDs. Therefore, this study was undertaken to evaluate the differences in the biocompatibility, mechanical properties, and surface roughness of resins used to manufacture CAD-CAM milled and rapidly-prototyped CDs. The study further evaluated the influence of different printers and the printing orientation on the mechanical properties of rapidly-prototyped resins. The primary null hypothesis set for this in vitro study was that there would be no difference in the biocompatibility, mechanical properties, or surface roughness between CAD-CAM milled and rapidly-prototyped resins used to manufacture CAD-CAM CDs. The secondary null hypothesis set for this study was that there would be no influence of different 3D-printers or the printing orientation on the mechanical and surface properties of rapidly-prototyped resins that are used for the fabrication of CAD-CAM CDs.
Study design
Custom specimens were manufactured using resins employed for the fabrication of CAD-CAM subtractively and additively manufactured CDs and were distributed into the study groups shown in Table 1. Resin specimens were fabricated in the dimensions and numbers required for the various tests (biocompatibility assays: n = 9 per study group, dimensions = 10 × 10 × 2 mm; three-point bending test: n = 5 per study group, dimensions = 65 × 10 × 3 mm; nanoindentation test: n = 5 per study group, dimensions = 11 × 11 × 2 mm; surface roughness test: n = 5 per study group, dimensions = 20 × 20 × 1.5 mm), as shown in Appendix 1.
Pre-polymerized PMMA discs in a base pink shade (AvaDent Denture base puck, AvaDent, Global Dental Science Europe, Tilburg, The Netherlands) and a tooth shade (AvaDent Extreme CAD CAM shaded puck YW10, AvaDent, Global Dental Science Europe, Tilburg, The Netherlands) were used for manufacturing the resin specimens for the milled groups. The discs were sliced using a rotary table saw (Inca, Injecta, Teufenthal, Switzerland) equipped with a 3 mm thick stainless steel circular blade (Oertli Werkzeuge, Höri, Switzerland) and manually reduced to the required dimensions.
Rapidly-prototyped specimens with a manufacturer-recommended 3D-printer (Rapid Shape D30, Rapid Shape GmbH, Heimsheim, Germany) were manufactured using base pink (NextDent Base, Vertex-Dental B.V., Soesterberg, The Netherlands) and tooth shade (NextDent C&B, Vertex-Dental B.V., Soesterberg, The Netherlands) resins for 3D-printing. The samples were printed in a horizontal orientation with a layer thickness of 100 μm. The printed samples were rinsed twice in a 96% ethanol solution in an ultrasonic bath to remove excess material: a first rinse of 3 min was followed by a second rinse in a clean 96% ethanol solution for approximately 2 min. The printed samples were then cleaned, dried, and placed in an ultraviolet light box (LC-3DPrint Box, NextDent B.V., Soesterberg, The Netherlands) for 10 min for additional polymerization. The light box had four blue UV-A lamps (Dulux L Blue, 18W/71) delivering a wavelength of 315 to 400 nm and an output of 43.2 kJ.
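As a quick consistency check (our inference, not a statement from the paper or the lamp manufacturer), the quoted 43.2 kJ output corresponds to four 18 W lamps running for the full 10 min cure cycle:

# Consistency check: 43.2 kJ matches four 18 W lamps over the stated 10 min post-cure.
lamps, power_w, cure_s = 4, 18, 10 * 60
energy_kj = lamps * power_w * cure_s / 1000
print(energy_kj)  # 43.2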
Rapidly-prototyped specimens using a third-party 3D-printer (Form 2, Formlabs, Massachusetts, USA) were manufactured in the same way as the manufacturer-recommended 3D-printer group, with only the pink base resin (NextDent Base, Vertex-Dental B.V., Soesterberg, The Netherlands). Further rapidly-prototyped specimens were manufactured employing a vertical orientation, using the manufacturer-recommended 3D-printer (Rapid Shape D30, Rapid Shape GmbH, Heimsheim, Germany) and the pink base resin (NextDent Base, Vertex-Dental B.V., Soesterberg, The Netherlands). After checking the required dimensions, all specimens were polished manually by a master dental technician as described in our previously published study [27]. After the final polishing, the resin samples were disinfected for culture and mechanical testing. The specimens were first rinsed with normal saline before immersion in a 70% ethanol solution for 30 min, then rinsed twice again in normal saline and dried with sterile cotton. They were sterilized for 15 min under ultraviolet light with a wavelength of 254 nm emitted by a 15-watt UV lamp in the safety cabinet (SafeFAST Premium 212, LogicAir, Saint-Aubin, Switzerland).
Cells
Two cell lines frequently used in models assessing biocompatibility within the gingival area were used separately in a series of three separate experiments run in triplicate: human epithelial cells (A-431, ATCC® CRL-1555™, epidermoid squamous carcinoma cell line) and human gingival cells (HGF-1, ATCC® CRL-2014™, normal primary cells). Cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM, Life Technologies, Carlsbad, CA, USA) supplemented with 10% fetal calf serum (Eurobio, Les Ulis, France), 1% Penicillin/Streptomycin/Fungizone (Life Technologies) and 2% HEPES (Life Technologies). They were used at passages 3-5 for proliferation assays. Four groups of resin samples (n = 9 per study group) were placed in 24-well plates (TPP Techno Plastic Products, Trasadingen, Switzerland), and cells were seeded at a density of 2600/cm². As a control of normal cell growth, cells were seeded directly on culture dish polystyrene. On days 4, 7, 14, and 21, resazurin assays were performed as described before [28,29].
Cell culture and proliferation (Resazurin assay)
The A-431 and HGF-1 cells were cultured on MB, MT, PT, and PB1 substrate plates. For each cell line (A-431 and HGF-1) and each resin specimen group, cells were cultured for 21 days in triplicate. This assay was repeated three times (n = 9) for the two cell lines separately, as described in earlier publications [29]. Resazurin assays were done on days 4, 7, 14, and 21 within each test run. On the day of measurement, resazurin (Resazurin Sodium Salt, Sigma Aldrich, St. Louis, MO, USA) at a concentration of 10 μg/ml was added to the culture media to measure the proliferation of the cells. The cells were maintained at 37 °C and 5% CO₂ for 4 h; resazurin is transformed to resorufin during this incubation. The absorbance of resorufin in the culture media was assessed at 570 and 630 nm. The percentage of reduction of the resazurin was then calculated according to the manufacturer's instructions and used for data analysis.
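For illustration, a hedged sketch of the two-wavelength percent-reduction calculation that resazurin suppliers typically prescribe is shown below. The molar extinction coefficients are the commonly published values at 570/600 nm and are an assumption here; this study read absorbance at 570 and 630 nm and followed its own manufacturer's instructions, so datasheet coefficients for those exact wavelengths would be substituted.

# Hedged sketch of the standard two-wavelength percent-reduction formula for
# resazurin assays. The coefficients below are the widely published values at
# 570/600 nm (an assumption here, not taken from this paper).
E_OX_LO, E_OX_HI = 80586, 117216    # oxidized resazurin at the low / high wavelength
E_RED_LO, E_RED_HI = 155677, 14652  # reduced resorufin at the low / high wavelength

def percent_reduction(a_lo, a_hi, blank_lo, blank_hi):
    """a_*: sample absorbances; blank_*: media-plus-dye control absorbances."""
    num = E_OX_HI * a_lo - E_OX_LO * a_hi
    den = E_RED_LO * blank_hi - E_RED_HI * blank_lo
    return 100.0 * num / den

print(percent_reduction(0.60, 0.35, 0.20, 0.55))  # illustrative values only, ~50.9 %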
Three-point bending test
Three-point bending tests evaluated the mechanical properties of the resin samples from the six groups (MB, MT, PT, PB1, PB2, and PB2V). The resin samples were stored in water at 37 °C for 24 h. The span length of the specimen was 50 mm, and a vertical load was applied at the midpoint of the specimen at a crosshead speed of 1 mm/min by a universal testing machine (AG-X Plus, Shimadzu Corporation). The ultimate strength, flexural elastic modulus, stress at the proportional limit (yield point), flexural strain at the proportional limit, and toughness were determined.
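These flexural quantities follow from the standard beam formulas for a rectangular specimen under three-point loading. The sketch below uses the span and cross-section stated above; the load-deflection arrays are hypothetical stand-ins for the machine output, not the study's data.

import numpy as np

# Standard three-point-bend formulas for a rectangular beam
# (span L = 50 mm, width b = 10 mm, thickness d = 3 mm per the paper).
L, b, d = 50.0, 10.0, 3.0                        # mm
force = np.array([0, 10, 20, 30, 38])            # N (hypothetical)
deflection = np.array([0, 0.4, 0.8, 1.2, 1.7])   # mm (hypothetical)

# Flexural stress at peak load: sigma = 3FL / (2bd^2), in MPa since N and mm are used
ultimate_strength = 3 * force.max() * L / (2 * b * d**2)

# Flexural modulus from the initial linear slope m: E = L^3 * m / (4bd^3)
m = np.polyfit(deflection[:3], force[:3], 1)[0]  # N/mm
flex_modulus = L**3 * m / (4 * b * d**3)

# Toughness corresponds to the area under the stress-strain curve up to fracture
print(f"sigma_u = {ultimate_strength:.1f} MPa, E = {flex_modulus:.0f} MPa")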
Nanoindentation test
A nanoindenter equipped with a Berkovich indenter (CSM Instruments, Peseux, Switzerland) was used to perform the tests on resin samples from five groups (MB, MT, PT, PB1, and PB2). The Berkovich diamond tip was calibrated using a fused silica standard provided by the manufacturer. A load of 8 mN was applied at a rate of 76 mN·min⁻¹. At maximum constant load, a 10-s holding period was imposed. The applied load and penetration depth were continuously recorded during the loading and unloading cycle. Five indentations were placed on each specimen at different random locations. Elastic modulus and hardness were obtained from the unloading portion of the indentation curves using the Oliver and Pharr method [30]. The elastic and plastic energies needed to perform the indents were also estimated. Poisson's ratio was taken as 0.3 to compute the elastic modulus.
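A minimal sketch of the Oliver-Pharr reduction of a single unloading curve is given below, assuming an ideal Berkovich area function with no tip-rounding correction; the depth and stiffness values are hypothetical, while the 8 mN peak load and Poisson's ratio of 0.3 come from the paper.

import math

# Oliver-Pharr reduction of one indent, assuming an ideal Berkovich area
# function A_c = 24.5 * h_c^2 (no tip-rounding correction).
P_max = 8.0e-3            # N   (8 mN peak load, as in the paper)
h_max = 1.2e-6            # m   (max penetration depth, hypothetical)
S = 3.5e4                 # N/m (unloading stiffness dP/dh at P_max, hypothetical)
eps, beta = 0.75, 1.034   # Berkovich geometry constants

h_c = h_max - eps * P_max / S                               # contact depth
A_c = 24.5 * h_c**2                                         # projected contact area
hardness = P_max / A_c                                      # Pa
E_r = math.sqrt(math.pi) * S / (2 * beta * math.sqrt(A_c))  # reduced modulus

# Sample modulus with nu = 0.3 (as in the paper) and a diamond indenter
nu, E_i, nu_i = 0.3, 1141e9, 0.07
E = (1 - nu**2) / (1 / E_r - (1 - nu_i**2) / E_i)
print(f"H = {hardness/1e6:.0f} MPa, E = {E/1e9:.2f} GPa")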
Laser profilometry
Resin samples from five groups were measured and the corresponding profiles were generated. The surface roughness profile (R) was measured using a high-resolution white-light non-contact laser profilometer (CyberSCAN CT 100, Cyber Technologies, Eching-Dietersheim, Germany) with a z-resolution of 20 nm and a lateral resolution of 1 μm. R was calculated using a Gaussian profile filter with the cut-off wavelength (λc) set to 0.8 mm and the sampling length to 4 mm, within each total scanning length of 5.6 mm, as per the specifications of the International Standards Organization (ISO 11562). This total scanning length is then split into five sampling lengths. R is analyzed within this total scanning length, and the commonly measured roughness characteristics, average roughness (Ra) and maximum roughness (Rz or Rmax), were analyzed. Ra is the arithmetic mean value of all heights (peaks and valleys) in the given roughness profile. Rz is the maximum of all roughness depths (distance between the deepest valley and the highest peak) measured within the complete scan length. Another parameter from the surface roughness profile (R) is the mean height of profile elements (Rc), representing the average value of the height of the curve elements along the sampling length.
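A sketch of how Ra and Rz can be evaluated from a raw profile is shown below, approximating the ISO Gaussian profile filter with a one-dimensional Gaussian; the profile file and grid spacing are hypothetical placeholders for the instrument export.

import numpy as np
from scipy.ndimage import gaussian_filter1d

z = np.loadtxt("profile.txt")      # heights in um, hypothetical export
dx = 0.001                         # mm per sample (1 um lateral resolution)
cutoff = 0.8                       # mm (lambda_c, as in the paper)

# Separate roughness from waviness: roughness = raw minus long-wave mean line.
# sigma ~= 0.1874 * lambda_c reproduces the 50 % transmission of the ISO Gaussian.
sigma = 0.1874 * cutoff / dx       # kernel standard deviation, in samples
roughness = z - gaussian_filter1d(z, sigma)

# Split the evaluation length into five sampling lengths of lambda_c each
samples = np.array_split(roughness[: int(5 * cutoff / dx)], 5)
Ra = np.mean([np.mean(np.abs(s - s.mean())) for s in samples])
Rz = np.mean([s.max() - s.min() for s in samples])
print(f"Ra = {Ra:.3f} um, Rz = {Rz:.3f} um")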
Statistical analysis
The collected data were verified for normal distribution using the Kolmogorov-Smirnov test and compared for statistical significance using two-way ANOVA and a post hoc test (Tukey's HSD test). The level of statistical significance for all tests was set at p < 0.05. Statistical analyses were performed using statistical software (Ver. 25.0, IBM SPSS Statistics, IBM, NY, USA).
Biocompatibility assays
Both epithelial cells (A-431) and gingival cells (HGF-1) grew gradually, with around a 3- to 4-fold increase from day 4 to day 21, which is the same trend as the control group on plastic (Fig. 1). For A-431 from day 4 to day 21, MB showed a 3.9-fold increase, MT a 4.0-fold increase, PT a 3.5-fold increase, and PB1 a 3.5-fold increase. For HGF-1 from day 4 to day 21, MB showed a 2.9-fold increase, MT a 3.5-fold increase, PT a 3.4-fold increase, and PB1 a 2.8-fold increase. However, the two-way ANOVA results in Table 2 show that there was no statistical difference among the different resin groups (MB, MT, PT, and PB1) from day 4 to day 21 for either A-431 (F(3,128) = 0.500, p = 0.6829) or HGF-1 (F(3,127) = 2.035, p = 0.1123).
Mechanical properties
Milled resins demonstrated a higher ultimate strength than the printed resins; the resins printed with the manufacturer-recommended 3D-printer revealed a higher ultimate strength than the resins printed with a third-party 3D-printer and those printed with a vertical orientation (Fig. 2A). Post-hoc comparisons in Table 3 show that MB had a higher ultimate strength than PB1 (p = 0.0083), MT had a higher ultimate strength than PB1 (p = 0.0071), and PB1 had a higher ultimate strength than PB2. However, there was no statistical significance between MB-MT, MB-PT, MT-PT, PT-PB1, and PB2-PB2V. Milled resins also demonstrated a higher elastic modulus than the printed resins; the resins printed with the manufacturer-recommended 3D-printer showed a higher elastic modulus than the resins printed with a third-party 3D-printer and the resins printed with a vertical orientation. Post-hoc comparisons in Table 3 show that MB had a higher elastic modulus than PT (p = 0.0001), MB had a higher elastic modulus than PB1 (p = 0.0025), MT had a higher elastic modulus than PT (p < 0.0001), MT had a higher elastic modulus than PB1 (p = 0.0010), PB1 had a higher elastic modulus than PB2 (p < 0.0001), and PB2 had a higher elastic modulus than PB2V (p = 0.0112). However, there was no statistical significance between MB-MT and PT-PB1 (Fig. 2B).
Milled resins demonstrated higher toughness than the printed resins; the resins printed with the recommended 3D-printer showed toughness similar to the resins printed with a third-party 3D-printer and the resins printed with a vertical orientation (Fig. 3A). Post-hoc comparisons in Table 3 show that MB had higher toughness than PB1 (p = 0.0137). However, there was no statistical significance between MB-MT, MB-PT, MT-PT, MT-PB1, PT-PB1, PB1-PB2, and PB2-PB2V.
The milled resins showed a yield point similar to the printed resins; resins printed with the recommended 3D-printer had the same yield point as the resins printed with a third-party 3D-printer and the resins printed with a vertical orientation (Fig. 3B). Additionally, post-hoc comparisons in Table 3 show no statistically significant difference between MB-MT, MB-PT, MB-PB1, MT-PT, MT-PB1, PT-PB1, PB1-PB2, and PB2-PB2V.
Milled resins had the same strain at yield point as the printed resins; resins printed with the recommended 3D-printer had the same strain at yield point as the resins printed with a third-party 3D-printer and the resins printed with a vertical orientation (Fig. 4A). Additionally, post-hoc comparisons in Table 3 revealed no statistically significant difference between MB-MT, MB-PT, MB-PB1, MT-PT, MT-PB1, PT-PB1, PB1-PB2, and PB2-PB2V. Milled resins had the same hardness as the printed resins; the resins printed with the recommended 3D-printer showed higher hardness than the resins printed with a third-party 3D-printer. Post-hoc comparisons in Table 3 show that PB1 had higher hardness than PB2 (p < 0.0001; Fig. 4B). There was no statistically significant difference between MB-MT, MB-PT, MB-PB1, MT-PT, MT-PB1, and PT-PB1. Fig. 5 illustrates the typical profilometer scans of the various study groups. There was no difference between the surface roughness of the milled and the 3D-printed resins. Resin groups fabricated with the manufacturer-recommended 3D-printer were smoother than the resins printed with a third-party 3D-printer. Post-hoc comparisons show that PB1 had a smoother surface than PB2 (Ra: p = 0.0085, Rc: p = 0.0152, Rz: p = 0.0051; Table 3; Figs. 6A, 6B). However, there was no statistically significant difference between MB-MT, MB-PT, MB-PB1, MT-PT, MT-PB1, and PT-PB1.
Discussion
Biocompatibility is a critical consideration for the clinical use of dental materials. Adverse reactions of the oral mucosa in direct contact with introduced foreign materials might result in pain, hypersensitivity, and even allergies or burning mouth sensations [31,32]. As a result, biocompatibility testing must be used to ensure patient safety. The cell lines utilized in the biocompatibility experiments (human epithelial cells (A-431) and human gingival cells (HGF-1)) are well established and used in the laboratory to evaluate the biocompatibility of materials [33,34]. The biocompatibility experiments demonstrated that both types of cells, A-431 and HGF-1, proliferated on both types of resin substrates, milled and printed. The results demonstrated a healthy proliferation of A-431 on both resin substrate groups, with no statistically significant difference observed between them. The HGF-1 cell assay exhibited a similar trend to the A-431 assay, with no statistically significant difference. Thus, based on the results of our current investigation, the null hypothesis regarding biocompatibility cannot be rejected.
Even though both the milling and rapid prototyping processes utilize a digital 3D-image file created with CAD software to fabricate CDs, the two manufacturing approaches are radically different, and each has its own advantages and disadvantages. CDs manufactured from pre-polymerized PMMA discs should theoretically exhibit none of the shrinkage- and porosity-associated failures that are usually encountered with the packing and polymerization processes, because the discs are manufactured under high pressure and at optimal temperature. Additionally, these milled CDs should release less monomer and exhibit improved mechanical and surface properties. Concerning the trueness of the intaglio surfaces, milling techniques are limited by the size of the milling instrument; hence the surface might show a relatively large variability [12]. Intuitively, one might assume that printed resins, which are deposited in their liquid form into the desired shape, may result in a better fit, but at present the evidence is not conclusive. However, the fact that the polymerization process takes place after shaping the material might also be a disadvantage in terms of trueness, as the associated volumetric shrinkage might lift off the palatal plate, thereby compromising the upper CD's suction effect in a clinical context. This phenomenon also occurs in traditional heat-polymerized pack-and-flask CDs and can effectively be compensated by carving a post-dam on the denture. Since in milling techniques the polymerization process takes place before shaping the final CD, future research should investigate whether milled CDs still need the same grinding of a post-dam to achieve suction.
The rapid prototyping technique utilizes unpolymerized liquid resins to fabricate CDs and, once processed, the method requires an additional final light-polymerization step. Polymerization shrinkage and compromised mechanical properties are conceivable during the rapid prototyping workflow, as the complete dentures are not fully polymerized before the final light-polymerization step. When removing the partially polymerized complete denture from the construction platform, deformation of the prosthesis may occur. Additionally, a residual coating of unpolymerized resin is invariably present on the completed prosthesis and must be removed thoroughly with a suitable solvent. The additive manufacturing process is said to have several advantages, including increased accuracy, reduced material waste, and minimal infrastructure costs. However, many studies indicate that rapidly-prototyped CDs show lower trueness of fit than milled CDs [10,[35][36][37][38]. The reduced material waste and low infrastructure costs have not yet been adequately validated. Nonetheless, tabletop 3D-printers are less expensive and easier to transport than milling machines, making them more affordable for private clinics and dental laboratories, as well as for low-income countries where edentulism is prominent and skilled dental laboratory personnel are limited. Additionally, on-site manufacturing would eliminate delivery delays and save shipping costs.
In terms of mechanical properties, the current study revealed that milled resins had significantly superior ultimate strength, elastic modulus, and toughness compared to rapidly-prototyped resins, while there was no significant difference regarding yield point, strain at yield point, and hardness. Hence, the null hypothesis regarding the difference in mechanical properties between the milled and rapidly-prototyped resins can be partially rejected. Furthermore, the current study showed that rapidly-prototyped resins produced with the recommended 3D-printer (Rapid Shape D30, Rapid Shape GmbH, Heimsheim, Germany) had significantly higher ultimate strength, elastic modulus, and hardness than rapidly-prototyped resins produced with the third-party 3D-printer (Form 2, Formlabs, Massachusetts, USA). Also, printing in a vertical orientation revealed a significantly lower elastic modulus. As a result, the null hypothesis regarding the influence of the 3D-printer and the printing orientation on the mechanical properties is rejected. Numerous studies have demonstrated that printing angulation and layer thickness affect the trueness of rapidly-prototyped CDs [24,37,[39][40][41], but no studies exist that have evaluated the effect of printing orientation on the mechanical properties of 3D-printed resins. Therefore, future purpose-built studies evaluating the effect of printing angulation on the mechanical characteristics of printed resins are necessary to bolster the current study's conclusions.
Surface roughness could be one of the factors contributing to microbial colonization of denture surfaces [20,21] and to the color stability of denture materials [17][18][19]. The current study revealed that milled resins had a surface roughness similar to rapidly-prototyped resins; therefore, the null hypothesis regarding the surface roughness of milled and rapidly-prototyped CD resins is not rejected. However, this study did reveal that resins printed with the recommended 3D-printer had significantly smoother surfaces than the resins printed with a third-party 3D-printer. Hence, the null hypothesis regarding the influence of the 3D-printer used on the surface roughness of rapidly-prototyped CD resins is rejected. It is important to bear in mind that only a few CAD-CAM denture resin materials have been investigated in this study. Therefore, the results cannot be generalized to all milled and 3D-printed resins currently available on the market.

Table 3 - Mechanical properties and surface roughness of the different CAD-CAM milled and rapidly-prototyped resin groups.
Conclusions
Within the limits of this present study, the following conclusions are drawn:
1. CAD-CAM milled and rapidly-prototyped complete denture resins are similar in their biocompatibility and surface roughness.
2. CAD-CAM milled denture resins exhibit better mechanical properties than rapidly-prototyped resins.
3. The printing orientation as well as the use of third-party 3D-printers can affect the resin strength and surface roughness.
Source of funding
The laser scanner used in this study was acquired by a generous grant
Declaration of Competing Interest
The authors declare that they have no conflict of interests. | 2021-08-24T06:23:05.566Z | 2021-08-01T00:00:00.000 | {
"year": 2021,
"sha1": "14d98cd5ed1f109e86f29be1796586b4a3ff0bc7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jdent.2021.103785",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5cf74ca825692b4ca28454c5d6cb0cde857a585c",
"s2fieldsofstudy": [
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234003784 | pes2o/s2orc | v3-fos-license | Occurrence of Phytophagous Scarabaeidae (Coleoptera) in a pasture area at the Balsamo municipality, São Paulo, Brazil
Pasture areas have been decreasing over the years in Brazil, largely due to the expansion of agricultural areas, mainly in the Southeast region. Natural or planted pastures are subject to attack by insects that can become pests depending on their level of infestation, and among them are the Scarabaeidae beetles. The scarce information on the species of this family in the region of Bálsamo (São Paulo) makes a study on the occurrence of these species necessary, thus generating data that can contribute to identification and information on possible existing pest species. From September 2018 to August 2019, collections of phytophagous Scarabaeidae species were made in pasture areas with a light trap. A total of 446 adults from the subfamilies Dynastinae, Melolonthinae and Rutelinae were collected. In Dynastinae the genera Bothynus, Coelosis, Cyclocephala, Chalepides and Actinobolus were collected; in Melolonthinae the genera Plectris and Liogenys; and in Rutelinae the genera Anomala, Geniates, Leucothyreus, Lobogeniates, Byrsopolis and Trizogeniates were found. Among the species collected, some are described as pests in crops, including pastures, such as Liogenys suturalis Blanchard, which had the largest number of individuals during the collection period and is indicated as a species worthy of more detailed studies.
As in any kind of plantation, pastures suffer from attacks by insects, which may become pests. Among these are the Scarabaeidae beetles found in soybean and corn plantations (Pérez-Agis et al. 2008; Oliveira et al. 2012), sugar cane (Coutinho et al. 2011), sunflower (Camargo & Amabile 2001), wheat (da Silva Pereira & Salvadori 2011) and pasture areas (Bonivardo et al. 2015; Duchini et al. 2017). The larvae are often responsible for the observed plant damage. These immatures feed on roots, affecting the water and nutrient absorption system of the plants, resulting in a decrease of the plant stand and the productivity of the plantations (Ávila et al. 2014).
There are Scarabaeidae species that stand out for their wide geographic occurrence and the damage caused to plantations. In soybean crops, the damage caused by beetle larvae has been known since the 1980s (Oliveira & Garcia 2003). Species such as Phyllophaga triticophaga (Morón & Salvadori) have been observed causing damage to soybean, wheat, and natural and planted pastures (da Silva Pereira & Salvadori 2011; Valmorbida et al. 2018).
Burmeister, in a pasture area located at the Cassilândia municipality, state of Mato Grosso do Sul. However, not all phytophagous Scarabaeidae species are considered to be pests. Some species are considered beneficial to the environment, such as Bothynus medon (Germar) and Bothynus striatellus (Fairmaire), which build soil galleries that facilitate water infiltration and aid in the incorporation of organic matter (Salvadori & Oliveira 2001; Silva & Salvadori 2004).
In the different regions of Brazil, different Scarabaeidae species have been reported causing damage to cultivated plants as well as benefiting the local environment. Due to the scarce information on the entomofauna of the Bálsamo municipality area, data on the phytophagous Scarabaeidae existing in the area are limited. The data obtained in the present work are needed because of the expansion of local agricultural areas, serving as an important tool to predict which species might become potential pests and which may benefit future local plantations. Thus, the present work recorded the phytophagous Scarabaeidae species present in pasture areas of the Bálsamo municipality.
MATERIAL AND METHODS
The experiment was conducted at the São Luis rural property (20°40'16.13" S, 49°30'51.16" W) in the municipality of Bálsamo, which has 149.881 km² and is located in the northwestern region of the state of São Paulo (Figure 1) (IBGE 2019). The rural property has 80 ha, of which 28 ha are pasture, 39 ha rubber tree and 11 ha native forest, the latter divided into three fragments. This forest is found in an area of ecological tension, caused by the mixing of two different kinds of vegetation, and is classified as SN - Contact Savannah/Seasonal Forest (Figure 2) (IBGE 2019). The region has a tropical Aw climate according to Köppen's classification, with a rainy summer and dry winter, and mean temperatures of 18 °C on cooler days and 27 °C on warmer ones (Rolim et al. 2007). The mean temperature throughout the 51 collection weeks was approximately 26 °C, the accumulated precipitation was 160 mm, and the mean relative humidity was 78% (Figure 3). All meteorological data were obtained from the Instituto Nacional de Meteorologia (INMET 2019). In the pasture area cultivated with palisade grass, Urochloa brizantha cv., a light trap of the "Luiz de Queiroz" type, made with a steel frame and a PVC collector cup and fitted with a 15-watt 6500K fluorescent white lamp, was installed and turned on (Silveira Neto & Silveira 1969). The collections were made once a week, every Friday, from 18:00 to 6:00 of the following day, from September 7th, 2018 until August 30th, 2019, totaling 51 weeks.
After being collected, the Scarabaeidae adults were stored in 300 mL plastic vials in 70% glycerin alcohol until brought to the laboratory of entomology of the Universidade Estadual de Mato Grosso do Sul (UEMS), campus of Cassilândia, where they were pinned, labeled, and sent to Dr. Juares Fuhrmann (Zoology Museum of the Universidade de São Paulo) and Dr. Paschoal Coelho Grossi (Universidade Federal de Pernambuco) for identification.
RESULTS AND DISCUSSION
A total of 446 Scarabaeidae adults were collected, belonging to 19 species distributed among the Dynastinae, Melolonthinae and Rutelinae (Table 1). Of the total, 12 species might become potential pests in plantations, including species of Cyclocephala, Liogenys, Anomala, Leucothyreus, Geniates, Plectris, Chalepides and Lobogeniates. Beneficial species of Bothynus and Coelosis were collected as well. A total of 99.5% of the adults were collected from September 2018 until March 2019, coinciding with the rainy season of the Bálsamo region. From May to August 2019, the occurrence of Scarabaeidae dropped significantly, a fact that may be connected to the dry season, which extends from May to August (Figure 3).
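The relative abundances quoted in the species accounts below follow directly from the reported counts and the total catch of 446 adults; a quick check (small deviations reflect rounding in the original):

# Worked check of the relative abundances quoted in the species accounts,
# using the counts reported in the text (total catch = 446 adults).
total = 446
counts = {
    "Bothynus medon": 18,             # reported as 4 %
    "Bothynus striatellus": 28,       # reported as 6.2 %
    "Coelosis spp.": 47,              # reported as 10.5 %
    "Trizogeniates planipennis": 70,  # reported as 15.6 %
}
for species, n in counts.items():
    print(f"{species}: {100 * n / total:.1f} %")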
In areas of natural pasture in the Brazilian Pampas, 28 species of Scarabaeidae were collected, distributed among the Dynastinae, Melolonthidae and Rutelinae. Below are listed the collected species and their respective subfamilies.
Dynastinae
There are approximately 800 species recorded for this subfamily (Morón et al. 2004).
Bothynus
A total of 18 adults of B. medon were collected from September until October, representing 4% of all insects collected. Pereira et al. (2013b), at the municipality of Aquidauana in the state of Mato Grosso do Sul, collected adults of B. medon during the reproductive flight season, as the captured females had eggs ready to be laid and ovipositing females were observed in the field. B. striatellus had a total of 28 collected adults, representing 6.2% of all insects (Table 1).
Adults of B. medon and B. striatellus were collected by Riehs (2006) in different areas of the Paraná state, with B. medon being the most collected. Riehs (op. cit.) reported that B. medon occurred from December to February, differing from the present study (Table 1). B. striatellus was also collected during December-February, differing from the present study, in which the species was collected in September, October and December (Table 1).
Coelosis
A total of 47 adults of Coelosis were captured. They were identified as Coelosis bicornis (Leske) and C. biloba. Coelosis bicornis represented 10.5% of all insects (Table 1). Coelosis comprises seven species, all from South America with the exception of Coelosis biloba (Linnaeus), which occurs from Argentina to Mexico (Endrödi 1985). Little is known about this genus, although there are records of C. bicornis in degraded and flooded areas (Gasca et al. 2008).
An adult of C. biloba was collected in March (Table 1). This species usually builds its nest in ant nests, where its larvae feed on the fungi cultivated by the ants. Cannibalism of C. biloba adults on larvae of up to the third instar is also recorded (Gasca et al. 2008).
Actinobolus
During October, four adults of Actinobolus trilobus Lüderwaldt were collected (Table 1). Larvae of this species were observed living in termite nests of Nasutitermes (Luederwaldt 1911). Furthermore, its larvae feed on the termite nest wall (Neita-Moreno & Ratcliffe 2011).
Melolonthinae
This subfamily contains the species that cause the most damage to plantations. Adults feed on flowers and leaves, whilst the larvae feed on roots and stalks of grass, corn, pasture and wheat (Ritcher 1966). L. alvarengai was also collected by Dantas et al. (2018) in forest regions of Sergipe.
Geniates
A total of nine adults of Geniates borelli Camerano were collected, representing 2% of all insects (Table 1).
Lobogeniates
A specimen of Lobogeniates sp. was collected in March (Table 1). Adults of Lobogeniates are described as pests of banana, rice and pasture in the Colombian Caribbean region (Pardo-Locarno et al. 2012).
Trizogeniates
A total of 70 adults of Trizogeniates planipennis Ohaus were collected. This species represented 15.6% of all insects (Table 1). The occurrence of T. planipennis has also been recorded in Bahia.
The high incidence of this species during the collection period might be related to its wide distribution, which, according to Carvalho & Grossi (2018), spans the states of Goiás, Minas Gerais and São Paulo, and the Federal District.
The rainy and dry seasons might influence the behavior of the Scarabaeidae, as the highest numbers were recorded during the rainy season (September to April) and the lowest during the dry season (May to August).
The phytophagous Scarabaeidae fauna present in the pasture area of Bálsamo is mainly composed of species that might become pests of the crops cultivated in the region. Among the species considered to have great potential of becoming a pest, L. suturalis was the most abundant, with 118 specimens. This species is described as a pest of plantations.
The presence of beneficial phytophagous Scarabaeidae such as B. medon, B. striatellus and C. bicornis in the region is an important factor for soil nutrition and ecosystem equilibrium. | 2021-05-10T00:04:45.995Z | 2021-01-25T00:00:00.000 | {
"year": 2021,
"sha1": "e9cc5c581b8ed49f69954c9d5821ec39c7327071",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.entomobrasilis.org/index.php/ebras/article/download/v14.e928/1474",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "414df3b99a45d1f64c4af384cf5e79e51f0b1adc",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Geography"
]
} |
236564449 | pes2o/s2orc | v3-fos-license | MOBILE PHONE BRANDS PREFERENCES AMONG STUDENTS: COMPARATIVE STUDIES IN INDONESIA AND MALAYSIA
In recent years, various brands of mobile phones have dominated the Indonesian and Malaysian markets. Most of them are imported from other countries, such as China, America, Europe, and South Korea, and are highly competitive, offering a variety of designs and functions. To compete with other competitors, the Indonesian telecommunication industry needs to focus on brand management, especially on building brand preference. The aim of this research is to examine factors affecting brand preference. Questionnaire surveys were conducted with 200 respondents in Indonesia and Malaysia. Data were analyzed by descriptive statistics and Structural Equation Modeling - Partial Least Squares (SEM-PLS). SEM-PLS was conducted in order to examine the relative impact of the identified factors on brand preference. The major finding is that the factors affecting mobile phone brand preference differ between Indonesian and Malaysian consumers. Indonesian consumers are more focused on brand awareness when they have to choose mobile phone brands. On the other hand, Malaysian consumers are more focused on brand experience.
INTRODUCTION
A brand is a predictor used to establish customers' satisfaction. Brand preference is one of the strategies employed to build brand positioning (Jamal and Al-Marri, 2010). For the sake of market expansion or new product development, industries can use brand preferences as a key factor in allocating resources to develop an effective product (Jamal and Al-Marri, 2010; Alamro and Rowly, 2011).
Brand preferences are determined by several factors. In relation to this, Alamro and Rowly (2011) identify ten antecedents of brand preference: 1) uncontrolled communication, 2) controlled communication, 3) brand personality, 4) price, 5) quality, 6) corporate status, 7) country of origin, 8) satisfaction, 9) perceived risk, and 10) reference group. Additionally, self-image congruence is considered positively significant for customers' satisfaction and brand preferences (Ekinci, 2004; Jamal and Godee, 2001). In order to determine a favorable brand, consumers also consider brand experience as an antecedent of brand preference (Ebrahim et al. 2016). An attractive brand visual provides a good experience to consumers. In addition to brand experience, brand image also influences brand preference. One of the antecedents of a brand image is the attributes of a product; these are antecedents that marketers need to notice (Roseli et al. 2016; Lee and Nguyen 2017). These attributes include quality, price, and country of origin. Country of origin is an important attribute for evaluating brands. Chinese and Russian consumers are generally more concerned with a brand's country of origin, specifically when they purchase luxurious goods (Godey et al. 2012). Essoussi et al. (2011) also state that country of origin influences the preferences of Tunisian consumers for car and television brands. Previous studies showed that building brand preferences is essential in order to compete in a dynamic market, particularly in the high-involvement product category. One example of such a product category is the mobile phone.
Indonesia and Malaysia are potential markets for the mobile phone import sector. The import value of cellular phones could reach 3.158 million (Ministry of Industry, Republic of Indonesia 2016). International Data Corporation (2016) noted that 8.3 million units of mobile phones had spread widely throughout Indonesia. This number had increased by 14.4 percent since 2014. The same has taken place in Malaysia.
International Data Corporation of Asia Pacific (2011) reported that the number of imported mobile phones had increased by 35 percent in Malaysia. A survey by the Indonesian cellular importers' association stated that, during 2010, the supply of various mobile phone brands from China had reached 9.6 million units (80 percent of total imported mobile phones). A total of 200 Chinese mobile phone brands entered the Indonesian market (Sumariyati, 2012). The increase in Chinese mobile phone brands indicates an increasing preference for such products. On the other hand, global brands still exist in the Indonesian market (International Data Corporation Asia Pacific, 2020). The information about the market share of mobile phones in Indonesia is provided in Table 1 (source: IDC 2020). Table 1 indicates that the market share of Chinese mobile phones has increased. Similarly, preference for Chinese mobile phones has also increased in Malaysia. Malaysian consumers' perception of Chinese mobile phones is very positive (Nadia et al. 2016). The number of global brands in the Indonesian and Malaysian markets shows that the telecommunication industry is very competitive (Alamro and Rowly, 2011). In addition, a survey conducted by Agustin et al. (2011) indicated that most youth activities in both countries took advantage of technology resources such as mobile phones, computers, and gadgets. Hair et al. (2003) also chose students as respondents for mobile phone brand issues, as they are knowledgeable about the topic. In addition, the student market is large, and students' consumption behaviors and perceptions resemble those of typical users. Related to these technological advantages, Belwal and Belwal (2009) stated that students feel uncomfortable without their mobile phones, so they keep them in active mode all the time because they are addicted to being available online. This shows that students' consumption pattern has changed: they spend their time using their mobile phones.
The striking difference between the two countries lies in demographic characteristics, namely educational, social, and cultural values. Agustin et al. (2011) reported that Malaysian society has good education, with only 2.5 percent of the population dropping out of school, and the education system is applied equally throughout the country. In Indonesia, however, the education system is still unevenly applied across the nation, partly because of its geographical conditions. In terms of sociocultural factors, Indonesia has a wide variety of tribes and cultures spread all over its territory, with more than 50 ethnic groups. This shows that Indonesian consumers are more heterogeneous than Malaysian consumers. Differences in social and economic factors affect consumers' brand preference (Renganathan et al. 2016).
Most studies have focused on a single factor affecting brand preferences. Therefore, this study focuses on a wide range of factors that affect such preferences. In addition, studies investigating mobile phone brand preferences are rare. As potential markets for mobile phone products, Indonesia and Malaysia have different socioeconomic characteristics; therefore, a comparative analysis between the two countries is a novel contribution to the study of brand preferences. Structural Equation Modeling - Partial Least Squares (SEM-PLS) was used to estimate the relatively complex research model of brand preference. Based on the above elaboration, the problems investigated in this research are (1) what factors affect brand preference among Indonesian and Malaysian consumers, and (2) whether the factors affecting brand preference differ between the two countries.
METHODS
Data were collected through a survey using a questionnaire instrument. Secondary and primary data collection was carried out from April to June 2017 in Indonesia and Malaysia. Jakarta, Bogor, and Depok were chosen because they are cosmopolitan cities in Indonesia as well as markets for internationally branded goods. Similarly, the Klang Valley is the center of trade and industry in Malaysia. Sampling was performed using the convenience technique. This technique was chosen due to the convenience of access and the willingness of respondents to be interviewed (Salkind, 2010). The total sample size was 200 (100 Indonesian students and 100 Malaysian students). The sample size was determined based on the rules of Roscoe (1975), who states several rules for determining the number of samples: (1) the sample size should be more than 30 and less than 500; (2) when the sample is split into categories, each category must have at least 30 samples; (3) for multivariate research (including multiple regression), the sample size should be at least 10 times the number of latent variables in the study. A total of 200 questionnaires were distributed. Descriptive analyses were used to describe respondent characteristics. Factors affecting brand preference were analyzed with the structural equation modeling technique using SmartPLS 3.0.
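As a simple illustration (our own check, using the figures reported in this paper: n = 200 total, 100 per country, and the 14-construct model described below), the design satisfies all three of Roscoe's rules:

# Quick check of the Roscoe (1975) sample-size rules against this design.
n_total, n_per_group, n_latent = 200, 100, 14

rule1 = 30 < n_total < 500        # overall size between 30 and 500
rule2 = n_per_group >= 30         # at least 30 per category
rule3 = n_total >= 10 * n_latent  # >= 10 x number of latent variables (140 <= 200)
print(rule1, rule2, rule3)        # True True True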
The theoretical model was developed based on previous research on the antecedents of brand preference. The proposed model, with 14 constructs and 47 indicators, contributes to explaining brand preference. Table 2 summarizes the most relevant studies supporting the selection of the variables and relations used in the formulation of the model presented in Figure 1.
Based on the literature review on the antecedents of brand preference, this study divided the factors affecting brand preference into three groups, following Duarte and Raposo's (2010) study: consumer behavior is influenced by the consumer himself, the situation, and the object. Likewise, consumers' preference toward a brand is influenced by consumer-oriented factors, situational factors, and brand-related factors.
Consumer-Oriented Factors
There are several antecedents of brand preference related to consumer behavior, such as consumer characteristics, which are the main interest of this study. These are the dominant individual influences guiding brand preference (Duarte and Raposo, 2010). Along the same lines, Jamal and Al-Marri (2010) found that, for automobile products, marketers can position their brands to build a strong brand image that matches consumers' self-image. Self-image congruence can thus be one of the strategies to promote products in two segments: high involvement and low involvement. Moreover, Ekinci (2004) indicates that customers use their desires as a comparison standard to assess their satisfaction. Based on the literature, we conclude that self-image congruence is one of the important factors shaping customer satisfaction. Customer satisfaction, in turn, has a strong influence on brand preference (Hellier et al. 2003; Jamal 2001; Khazanizadeh and Esfidani, 2014; Wen and Hilmi, 2011): a high level of satisfaction will increase consumers' preference for the brand (Hellier et al. 2003).
In addition to customer satisfaction, brand experience also has a strong influence on brand preference (Ebrahim et al. 2016; Niedrich and Swain, 2003). Ebrahim et al. (2016) demonstrated that brand experience is related to consumer psychology: it reflects consumers' responses to various brand stimuli, and the acquired knowledge can be a source of preference, generating evaluations or judgments about a brand. Brand experience is therefore a foundation of brand preference. Other studies examine the antecedents of brand experience, such as brand personality and appearance (Ebrahim et al. 2016; Ramaseshan and Stein, 2014). Brand personality is related to brand symbolism; Ebrahim et al. (2016) point out the importance of brand experience in transferring brand personality into symbolic meaning so that brand preference can be enhanced. In addition to brand personality, appearance relates to the aesthetic design of the brand, including its hedonic attributes, and contributes to consumers' experiential responses.

[Table 2, excerpt: self-image congruence affects customer satisfaction — Ekinci (2004), Jamal (2001), Duarte and Raposo (2010), Jamal and Al-Marri (2010); customer satisfaction affects brand preference — Hellier et al. (2003), Jamal and Goode (2001), Khazanizadeh and Esfidani (2014), Wen and Hilmi (2011).]
Situational Factors
Situational factors are present at a precise moment and place; they result not from the consumer's choice or the object, but from environmental conditions affecting consumer behavior (Belk, 1974). These factors include communication and the social environment. Several authors state that communication leads consumers to favor a brand (Duarte and Raposo, 2010; Alamro and Rowley, 2011). Alamro and Rowley (2011) divided communication into two groups, namely controlled and uncontrolled communication; according to their study, advertising influences consumers' awareness of a brand. Grace and O'Cass (2005) also examined controlled communication in the context of service brands: advertising, one of the controlled communications, has a positive influence on brand attitude. Communication sources are used by consumers to derive brand information; Clark et al. (2009) highlight advertising as a means of providing information to consumers, which increases their awareness of a brand. In addition to controlled communication, uncontrolled communication such as publicity and word of mouth also influences brand awareness (Alamro and Rowley 2011), although Grace and O'Cass (2005) found a negative relationship for uncontrolled communication. Research by Khazanizadeh and Esfidani (2014) suggested that brand awareness, advertisement, and demographic factors also influence brand preference. Perceived quality likewise has an influence on brand preference (Wang, 2013).
Brand awareness involves word of mouth, publicity, and advertisement (Alamro and Rowley, 2011), and it influences brand preference (Khazanizadeh and Esfidani 2014). Khazanizadeh and Esfidani (2014) examined Samsung mobile phone brand preference and found that consumers are attracted to Samsung's audio and visual features through friends, acquaintances, its website, and information collected from social networks. These factors lead consumers to favor the brand. This is in line with the research of Alamro and Rowley (2011), who indicated that consumer awareness has a strong influence on brand preference.
Brand-Related Factors
Brand-related factors include brand attributes and the product itself. Prior research suggests that brand image affects brand preference (Alamro and Rowley, 2011; Khan, 2016; Duarte and Raposo, 2010). Brand image is related to country of origin, price, and quality. Duarte and Raposo (2010) suggest that most consumers use brands to express their lifestyle and prefer brands whose image is closer to their own. Khan (2016) stated that developing smartphone features will increase brand preference. In addition to features, price indicates the image of the brand (Alamro and Rowley, 2011). Wu et al. (2011) also examined the influence of the service quality of private-label brands on their image, indicating that brand image plays a very important role: the service quality of private-label brands can enhance consumers' perception of their image. Similarly to product quality, the country-of-origin variable has a large effect on brand image (Kim et al. 2015).
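Taken together, the relations reviewed in these three subsections imply a structural model that can be summarized as a set of directed paths. The sketch below is our reading of those relations, not the paper's own notation; the construct identifiers are illustrative:

```python
# Hypothesized structural paths, as read from the literature review above.
# Each tuple is (exogenous construct, endogenous construct).
STRUCTURAL_PATHS = [
    ("advertising", "brand_awareness"),
    ("publicity", "brand_awareness"),
    ("word_of_mouth", "brand_awareness"),
    ("price", "brand_image"),
    ("quality", "brand_image"),
    ("country_of_origin", "brand_image"),
    ("appearance", "brand_experience"),
    ("brand_personality", "brand_experience"),
    ("self_image_congruence", "customer_satisfaction"),
    ("brand_awareness", "brand_preference"),
    ("brand_image", "brand_preference"),
    ("brand_experience", "brand_preference"),
    ("customer_satisfaction", "brand_preference"),
]

# The endogenous constructs of the inner model follow directly:
endogenous = sorted({target for _, target in STRUCTURAL_PATHS})
print(endogenous)  # ['brand_awareness', 'brand_experience', 'brand_image', ...]
```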
Reflective Outer Model of Indonesian and Malaysian Consumers
On the initial model, the outer model test criteria were applied. The initial model was improved by examining the coefficients (loadings) between the latent variables and their indicators: coefficient values below 0.7 must be removed from the model. Evaluation of the reflective outer model is performed by comparing each loading factor with this standard value, and indicators below the standard are eliminated iteratively.
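A minimal sketch of this iterative elimination is shown below. The data layout is assumed (a pandas Series of outer loadings for one construct), and the refit callback is a placeholder for re-estimating the PLS model after each removal, which the actual study performs in SmartPLS:

```python
import pandas as pd

def prune_indicators(loadings: pd.Series, refit, threshold: float = 0.7) -> pd.Series:
    """Iteratively drop the weakest indicator until all loadings >= threshold.

    loadings -- outer loadings of the indicators on their assigned construct
    refit    -- callback that re-estimates the model on the kept indicators
                and returns the new loadings (placeholder for SmartPLS here)
    """
    while (loadings < threshold).any():
        weakest = loadings.idxmin()        # eliminate one indicator at a time
        kept = loadings.drop(weakest).index
        loadings = refit(kept)             # re-estimate after each removal
    return loadings
```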
After elimination, the numbers of indicators reflecting each construct in the final outer models are as follows (Indonesia / Malaysia): brand awareness 2/3, advertising 3/3, publicity 3/3, word of mouth 1/3, brand image 3/3, price 2/3, quality 3/3, country of origin 2/2, brand experience 3/4, brand personality 2/4, appearance 3/3, customer satisfaction 4/4, self-image congruence 3/3, and brand preference 4/4. The final outer model for Indonesian consumers is shown in Figure 3.
Outer Model Assessment of Indonesia and Malaysia
The evaluation of the reflective outer model is conducted using four criteria: composite reliability (ρc), Cronbach's alpha, average variance extracted (AVE), and discriminant validity based on cross loadings. All latent variables have composite reliability values above 0.7 (Table 4). In both models (Indonesia and Malaysia), the Cronbach's alpha value for all variables is above 0.7; values above 0.7 indicate that the internal stability and consistency of the latent variable indicators are excellent. Validity, a standard measure of accuracy, is described by the AVE value; the AVE values for the fourteen latent variables are presented in Table 4. The standard AVE value is above 0.5, and all latent variables in both models meet this threshold. The last criterion is the discriminant validity of cross loadings; the cross loading results indicate that, in both models, each indicator correlates most strongly with its own latent variable.
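For reference, the first three criteria can be computed directly from the standardized outer loadings and the raw item scores. The sketch below uses the standard textbook formulas with hypothetical loading values, not the study's SmartPLS output:

```python
import numpy as np

def composite_reliability(loadings):
    """rho_c = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)), for standardized loadings."""
    l = np.asarray(loadings)
    return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    l = np.asarray(loadings)
    return (l ** 2).mean()

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score).

    items -- 2-D array, rows = respondents, columns = indicators of one construct
    """
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative loadings for a four-indicator construct (hypothetical values).
print(composite_reliability([0.82, 0.78, 0.75, 0.80]))  # ~0.87, above the 0.7 standard
print(ave([0.82, 0.78, 0.75, 0.80]))                    # ~0.62, above the 0.5 standard
```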
Inner Model Assessment of Indonesia and Malaysia
Assessment of the inner model is used to test the relationships among the latent variables of the model. Three criteria are used: R² of the endogenous latent variables, path coefficient estimation, and goodness of fit. The R² value of a latent variable shows how much of an endogenous variable can be explained by the exogenous variables; according to Chin (1998), R² values of 0.67, 0.33, and 0.19 indicate substantial, moderate, and weak models, respectively. Based on the bootstrap results (Table 5), there were significant differences between Indonesian and Malaysian consumers. For Indonesian consumers, recognition of the existence of a brand increases brand preference (p-value < 0.05). Macdonald and Sharp (2000) found that enhancing brand awareness is the most appropriate strategy for marketers when consumers have no experience with a certain brand: marketers have to raise consumer awareness of the brand so that it becomes the consumers' main preference.
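The bootstrap test behind Table 5 can be sketched generically as follows. The estimate_path callback is a placeholder for the PLS estimation step (performed internally by SmartPLS in the actual study), and the significance test follows the usual t-statistic-against-bootstrap-standard-error procedure:

```python
import numpy as np
from scipy.stats import norm

def bootstrap_path(data, estimate_path, n_boot=5000, seed=0):
    """Bootstrap one path coefficient and return (estimate, two-sided p-value).

    data          -- respondent-by-indicator matrix (rows are respondents)
    estimate_path -- placeholder callback: fits the PLS model on a sample and
                     returns the path coefficient of interest
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    est = estimate_path(data)
    boots = np.array([
        estimate_path(data[rng.integers(0, n, size=n)])  # resample respondents
        for _ in range(n_boot)
    ])
    t = est / boots.std(ddof=1)            # t-statistic from the bootstrap SE
    return est, 2.0 * (1.0 - norm.cdf(abs(t)))
```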
For Malaysian consumers, in contrast, those who recognize a brand do not necessarily like it (p-value > 0.05). Brand awareness should be enhanced through effective publicity in order to increase consumer preference for the brand (Nicholls & Roslow, 2005). One of the predictors that enhances brand awareness is advertising (p-value < 0.05): the more a brand is advertised, the more people will be aware of its presence. Advertising provides important information for consumers and therefore has a significant influence on brand awareness (Clark et al. 2009). However, this study found that publicity does not help Indonesian or Malaysian consumers recognize the existence of a brand (p-value > 0.05); publicity and word of mouth were not effective in increasing brand awareness.

The other significant difference between Indonesian and Malaysian consumers is the relationship between brand experience and brand preference. Brand experience does not affect brand preference for Indonesian consumers (p-value > 0.05), whereas for Malaysian consumers it does (p-value < 0.05). This suggests that the stimulus from a good brand provides a good experience that can increase consumers' preference for the brand, in line with the result obtained by Ebrahim et al. (2016), and that Malaysian consumers are more sensitive than Indonesian consumers to the stimuli provided by a brand.

For Indonesian consumers, brand awareness and customer satisfaction influence brand preference; for Malaysian consumers, brand experience and customer satisfaction do. One antecedent of customer satisfaction is self-image congruence: Han et al. (2006) state that self-image congruence has direct and indirect effects on brand preference. Brand image does not affect brand preference for either Indonesian or Malaysian consumers. The influence of brand awareness on brand preference shows that Indonesian consumers are more sensitive to a brand through its advertisements; for Malaysian consumers, by contrast, brand awareness does not affect brand preference.
For Malaysian consumers, brand experience influences brand preference. This finding implies that Malaysian consumers are more interested in the visual impression and characteristics of a brand: when consumers see an attractive mobile phone brand, it provides a good experience of the brand. Malaysian consumers prefer brands whose characteristics match their own image, and brand personality serves as a symbol of their lifestyle (Duarte and Raposo, 2010). Niedrich and Swain (2003) also noted that the first experience with a particular brand will raise the brand's standing compared to other brands.
For both Indonesian and Malaysian consumers, customer satisfaction influences brand preference: the more satisfied consumers are with a mobile phone brand, the more their preference for the brand increases. Brand image, by contrast, does not affect brand preference; Indonesian and Malaysian consumers no longer consider brand image when evaluating brand preferences. This finding is in line with a previous report of a decline in the influence of brand image on brand preference (Sakjaviee & Samiee, 2011). Customer satisfaction had the most significant impact on brand preference (p-value < 0.05): in both countries, the more satisfied the consumers of a brand are, the stronger their preference for the brand. Improving consumer satisfaction can also increase consumer loyalty to the brand (Ningsih & Segoro, 2014).
The antecedents of brand image are price, quality, and country of origin; of these, quality and country of origin had an impact on brand image. Khan (2016) emphasizes that, to enhance a smartphone brand's image, marketers have to pay attention to the brand quality that consumers desire. The price factor behaves differently: Indonesian and Malaysian consumers do not see price as one of the factors that determine the image of a brand, so a high price level does not necessarily reflect a good brand image. Brand personality is also a factor that can give a positive impression and experience of a brand. One example is Apple, which is perceived as elegant and exclusive; when consumers use an Apple mobile phone, it gives an exclusive impression. This research also indicates that self-image congruence should be considered part of customer satisfaction (p-value < 0.05): when Indonesian and Malaysian consumers evaluate a brand that suits their self-image, it fulfills their satisfaction. This finding is in line with previous research by Ebrahim et al. (2016).
The last criterion is goodness of fit (GoF). GoF tests model goodness by validating the combined performance of the measurement model (between the latent variables and their indicators) and the structural model (between the latent variables). GoF assessment consists of three categories: small (0.1), moderate (0.25), and substantial (0.36).
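For reference, the GoF index commonly used for PLS path models (as proposed by Tenenhaus et al., 2005; the notation below is ours, not the paper's) is the geometric mean of the average communality (AVE) and the average R²:

```latex
\mathrm{GoF} = \sqrt{\overline{\mathrm{AVE}} \times \overline{R^{2}}}
```

A value of at least 0.36 would thus place the combined measurement and structural model in the substantial category.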
Conclusions
The conclusion that can be drawn from the results and discussion of this research is that Indonesian and Malaysian consumers perceive mobile phone brands differently. The factors affecting brand preference are brand awareness, brand experience, and customer satisfaction. Brand awareness is influenced by advertisement and publicity; brand experience is influenced by appearance and brand personality; and customer satisfaction is influenced by self-image congruence. The factors affecting brand preference differ between Indonesian and Malaysian consumers: Indonesian consumers are more sensitive to brand advertisement, while Malaysian consumers are more interested in brand personality.
Recommendations
Based on this research, there are several recommendations. In planning a marketing strategy, marketers should consider the level of consumers' preference for the brand and pay attention to the factors that affect brand preference, for example by increasing the visibility of a brand through engaging advertisements and by creating attractive designs. Future research could expand the sample size and use a different sampling technique, such as random sampling, so that the results can be generalized to the phenomenon being researched. In addition, future research should consider new samples, new variables, and different products to refine the scales used to measure the constructs. This research has several limitations, namely the sampling technique, the total sample size, and the scope of the comparison.
A complex model of brand preference was examined using structural equation modeling. The analysis shows that several factors contribute to brand preference. The strongest difference between the two models is that Indonesian consumers are more sensitive to advertising of the brand, while Malaysian consumers are more sensitive to the attractive characteristics of the brand. The finding that brand image does not affect brand preference indicates that, in a complex model, factors that are individually significant can lose their power when assessed together with other factors because of interaction effects. This view is nevertheless a starting point for understanding mobile phone brand preferences among Indonesian and Malaysian consumers.
Managerial Implications
There are differences in the perception of mobile phone brands between Indonesian and Malaysian consumers.
Various factors affect brand preference: advertising, publicity, brand awareness, quality, country of origin, brand personality, appearance, brand experience, customer satisfaction, and self-image congruence. To win the competition in the two countries (Indonesia and Malaysia), marketers should apply different marketing strategies. In Indonesia, the level of brand preference will increase as consumers become more aware of the existence of the brand; to increase brand awareness, marketers must increase the frequency of advertising, since attractive advertising raises consumer awareness of the brand. In Malaysia, brand experience also influences brand preference. Brand experience includes brand sensation, feeling, and cognition, and is influenced by appearance and brand personality. A brand must have unique characteristics that differentiate it from its competitors, so marketers should create unique brand characteristics in order to attract consumers and give the brand an identity of its own.
Consumer satisfaction will also increase brand preference among Indonesian and Malaysian consumers. Thus, marketers should pay attention to consumers' desires: if those desires are fulfilled, consumers will feel satisfied with the brand. Marketers should pay attention to all aspects and attributes of a brand. In addition, the fit between brand characteristics and self-image is very important: self-image congruence with the brand used will increase consumers' satisfaction. Hence, marketers should adjust the brand image to the intended target market. One example is that students prefer the design and features of elegant mobile phone brands, so marketers should create elegantly designed products.
"year": 2021,
"sha1": "a244bb4e7b2b020187a894c9c81a3475b1414fca",
"oa_license": "CCBY",
"oa_url": "https://journal.ipb.ac.id/index.php/brcs/article/download/35979/21811",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "34cd40d3fb5f5c88d0eec0b975f4f5c8e8c19c89",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |